AI lie detectors are better than humans at spotting lies

This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. 

Can you spot a liar? It’s a question I imagine has been on a lot of minds lately, in the wake of various televised political debates. Research has shown that we’re generally pretty bad at telling truth from lies.
 
Some believe that AI could help improve our odds and do better than dodgy, old-fashioned techniques like polygraph tests. AI-based lie detection systems could one day be used to help us sift fact from fake news, evaluate claims, and potentially even spot fibs and exaggerations in job applications. The question is whether we will trust them, and whether we should.

In a recent study, Alicia von Schenk and her colleagues developed a tool that was significantly better than people at spotting lies. Von Schenk, an economist at the University of Würzburg in Germany, and her team then ran experiments to find out how people used it. In some ways the tool was helpful: the people who made use of it were better at spotting lies. But it also led them to make a lot more accusations.

In their study published in the journal iScience, von Schenk and her colleagues asked volunteers to write statements about their weekend plans. Half the time, people were incentivized to lie; a believable yet untrue statement was rewarded with a small financial payout. In total, the team collected 1,536 statements from 768 people.
 
They then used 80% of these statements to train an algorithm on lies and truths, using Google’s AI language model BERT. When they tested the resulting tool on the final 20% of statements, they found it could successfully tell whether a statement was true or false 67% of the time. That’s significantly better than a typical human; we usually only get it right around half the time.
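The evaluation protocol described above (an 80/20 train/test split, then accuracy measured against ground truth) can be sketched in plain Python. This is an illustrative toy, not the authors' code: the real system fine-tuned BERT on the statement text, while the `toy_classifier` below simply guesses at random, so its accuracy lands near the 50% human baseline rather than the model's 67%.

```python
import random

random.seed(0)

# Toy corpus standing in for the study's 1,536 labeled statements
# (label True = truthful, False = deceptive).
statements = [(f"statement {i}", i % 2 == 0) for i in range(1536)]
random.shuffle(statements)

# 80/20 train/test split, as in the study.
split = int(0.8 * len(statements))
train, test = statements[:split], statements[split:]

def toy_classifier(text):
    # Hypothetical stand-in for the fine-tuned BERT model:
    # it guesses at random, so expect roughly chance-level accuracy.
    return random.random() < 0.5

# Accuracy on the held-out 20%: fraction of labels predicted correctly.
correct = sum(toy_classifier(text) == label for text, label in test)
accuracy = correct / len(test)
print(len(train), len(test), round(accuracy, 2))
```

Swapping the random guesser for a real text classifier trained on `train` is what turns this skeleton into the study's actual setup.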
 
To find out how people might make use of AI to help them spot lies, von Schenk and her colleagues split 2,040 other volunteers into smaller groups and ran a series of tests.
 
One test revealed that when people are given the option to pay a small fee to use an AI tool that can help them detect lies—and earn financial rewards—they still aren’t all that keen on using it. Only a third of the volunteers given that option decided to use the AI tool, possibly because they’re skeptical of the technology, says von Schenk. (They might also be overly optimistic about their own lie-detection skills, she adds.)
 
But that one-third of people really put their trust in the technology. “When you make the active choice to rely on the technology, we see that people almost always follow the prediction of the AI… they rely very much on its predictions,” says von Schenk.

This reliance can shape our behavior. Normally, people tend to assume others are telling the truth. That was borne out in this study—even though the volunteers knew half of the statements were lies, they only marked out 19% of them as such. But that changed when people chose to make use of the AI tool: the accusation rate rose to 58%.
 
In some ways, this is a good thing: these tools can help us spot more of the lies we come across in our lives, like the misinformation we encounter on social media.
 
But it’s not all good. A higher accusation rate could also undermine trust, a fundamental aspect of human behavior that helps us form relationships. If the price of accurate judgments is the deterioration of social bonds, is it worth it?
 
And then there’s the question of accuracy. In their study, von Schenk and her colleagues were only interested in creating a tool that was better than humans at lie detection. That isn’t too difficult, given how terrible we are at it. But she also imagines a tool like hers being used to routinely assess the truthfulness of social media posts, or hunt for fake details in a job hunter’s resume or interview responses. In cases like these, it’s not enough for a technology to just be “better than human” if it’s going to be making more accusations. 
 
Would we be willing to accept an accuracy rate of 80%, where only four out of every five assessed statements would be correctly interpreted as true or false? Would even 99% accuracy suffice? I’m not sure.
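To make that trade-off concrete, here is a back-of-the-envelope calculation (my own, not from the study) that assumes the detector's errors fall evenly on truths and lies and uses the study's 50% lie rate. Even a 99%-accurate detector still falsely accuses a handful of truthful people per thousand statements screened, and at 80% accuracy that number climbs to around a hundred.

```python
# Expected number of truthful statements wrongly flagged as lies,
# assuming errors are split evenly between classes (a simplifying
# assumption for illustration, not a result from the study).
def false_accusations(n_statements, accuracy, lie_rate=0.5):
    truthful = n_statements * (1 - lie_rate)
    return truthful * (1 - accuracy)  # misclassified truths

for acc in (0.67, 0.80, 0.99):
    per_thousand = false_accusations(1000, acc)
    print(f"{acc:.0%} accurate: ~{per_thousand:.0f} false accusations per 1,000")
```

At the scale of social media moderation or résumé screening, even small per-thousand error rates translate into large absolute numbers of wrongly accused people.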
 
It’s worth remembering the fallibility of historical lie detection techniques. The polygraph was designed to measure heart rate and other signs of “arousal” because it was thought some signs of stress were unique to liars. They’re not, and we’ve known that for a long time. That’s why lie detector results are generally not admissible in US court cases. Despite that, polygraph tests have endured in some settings, and have caused plenty of harm when they’ve been used to hurl accusations at people who fail them on reality TV shows.
 
Imperfect AI tools stand to have an even greater impact because they are so easy to scale, says von Schenk. You can only polygraph so many people in a day. The scope for AI lie detection is almost limitless by comparison.
 
“Given that we have so much fake news and disinformation spreading, there is a benefit to these technologies,” says von Schenk. “However, you really need to test them—you need to make sure they are substantially better than humans.” If an AI lie detector is generating a lot of accusations, we might be better off not using it at all, she says.


Now read the rest of The Checkup

Read more from MIT Technology Review’s archive

AI lie detectors have also been developed to look for facial patterns of movement and “microgestures” associated with deception. As Jake Bittle puts it: “the dream of a perfect lie detector just won’t die, especially when glossed over with the sheen of AI.”
 
On the other hand, AI is also being used to generate plenty of disinformation. As of October last year, generative AI was already being used in at least 16 countries to “sow doubt, smear opponents, or influence public debate,” as Tate Ryan-Mosley reported.
 
The way AI language models are developed can heavily influence the way that they work. As a result, these models have picked up different political biases, as my colleague Melissa Heikkilä covered last year.
 
AI, like social media, has the potential for good or ill. In both cases, the regulatory limits we place on these technologies will determine which way the sword falls, argue Nathan E. Sanders and Bruce Schneier.
 
Chatbot answers are all made up. But there’s a tool that can give a reliability score to large language model outputs, helping users work out how trustworthy they are. Or, as Will Douglas Heaven put it in an article published a few months ago, a BS-o-meter for chatbots.

From around the web

Scientists, ethicists and legal experts in the UK have published a new set of guidelines for research on synthetic embryos, or, as they call them, “stem cell-based embryo models (SCBEMs).” There should be limits on how long they are grown in labs, and they should not be transferred into the uterus of a human or animal, the guidelines state. They also note that if, in the future, these structures look like they might have the potential to develop into a fetus, we should stop calling them “models” and instead refer to them as “embryos.”

Antimicrobial resistance is already responsible for 700,000 deaths every year, and could claim 10 million lives per year by 2050. Overuse of broad-spectrum antibiotics is partly to blame. Is it time to tax these drugs to limit demand? (International Journal of Industrial Organization)

Spaceflight can alter the human brain, reorganizing gray and white matter and causing the brain to shift upwards in the skull. We need to better understand these effects, and the impact of cosmic radiation on our brains, before we send people to Mars. (The Lancet Neurology)

The vagus nerve has become an unlikely star of social media, thanks to influencers who drum up the benefits of stimulating it. Unfortunately, the science doesn’t stack up. (New Scientist)

A hospital in Texas is set to become the first in the country to enable doctors to see their patients via hologram. Crescent Regional Hospital in Lancaster has installed Holobox—a system that projects a life-sized hologram of a doctor for patient consultations. (ABC News)

