What lies beneath: Will truth-telling AI reshape society?

June 16, 2023

Imagine a world without liars. Artificial intelligence already detects lies better than the average person, and may soon outstrip the polygraph and other specialist methods that require considerable time, effort and skill to achieve dubious levels of accuracy. Working with a team at the Max Planck Institute for Human Development, TSE’s Jean-François Bonnefon has produced new research highlighting the potential for algorithmic technology to tear holes in the traditional social fabric that covers up dishonesty. His behavioral experiments provide crucial insights into how to manage our transition to a transparent new world in which lies are exposed and accusations flow more freely.

Why are we so reluctant to call someone a liar?

False accusations can be harmful to the accused because of the social stigma of being called a liar. At the same time, the accuser can be held accountable for unjustly tarnishing the accused’s reputation. Without reliable methods for detecting lies – recent studies indicate that people generally perform little better than chance – it may be safer to refrain from accusations that can hurt both parties. This may explain why people typically avoid pointing the finger. Instead, we reach for euphemisms about being “economical with the truth” or Winston Churchill’s “terminological inexactitude”.


How might AI shift the scales and encourage us to hold liars to account?

Lie-detection algorithms can already spot fake reviews with higher-than-chance accuracy. If these technologies continue to improve and become widely available, they may have huge social implications. They could automate the time-consuming process of fact-checking, reducing the harm of false accusations to the accused in a much wider range of contexts. However, the real game-changer may be automated lie detection that reduces the accountability of the accuser.

Imagine a world in which everyone has access to superhuman lie-detection technology, such as Internet browsers that screen social-media posts for lies; algorithms that check CVs for deception; or video-conferencing platforms that give real-time warnings that your interlocutor seems to be insincere. Delegating lie detection or accusations to the algorithm could reduce accusers’ accountability, increase their psychological distance from the accused, and blur questions of liability, resulting in higher accusation rates.

However, we know that people are often reluctant to use algorithms that are not 100% reliable, especially in highly emotive contexts. This resistance may deter widespread adoption of lie-detection technology. 

How did you investigate the disruptive potential of lie detectors?

Using Google’s open-source BERT language model, we trained an algorithm to be more accurate at detecting lies than humans. We then conducted an incentivized experiment in which we measured participants’ propensity to use the algorithm, as well as the impact of that use on accusation rates. We explored different versions of the future, testing people’s behavior in simulations with systematic fact-checking, access to the lie-detection algorithm, or a combination of both.
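To make this concrete, here is a minimal sketch of how a text-based lie detector along these lines could be fine-tuned, using the Hugging Face transformers interface to BERT. The example statements, labels and hyperparameters are illustrative assumptions, not the study’s actual data or training setup.

```python
# A minimal sketch of fine-tuning BERT as a binary truth/lie text classifier,
# in the spirit of the approach described above. All data and settings here
# are illustrative assumptions, not the study's training setup.
import torch
from transformers import (BertForSequenceClassification, BertTokenizerFast,
                          Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)  # label 0 = truthful, 1 = deceptive

class StatementDataset(torch.utils.data.Dataset):
    """Wraps written statements and their truth labels for the Trainer."""
    def __init__(self, texts, labels):
        self.encodings = tokenizer(texts, truncation=True,
                                   padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

# Hypothetical training examples; a real dataset would pair statements
# with verified ground truth (e.g., incentivized reports that can be checked).
train_texts = [
    "I donated half of my bonus to charity this year.",
    "I was stuck in traffic, which is why I missed the meeting.",
]
train_labels = [1, 0]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lie-detector",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=StatementDataset(train_texts, train_labels),
)
trainer.train()

# Once trained, the classifier can flag a new statement as likely deceptive.
model.eval()
inputs = tokenizer("I have never missed a deadline.",
                   return_tensors="pt").to(model.device)
flagged = model(**inputs).logits.argmax(dim=-1).item() == 1
```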

What are your key findings?

Accusation rates modestly increased in all simulations, but our most striking result was that they climbed above 80% for people using an algorithm that flagged a statement as a lie. People who chose to use the algorithm made more false accusations, but the probability of a lie going undetected was much lower in this group. At the same time, low uptake weakened the algorithm’s disruptive impact, as only 30% of our participants elected to use the technology.
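A back-of-the-envelope calculation helps make this trade-off concrete. Every number below – the base rate of lies, the algorithm’s accuracy, and the accusation rates with and without it – is an assumption chosen for illustration, not a parameter estimated in the study.

```python
# Illustrative arithmetic for the trade-off above; every number here is an
# assumption for illustration, not an estimate from the study.
base_rate_lies = 0.5   # fraction of statements that are actually lies
sensitivity = 0.8      # P(algorithm flags the statement | it is a lie)
specificity = 0.8      # P(algorithm stays silent | the statement is true)
accuse_if_flagged = 0.85   # cf. the >80% accusation rate after a flag
accuse_baseline = 0.10     # assumed accusation rate without the algorithm

# Without the algorithm: accusations are rare, so most lies go unchallenged,
# and the few accusations made land on true statements at random.
undetected_base = base_rate_lies * (1 - accuse_baseline)                 # 0.45
false_acc_base = (1 - base_rate_lies) * accuse_baseline                  # 0.05

# With the algorithm: more lies are caught, but imperfect specificity plus
# a high willingness to accuse also produces more false accusations.
undetected_algo = base_rate_lies * (1 - sensitivity * accuse_if_flagged)       # 0.16
false_acc_algo = (1 - base_rate_lies) * (1 - specificity) * accuse_if_flagged  # 0.085

print(f"undetected lies:   {undetected_base:.2f} -> {undetected_algo:.2f}")
print(f"false accusations: {false_acc_base:.2f} -> {false_acc_algo:.2f}")
```

Under these assumed numbers, false accusations rise while undetected lies fall sharply, mirroring the qualitative pattern observed in the experiment.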

What do your results tell us about how to manage this technology?

Lab experiments are not the best tool for estimating long-term cumulative social effects, but behavioral studies can help to anticipate changing social dynamics. Our research underlines the need to use lie-detection algorithms carefully, examining their limitations, benefits and costs before making them widely available.

Higher accusation rates may foster distrust and polarization. However, an advantage of lie-detection algorithms is that they can be properly tested and certified. By making accusations easier, especially if they are reasonably accurate, they may also discourage insincerity and promote truthfulness. Accuracy is crucial, as individuals easily gain false confidence in their own ability to detect lies. Our finding that uptake depends on the perceived accuracy of algorithms – and, in particular, on their false-accusation rates – is an encouraging signal in this direction, suggesting that individuals may be mindful of algorithmic performance and use these tools somewhat responsibly.

Policymakers will also need to keep a close watch on the behavior of organizations. If it is more socially acceptable to accuse outsiders, lie-detection technology may initially be used only between organizations, such as in negotiations with suppliers or clients. However, this may pave the way for its use within an organization, such as in management of human resources.

 

FURTHER READING
‘Lie detection algorithms attract few users but vastly increase accusation rates’ is available as a preprint; for other publications by Jean-François Bonnefon, please visit his TSE webpage.


Article published in TSE Reflect, June 2023