Article

Bad machines corrupt good morals

Nils Köbis, Jean-François Bonnefon, and Iyad Rahwan

Abstract

Machines powered by artificial intelligence (AI) are now influencing the behavior of humans in ways that are both like and unlike the ways humans influence each other. In light of recent research showing that other humans can exert a strong corrupting influence on people’s ethical behavior, worry emerges about the corrupting power of AI agents. To assess the empirical validity of these fears, we review the available evidence from behavioral science, human-computer interaction, and AI research. We propose that the main social roles through which both humans and machines can influence ethical behavior are (a) role model, (b) advisor, (c) partner, and (d) delegate. When AI agents become influencers (role models or advisors), their corrupting power may not (yet) exceed that of humans. However, AI agents acting as enablers of unethical behavior (partners or delegates) have many characteristics that may let people reap unethical benefits while feeling good about themselves, which gives good reason for worry. Based on these insights, we outline a research agenda that aims to provide behavioral insights for better AI oversight.

Keywords

machine behavior; behavioral ethics; corruption; artificial intelligence

Replaces

Nils Köbis, Jean-François Bonnefon, and Iyad Rahwan, Bad machines corrupt good morals, TSE Working Paper, n. 21-1212, May 2021.

Reference

Nils Köbis, Jean-François Bonnefon, and Iyad Rahwan, Bad machines corrupt good morals, Nature Human Behaviour, vol. 5, n. 6, June 2021, pp. 679–685.

Published in

Nature Human Behaviour, vol. 5, n. 6, June 2021, pp. 679–685