Intelligent machines do not really think 'fast and slow' in the sense that humans do under dual-process models of cognition, but this analogy can shape the decisions of the various stakeholders who design, regulate, and use them: engineers, user experience designers, regulators, ethicists, and end users. Because these stakeholders draw on the analogy in overlapping ways, the people who create machines may attempt to emulate or simulate fast and slow modes of thinking, and machines may in turn display complex patterns of behavior aimed at conveying, or pretending, that they are currently relying on their own version of Fast or Slow Thinking, which affects the way end users relate to them. In this opinion article we consider the complex interplay through which these stakeholders can be inspired, challenged, or misled by the analogy between the fast and slow thinking of humans and the Fast and Slow Thinking of machines.
artificial intelligence; machine behavior; dual-process; trust; algorithm aversion; machine ethics
Trends in Cognitive Sciences, vol. 24, n. 12, October 2020, pp. 1019–1027