May 9, 2023, 11:30–12:30
Room Auditorium 4 (First floor - TSE Building)
IAST General Seminar
Over the past decade, governments and organizations around the world have established behavioral insights teams that advocate for randomized experiments. However, recent findings by M. N. Meyer et al., Proc. Natl. Acad. Sci. U.S.A. 116, 10723–10728 (2019) and P. R. Heck, C. F. Chabris, D. J. Watts, M. N. Meyer, Proc. Natl. Acad. Sci. U.S.A. 117, 18948–18950 (2020) suggest that people often rate randomized experiments as less appropriate than the policies they contain, even when they approve of implementing either policy untested and when none of the individual policies is clearly superior. The authors warn that this tendency could lead policymakers to avoid running large-scale field experiments, or to be less transparent about running them, and might contribute to an adverse heterogeneity bias in who participates in experiments. In one direct and six conceptual preregistered replications (total N = 5,200) of the previously published larger-effect studies, using the same main dependent variable but varying scenario wordings, recruitment platforms, and countries, and adding further measures of people's views, we test the generalizability and robustness of these findings. Together, our results indicate that the original findings do not generalize: our triangulation reveals insufficient evidence that people exhibit a common pattern of behavior consistent with relative experiment aversion, supporting recent findings by R. Mislavsky, B. Dietvorst, U. Simonsohn, Mark. Sci. 39, 1092–1104 (2020). Thus, policymakers need not be more concerned about employing evidence-based experimental practices than about universally implementing untested policies.