April 2, 2019, 12:45–13:45
IAST Lunch Seminar
Cooperation among selfish agents can be promoted by allowing agents to condition behaviour on reputation. Social norms – dictating how agents update the reputations of others – are central in determining whether this mechanism is effective. In particular, norms that reward justified defection have been shown to promote cooperation. A major limitation of existing models is that they assume all agents adopt a uniform norm, exogenous to the model. Here we show that when agents can spontaneously adopt novel norms, a learning process will see them drift towards socially undesirable outcomes. We present a model where agents can choose both how to react to reputations and how to assign the reputations of others – making social norms emergent. In this scenario, cooperation can only be achieved when the space of norms is severely restricted. Cooperation based on reputation mechanisms is often criticised due to the costly nature of assigning reputations, or the ability of agents to easily whitewash their reputations. Our result suggests that even if these issues are overcome, enabling cooperation via reputation is likely to require additional mechanisms. I discuss how these results relate to similar results in direct reciprocity, and what modern models for the evolution of human cooperation may look like, aiming to falsify competing hypotheses and embracing stylised facts of human sociality.
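To make the reputation mechanism concrete, here is a minimal illustrative sketch – not the speaker's model – of a donation game in which discriminating agents condition behaviour on reputation, and reputations are updated by a fixed norm that rewards justified defection (the "stern judging" rule is used here as one such norm). Population size, payoffs, and the discriminator strategy are all assumptions chosen for illustration.

```python
import random

# Illustrative sketch (assumed parameters, not the model from the talk):
# a donation game with binary reputations under "stern judging", a norm
# that rewards justified defection. A donor stays in good standing by
# cooperating with a good recipient or defecting against a bad one;
# any other action marks the donor as bad.

random.seed(1)
N = 50           # population size (assumption)
ROUNDS = 20000   # donor/recipient interactions (assumption)
B, C = 2.0, 1.0  # benefit and cost in the donation game (assumption)

reputation = [random.randint(0, 1) for _ in range(N)]  # 1 = good, 0 = bad
payoff = [0.0] * N

def discriminator_cooperates(recipient):
    """Discriminator strategy: cooperate iff the recipient is in good standing."""
    return reputation[recipient] == 1

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    cooperate = discriminator_cooperates(recipient)
    if cooperate:
        payoff[donor] -= C
        payoff[recipient] += B
    # Stern judging: the donor is good iff the action was justified,
    # i.e. cooperation with a good partner or defection against a bad one.
    justified = (cooperate and reputation[recipient] == 1) or \
                (not cooperate and reputation[recipient] == 0)
    reputation[donor] = 1 if justified else 0

print(sum(reputation) / N)  # fraction of agents in good standing
```

In this homogeneous-discriminator sketch the norm is self-reinforcing: every discriminator's action is judged justified under stern judging, so good standing spreads and cooperation is sustained. The talk's point is that this stability breaks down once the norm itself is a trait agents can learn and vary.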