Do you trust recommendations?

March 02, 2018 Digital economy

A specialist in economic theory, Johannes Hörner joined TSE in September 2016 from Yale through the AXA chair, attracted by the Toulouse lifestyle, the TSE project and its unique environment. We asked him about his forthcoming article in the Quarterly Journal of Economics, written with Yeon-Koo Che, on the optimal design of social learning: how to build recommendation systems that are better for everyone collectively, even when they put more strain on some individuals.

What was the idea behind this article?

I met Yeon-Koo Che (Columbia University) at Yale in 2014, when he was visiting. He is a specialist in recommendation systems and I have spent a lot of time working on experimentation. From our discussions arose the idea that it would be interesting to look at the social-learning aspects of recommendation systems, especially for internet platforms.

We know that recommendation systems, such as Google, Pandora or Waze, need a lot of information to improve their results. They need to learn, from user feedback, whether a website is relevant or how good a recently released album is. This need for information gives them an incentive to skew their recommendations to some users, so that they can learn what they need to know and, in return, offer better recommendations overall. We wanted to understand how these two different optima, the collective optimum and the individual optimum, balance against each other.

What are possible strategies for recommendation companies?

It depends a lot on the market they operate in and on the kind of recommendations they make. One way to circumvent the issue, when possible, is to ask experts to weigh in on the recommendations that should be issued. For instance, Pandora asks musicologists to dissect music and assess numerous characteristics, such as rhythm, instrumentation, and so on. This data allows the internet radio service to give good recommendations on newly released music without having to skew some users' playlists to collect their feedback. Michelin, similarly, hands out restaurant recommendations based solely on its experts' taste and judgment.
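To illustrate why expert annotation sidesteps the feedback problem, here is a minimal content-based sketch. It is not Pandora's actual system: the feature names, scores and song identifiers are invented. The point is simply that a brand-new release can be ranked against a user's tastes as soon as experts have scored its attributes, with no listening data on that song required.

```python
import math

# Hypothetical expert-annotated features, in the spirit of Pandora's
# musicologist ratings (the attribute names and values are invented).
CATALOG = {
    "song_a": {"tempo": 0.8, "acoustic": 0.2, "vocals": 0.9},
    "song_b": {"tempo": 0.3, "acoustic": 0.9, "vocals": 0.4},
}

def cosine(u, v):
    """Cosine similarity between two feature dictionaries."""
    dot = sum(u[k] * v[k] for k in u.keys() & v.keys())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(user_profile, catalog):
    """Rank songs by similarity to the user's taste profile."""
    return sorted(catalog,
                  key=lambda s: cosine(user_profile, catalog[s]),
                  reverse=True)

# Taste profile inferred from songs the user already likes.
profile = {"tempo": 0.7, "acoustic": 0.3, "vocals": 0.8}
print(recommend(profile, CATALOG))  # -> ['song_a', 'song_b']
```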

But in many cases experts can't be used, and the best strategy is then to send skewed recommendations to a few users when the company needs to learn something. For example, Waze has an interest in sending an individual driver down a road that could be blocked or slow if the service has no information on its status. That driver's GPS will let Waze know whether the road is congested, allowing the service to update its recommendations for all users.
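The logic here is the classic exploration-exploitation trade-off from the bandit literature. Below is a minimal sketch of that mechanism, not the paper's model: the travel times and the 5% probing rate are invented for illustration. A few drivers are deliberately routed down a road of unknown status; their GPS traces update the service's estimate, and everyone else is then routed accordingly.

```python
import random

random.seed(1)

KNOWN_TIME = 30.0         # minutes on the route the service knows well
TRUE_UNKNOWN_TIME = 45.0  # true time on the uncertain road (hidden from the service)
EXPLORE_RATE = 0.05       # share of drivers sent to probe the uncertain road

estimate = None  # the service's current estimate for the uncertain road
samples = 0

def recommend():
    """Route most drivers on the best-known road, a few on the uncertain one."""
    if estimate is None or random.random() < EXPLORE_RATE:
        return "uncertain"  # this driver is the guinea pig
    return "uncertain" if estimate < KNOWN_TIME else "known"

total_time = 0.0
for _ in range(1000):
    if recommend() == "uncertain":
        # The driver's GPS reports a noisy travel time; update the running mean.
        observed = TRUE_UNKNOWN_TIME + random.gauss(0.0, 2.0)
        samples += 1
        estimate = observed if samples == 1 else estimate + (observed - estimate) / samples
        total_time += observed
    else:
        total_time += KNOWN_TIME

print(f"drivers used as probes: {samples} out of 1000")
print(f"estimated time on the uncertain road: {estimate:.1f} min (truth: {TRUE_UNKNOWN_TIME})")
print(f"average travel time per driver: {total_time / 1000:.1f} min")
```

In this run, a few dozen probed drivers pay the cost of the slow road, while the information they generate spares the other drivers from it.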

What are the issues with such a strategy?

There are two issues with such a strategy. First, it means that some users will sometimes receive terrible recommendations so that other users can benefit from the information they generate. This is a classic case in which firms should try to subsidize individuals who receive these types of recommendations.

The second issue is that users might recognize a skewed recommendation. In the Waze example, a driver could know, or suspect, that the road Waze recommends isn't the best one and decide not to follow the service's directions. This makes users less confident in the service. Even worse, users may realize that the service sometimes uses them as guinea pigs, and then start assessing, for each recommendation, whether it is genuine or a test.

So we tried to understand what happens when a service like Pandora or Waze must keep users from realizing they are being used as guinea pigs: if a recommendation is too obviously a test, users become very wary of the service. We wanted to find out how far companies can go before damaging consumers' trust.
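One way to make "how far companies can go" concrete is to write down the user's obedience constraint. The sketch below is a back-of-the-envelope simplification in the spirit of the paper, with invented numbers, not the authors' exact model: a road is good with prior probability p, a safe alternative is worth s, and the platform already knows the truth with some probability from past feedback. A rational user follows a recommendation only if the road is good often enough conditional on being recommended, which caps how often the platform can run tests.

```python
def max_experiment_rate(p, s, informed):
    """Largest test probability q that users will still obey.

    p        : prior probability the uncertain road is good (payoff 1 vs 0)
    s        : payoff of the safe, well-known road, with p < s < 1
    informed : probability the platform already knows the truth from
               past users' feedback (its credibility)

    The platform recommends the road when it knows it is good, or, with
    probability q, when it is still uninformed (a test). A user obeys iff
    P(good | recommended) >= s, which rearranges to
        q <= informed * p * (1 - s) / ((1 - informed) * (s - p)).
    """
    return informed * p * (1 - s) / ((1 - informed) * (s - p))

# A barely credible platform can hardly test anyone...
print(f"{max_experiment_rate(p=0.3, s=0.5, informed=0.1):.3f}")  # 0.083
# ...while a well-informed one is effectively unconstrained (cap above 1).
print(f"{max_experiment_rate(p=0.3, s=0.5, informed=0.6):.3f}")  # 1.125
```

In this toy version, the better informed the platform already is, the more experimentation users will tolerate, which matches the credibility result described below.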

What are the main results from your article?

One of the main conclusions is that companies should take incentives more seriously when they plan their recommendation strategy. Another key insight is that their capacity to extract information from their users depends on their credibility: the more credible the service is perceived to be (and, usually, the more pertinent its recommendations), the more tolerant users will be of skewed results. Finally, we show that this credibility depends on how aware consumers are of being tested, and on the service's capacity to obtain information without relying on consumers' feedback. Something that would be very interesting to analyze in the future is how these platforms' recommendations can feed back on themselves, as when Waze tells all its users to take the same road and thus creates new traffic pressure.