Should we burst our filter bubbles?

March 25, 2024

Disruptive political events such as Brexit, the Yellow Vest movement, and the Capitol riots have fueled concerns that social media is whipping up tensions by dividing us into filter bubbles and echo chambers in which we are only exposed to like-minded views. But policy responses are a leap in the dark while the emergence of political polarization remains poorly understood. In their latest study investigating the complex dynamics of online opinions, TSE-IAST computational sociologist Marijn Keijzer and his coauthors seek to provide empirical evidence to light the way ahead.

Why is it important to understand how people adjust their opinions?

Many observers have warned that opinion polarization is being driven by the tendency of social media platforms like Facebook, Twitter, or YouTube to segregate users, exposing them almost exclusively to content that supports their own ideas. In my 2022 paper (‘The complex link between filter bubbles and opinion polarization’), we call this the “personalization-polarization hypothesis”. On this view, algorithmic filtering creates a feedback loop: diminishing exposure to diverse and balanced information increases confidence in our own worldview, which further limits exposure to foreign opinions.

Various efforts have been made in response, aiming to pop filter bubbles. Bobble and OpinionSpace quantify the bias in users’ information diets and motivate them to seek out alternative sources. Other initiatives, such as MyCountryTalks and Echo Chamber Club, match up individuals with opposing political orientations. Despite their good intentions, these efforts may be fostering rather than reducing polarization. Contrary to the personalization-polarization hypothesis, there is good reason to believe that information challenging your beliefs triggers “negative influence” that in fact deepens cleavages. Strong web personalization would prevent this influence. But empirical research on social influence is limited and has largely been conducted on offline face-to-face communication, which differs in important ways from computer-mediated interaction (see my 2018 paper, ‘Communication in online social networks fosters cultural isolation’).

 

This lack of empirical knowledge allows social-media companies to easily reject claims that their services affect opinion dynamics in undesirable ways. In 2018, then-Twitter CEO Jack Dorsey said his firm needed to stop contributing to filter bubbles. In contrast, Meta CEO Mark Zuckerberg claimed that “[...] some of the most obvious ideas, like showing people an article from the opposite perspective, actually deepen polarization by framing other perspectives as foreign”. Existing studies fail to provide unequivocal evidence for or against either argument.

How does your new study address this lack of empirical evidence?

My 2022 paper explains how models of opinion dynamics can make very different predictions. This is because they are often based on markedly different assumptions about the social and cognitive processes underlying influence. For instance, influence can result from rational responses to others’ behavior or from the communication of persuasive arguments. Does influence decrease opinion differences between users? Do individuals reject opposite views and generate growing differences? Are opinions reinforced to become more extreme when users with similar opinions communicate?

We wanted to test these competing assumptions empirically. We first propose a general theoretical framework that represents the extent to which exposure to online content encourages the adoption of similar ideas or pushes users toward antagonistic opinions. Rather than formally describing the intricate cognitive processes of influence, we boil influence down to its consequences: the change in a user’s opinion after exposure. This approach allows us to switch the competing assumptions on and off and generate competing predictions about whether personalization – defined as the extent to which users are shielded from foreign opinions – increases or decreases polarization.
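To make this concrete, here is a minimal Python sketch of this class of model. Everything in it is an invented illustration of the general framework rather than the specification estimated in the paper: the linear `response` function, the `personalization` exposure rule, and all parameter values are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(42)

def response(opinion, source, assimilation=0.3, repulsion=0.0, threshold=1.0):
    """How much a user's opinion (in [-1, 1]) shifts after seeing a source.
    Setting repulsion > 0 switches the negative-influence assumption on."""
    disagreement = source - opinion
    if abs(disagreement) <= threshold:
        shift = assimilation * disagreement         # pull toward the source
    else:
        shift = -repulsion * np.sign(disagreement)  # push away from far-off views
    return float(np.clip(opinion + shift, -1.0, 1.0))

def simulate(n=100, steps=5000, personalization=0.5, **kwargs):
    """Agents repeatedly see content; `personalization` is the probability
    that a source is filtered to come from the agent's own side."""
    opinions = rng.uniform(-1, 1, n)
    for _ in range(steps):
        i = rng.integers(n)
        if rng.random() < personalization:
            same_side = opinions * np.sign(opinions[i]) >= 0
            j = rng.choice(np.flatnonzero(same_side))  # filtered feed
        else:
            j = rng.integers(n)                        # unfiltered feed
        opinions[i] = response(opinions[i], opinions[j], **kwargs)
    return opinions.std()  # dispersion as a crude polarization measure

# Toggling one assumption flips the macro-level prediction:
print(simulate(personalization=0.2, repulsion=0.0))   # assimilation only
print(simulate(personalization=0.2, repulsion=0.05))  # negative influence on
```

With assimilation only, unfiltered exposure drives opinions together; with negative influence switched on, the same exposure can drive them apart. This is exactly why the two sets of assumptions yield opposite advice about bursting filter bubbles.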

To estimate our model parameters empirically, we conducted two online experiments with 471 residents of the Netherlands recruited via Facebook. Participants’ opinions on two topics – government spending on development aid, and government tax deals with multinationals – were measured before and after exposure to arguments reflecting different ideological positions and moral foundations.
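The link between such before-and-after measurements and the model parameters can be sketched in a few lines: regress observed opinion shifts on pre-exposure disagreement to recover an assimilation-like parameter. The data points and the one-parameter linear form below are hypothetical simplifications, not the paper’s estimation strategy.

```python
import numpy as np

# Hypothetical before/after data: pre-exposure opinion, source position,
# and post-exposure opinion for five participants (all values invented).
pre    = np.array([-0.8, -0.3, 0.1, 0.4, 0.9])
source = np.array([ 0.5, -0.9, 0.7, -0.2, -0.6])
post   = np.array([-0.5, -0.4, 0.3, 0.3, 0.5])

disagreement = source - pre   # how far the source was from the participant
shift = post - pre            # the observed opinion change

# Least-squares slope of shift on disagreement approximates the
# assimilation strength in a linear response function.
coef, *_ = np.linalg.lstsq(disagreement.reshape(-1, 1), shift, rcond=None)
print(f"estimated assimilation strength: {coef[0]:.2f}")
```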

How can this research help us to tackle online polarization?

We show that well-intentioned attempts to pop the filter bubble may backfire, and that outcomes depend critically on the response functions that specify how individuals adjust their opinions. Our findings suggest that exposure to foreign opinions leads to assimilation towards those opinions, moderated by the extent of (perceived) ideological similarity to the source of influence. Relatively large effects were found for the persuasive power of moral foundations. We also find weak evidence in support of repulsion or distancing, but only for very large disagreement. Feeding the estimated parameter values back into our model suggests that reducing personalization would reduce polarization.
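Qualitatively, that estimated pattern can be pictured with a response function of the following shape; the functional form, the cutoff, and all numbers are illustrative stand-ins rather than the estimates reported in the paper.

```python
def sketched_response(opinion, source, strength=0.25, cutoff=1.6, repulsion=0.03):
    """Illustrative response function matching the qualitative pattern:
    assimilation that weakens with ideological distance, plus weak
    repulsion only under very large disagreement. Opinions lie in [-1, 1]."""
    d = source - opinion                # signed disagreement
    similarity = 1 - abs(d) / 2         # crude proxy for perceived similarity
    if abs(d) < cutoff:
        return opinion + strength * similarity * d    # similarity-moderated pull
    return opinion - repulsion * (1 if d > 0 else -1) # weak distancing

# A neutral user shifts toward nearby sources, away from very distant ones:
for source in (0.2, 0.8, 1.0, -1.8):
    print(f"source {source:+.1f} -> shift {sketched_response(0.0, source):+.3f}")
```

Feeding parameters back into the model then simply means running an agent-based simulation, like the earlier sketch, with an empirically shaped rule of this kind instead of an assumed one.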

One of our most surprising empirical observations was that the effect of exposure did not disappear after a week. If anything, it seems to have become stronger. Because this study did not set out hypotheses about long-term opinion change, we cannot draw strong conclusions, but the unexpected finding mirrors results from Bail et al. (2018), who found that prolonged exposure to tweets from supporters of the opposing party increased opinion distance.

This research highlights important avenues for future research on social media polarization. We demonstrate how to empirically test competing assumptions about opinion dynamics and how to study the effects of empirically observed patterns. We show how models of social influence can help address important societal challenges, such as whether filter bubbles need to be “burst”. Our paper also emphasizes that macro-level predictions can be misleading if models are not sufficiently calibrated with empirical data. Our hope is that platform designers and computational social scientists will continue to collaborate in building resilient social media platforms that live up to the virtues of democratic deliberation.

 

FURTHER READING

‘Polarization on Social Media: Micro-Level Evidence and Macro-Level Implications’ and other publications by Marijn are available to view on the TSE and IAST websites.


Article published in TSE Reflect, March 2024