December 12, 2024
By Marco Tulio Daza
Two decades ago, the internet was seen as a tool for advancing democracy and ending authoritarian regimes. The 2011 Arab Spring appeared to validate this optimism, as platforms like Twitter and Facebook played key roles in organizing protests and disseminating information. A decade later, however, democratic movements had failed in most Arab nations, giving way to authoritarianism, civil wars, and instability. In 2024, Freedom House reported that global freedom had declined for the 18th consecutive year, with political rights and civil liberties worsening in 52 countries while only 21 improved. While this decline is driven by multiple factors, few anticipated that social media would contribute by spreading misinformation, increasing societal polarization, and enabling the manipulation of public opinion.
Historically, in liberal democracies, political inertia functioned as a centripetal force, drawing politicians toward the moderate center to appeal to a broader electorate, thereby fostering stability within democratic societies. Recent trends, however, indicate a reversal of this dynamic, with current inertia now propelling leaders and society toward ideological extremes (e.g., populism, ethnonationalism). This shift has facilitated the emergence of a new global wave of authoritarian leaders who ascend to power through democratic means but, upon assuming office, promptly erode democratic norms.
But how have AI and social media contributed to this global trend? There are three underlying causes of this phenomenon.
First, misaligned economic incentives have produced a market failure that has undermined independent journalism. A few dominant companies, led by Facebook and Google, control online advertising. They subsidize users to increase their base, leverage network effects, and generate revenue from advertising fees. This creates a self-reinforcing loop, forming competitive moats and driving 'winner-takes-all' dynamics.
As a result, independent news organizations have seen their revenue streams collapse, threatening their survival and reducing access to trustworthy journalism. By 2020, newspaper circulation in the US had declined by 61% from its peak of 62 million in 1987. Apart from a few prominent outlets, most media organizations have become increasingly susceptible to government interference and political manipulation. While this issue is long-standing in authoritarian regimes like Russia, Cuba, and China, global press freedom has been declining overall. Alarmingly, this trend has accelerated in countries where such manipulation was once rare, including Bolivia, Ecuador, Mexico, Poland, and Hong Kong.
Moreover, digital platforms compete directly with traditional media in content distribution by relying on third-party content. This competition is shifting economic incentives from creating trustworthy content to merely distributing it. As a result, platforms capture most of the revenue, passing only a fraction to content creators, many of whom produce low-quality material. Without reputations or even identities to protect, these creators are especially vulnerable to external pressures. This practice erodes accountability for content—a standard once upheld by legacy media. Consequently, a race to the bottom emerges, promoting content that is often low quality, biased, or manipulative.
This concentration of revenue and influence constitutes a market failure within the media industry. Reduced competitiveness, increased entry barriers, and the financial strain placed on existing competitors undermine the media's role as an independent watchdog—essential to the functioning of democracy.
Second, the absence of ethical leadership in tech companies has enabled the exploitation of cognitive biases through AI, allowing harmful content and polarizing algorithms to flourish unchecked. When scaled up, this exploitation targets one of democracy's most profound vulnerabilities: the psychological tendencies that compromise rational decision-making. As Tversky and Kahneman highlight in their seminal work “Judgment Under Uncertainty: Heuristics and Biases,” cognitive biases drive individuals to make irrational choices, making them susceptible to manipulation.
AI recommendation systems amplify this vulnerability by inferring personality traits and other sensitive information from online behavior, such as consumption needs, political preferences, and even sexual orientation. These detailed profiles are then used to exploit cognitive biases, enabling targeted manipulation and undermining individual autonomy and dignity.
Three fundamental biases are instrumental in enabling these algorithms to succeed. Present bias, or myopic preference, is the tendency to favor immediate gratification (e.g., likes, shares, comments) over long-term concerns like safety, information reliability, and a deeper understanding of issues.
Similarly, confirmation bias drives users to prefer information that aligns with their existing beliefs or values. Algorithms curate content based on users' preferences, resulting in echo chambers, where users primarily interact with like-minded individuals, and filter bubbles, where users are exposed to information that reinforces their beliefs.
Negativity bias—the tendency to prioritize negative experiences or information over positive ones—significantly influences media consumption. Research shows that fake news on Twitter not only reaches more users but also spreads faster than real news. Social media algorithms further amplify this effect by prioritizing emotionally charged content to boost user engagement.
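The dynamic described above can be sketched in a few lines of code. The example below is a toy illustration, not any platform's actual ranking system: the posts, weights, and scores are invented for the sake of the sketch. It shows how a ranker that optimizes only predicted engagement—rewarding belief alignment (confirmation bias) and negativity (negativity bias) while giving no weight to accuracy—pushes confirming, negative content to the top of a feed.

```python
# Toy sketch (not any real platform's algorithm): ranking purely by
# predicted engagement surfaces belief-confirming, negative content.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    alignment: float   # 0..1: how well it matches the user's prior beliefs
    negativity: float  # 0..1: emotional negativity of the content
    accuracy: float    # 0..1: editorial reliability (ignored by the ranker)

def predicted_engagement(post: Post) -> float:
    # Hypothetical weights: confirmation and negativity biases both raise
    # the chance of a click or share; accuracy earns no weight at all.
    return 0.6 * post.alignment + 0.4 * post.negativity

feed = [
    Post("Measured policy analysis", alignment=0.3, negativity=0.1, accuracy=0.90),
    Post("Outrage at the other side", alignment=0.9, negativity=0.9, accuracy=0.20),
    Post("Fact-check of viral claim", alignment=0.4, negativity=0.2, accuracy=0.95),
]

# Sort the feed by predicted engagement, highest first.
ranked = sorted(feed, key=predicted_engagement, reverse=True)
for post in ranked:
    print(f"{predicted_engagement(post):.2f}  {post.title}")
```

In this sketch, the outrage post tops the feed despite being the least accurate item, while the most reliable content sinks—precisely because accuracy never enters the objective being optimized.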
These biases fuel societal polarization by limiting exposure to diverse perspectives and amplifying extreme viewpoints. The resulting lack of open dialogue fragments the public sphere, making consensus increasingly difficult to achieve and further intensifying political radicalization. Even if unintentionally, this environment creates fertile ground for manipulating public opinion through the spread of propaganda and disinformation.
Finally, these technologies are deliberately weaponized by unscrupulous actors to consolidate power. This calculated strategy polarizes societies, erodes journalistic integrity, weakens social trust, and undermines public confidence in liberal democratic systems, further deepening the global democratic crisis.
Throughout history, malicious actors have exploited psychological vulnerabilities to gain power. Roman emperors used "bread and circuses" to distract the public. Similarly, the Nazi regime employed fear, nationalism, and antisemitism to manipulate opinion, even leveraging IBM's Hollerith machines to catalog Jewish families for deportation.
Today, AI and digital media amplify such tactics on an unprecedented scale. Social media has lowered the cost and complexity of spreading misinformation, enabling governments and political parties in some countries to covertly pay influencers and deploy bot networks to disseminate propaganda. Systematic efforts to distort political narratives and spread disinformation have been documented in the US, the UK, Ukraine, and Southeast Asia, among many others.
Social media companies and data holders can exploit user information to push personal political agendas, as seen in the Cambridge Analytica scandal and, more recently, with reported actions on X (formerly Twitter) under Elon Musk's ownership. Unauthorized data access adds another layer of concern, with governments using spyware on messaging platforms for mass surveillance and targeting dissidents. Additionally, advanced AI models enable the rapid and inexpensive production of convincing propaganda, further amplifying manipulation efforts.
Autocrats and demagogues rise to power by disregarding legal and ethical boundaries, exploiting misinformation, and manipulating emotions to bypass rational discourse and erode democratic principles. Social media and AI have become instrumental in distorting public opinion and undermining trust.
To address these challenges, a multifaceted approach involving the participation of the state, social media companies, and society is essential.
First, restoring independent journalism simply by regulating big tech companies to correct economic incentives is a complex endeavor. These companies frequently circumvent regulations and exploit legal loopholes. Moreover, regulations often produce unintended and counterproductive consequences.
A more promising strategy is to support free, independent journalism by treating it as a public good, similar to public health or education. This can be accomplished through public funding or by promoting decentralized platforms. Successful models of independent, publicly or philanthropically funded media exist, such as Germany's Deutsche Welle, the US nonprofit ProPublica, and the Norwegian Broadcasting Corporation. Limiting state influence over the media while ensuring financial sustainability can further strengthen these efforts, preserving the media's role as an independent democratic watchdog.
Second, to address AI-driven polarization on social media, companies should adopt Human-Centered AI (HCAI) principles, prioritizing the design of AI systems that enhance human capabilities while ensuring transparency and user control. Key measures could include advanced content curation options, such as toggling between AI-curated and neutral feeds, cross-platform data portability (e.g., Mastodon, Threads), and robust privacy settings. Additional steps include community-driven moderation, crowd-sourced fact-checking, independent AI audits, and tools like usage alerts, "cool-down" periods, and nudges that encourage breaks or shifts in focus. While this approach may reduce social media companies' revenue, it is a necessary trade-off to transform these platforms into safe, trustworthy spaces that foster meaningful connections instead of driving polarization and division.
Third, a natural response to the proliferation of harmful online content is strengthening legal frameworks to hold individuals and platforms accountable for the harm caused by their actions (e.g., publications, algorithms). However, establishing accountability for digital content is challenging due to the difficulty of tracing its origins. Furthermore, because this is a global issue, regulatory approaches often have limited reach, and some actors simply do not care about compliance.
A more effective approach combines regulation with AI and media literacy to empower users to discern manipulative content from reliable information. A compelling example is Taiwan's proactive response to China's disinformation campaigns: the Digital Affairs Minister promotes responsible digital engagement by using pre-bunking strategies to counter false narratives before they spread and by launching real-time fact-checking efforts. Educating the public on how AI shapes information streams helps individuals resist biases and identify misinformation, prioritizing informed and ethical digital engagement.
As social beings, humans thrive within political communities whose fundamental purpose is to foster the common good. Paradoxically, social media platforms designed to foster connections often have the opposite effect, driving division. However, AI and social media are not inherently harmful to democracy; their impact depends on how we design and use these technologies. By treating independent journalism as a public good and adopting a sociotechnical approach rooted in Human-Centered AI (HCAI) and AI and media literacy, we can mitigate the harmful effects of these tools. This strategy aims to preserve and enhance the democratic values that bind us together.
Marco Tulio Daza is a Professor at the University of Guadalajara and an associate member of the Institute for Data Science and Artificial Intelligence (DATAI) at the University of Navarra.