
2. Algorithmic Radicalization


Media Ecosystems, Algorithmic Radicalization, and Ideological Incoherence

Summary

This research compilation examines how profit-driven media systems—both social platforms and cable news—systematically drive ideological extremism and incoherence. The core finding across all sources: engagement-based business models reward outrage, and human psychology proves exploitable at scale.

Key Mechanisms

  • Outrage optimization: Algorithms amplify anger and moral-emotional content because it generates engagement (out-group language increases sharing by 67%; moral-emotional words increase diffusion by 20%)
  • Identity-protective cognition: People selectively dismiss evidence threatening their group identity—and higher cognitive ability enables more sophisticated rationalization, not less
  • Negative partisanship: Political identity increasingly defined by hatred of opponents rather than positive values, leading to principle abandonment when tribal victory demands it
  • Audience capture: Content creators become prisoners of their most extreme followers' expectations

Major Findings

  • Filter bubbles are smaller than commonly believed—user choice drives homogeneous information diets more than algorithms
  • Radicalization pipelines have limited effects on typical users but significant effects on vulnerable individuals actively seeking extreme content
  • Cable news has larger polarization effects than social media (up to 23% of Americans polarized via TV)
  • The collapse of local news (2,900+ newspapers closed since 2004) has nationalized political conflict

Case Study

The Alex Pretti incident (January 2026) serves as a case study of ideological incoherence: self-identified libertarians and gun-rights advocates justified federal agents killing a legal gun owner because negative partisanship overrode principled commitment to Second Amendment rights and limited government.

Research Sources

This document compiles analysis from ChatGPT, Gemini, and Claude, each synthesizing peer-reviewed research from social psychology, network science, and political theory.


ChatGPT

Media Ecosystems, Algorithmic Radicalization, and Ideological Incoherence

The core mechanism: monetized attention creates self-reinforcing belief drift

Modern profit-driven media systems—especially advertising-supported social platforms—are best understood as feedback-driven attention markets: the scarce resource is human attention, and both platforms and publishers optimize to capture and retain it. Herbert A. Simon famously framed this as “a poverty of attention” created by informational abundance, a logic that has become more structural (and automated) as feeds and recommenders intermediate more of what people see. [*1]*

In the “attention merchant” view advanced by Tim Wu, media businesses repeatedly converge on the same competitive strategy: design content and distribution to seize attention at scale, then monetize it—historically through advertising and, increasingly, through data-driven targeting and prediction. [2]* This logic is closely aligned with Shoshana Zuboff’s account of surveillance capitalism, which emphasizes how behavioral data and prediction products are monetized in “behavioral futures markets.” [3]*

A key implication is that “harm” and “engagement” can be complements rather than tradeoffs. Formal economic modeling shows why: in ad-funded environments, platforms can profit from content that users themselves experience as harmful if that content still increases time-on-platform and engagement (especially when network effects reinforce market power). [4]* In short: the system does not need ideological coherence to be “healthy” for the business; it needs compulsion, repetition, and retention. [5]*
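To make that logic concrete, here is a minimal toy sketch (illustrative numbers only, not the cited economic model): because revenue is a function of captured attention alone, the harm a user experiences never appears in the quantity being maximized.

```python
# Toy sketch (not the cited AEA model): an ad-funded platform's objective depends
# only on attention captured, so user-experienced harm never enters the optimization.

def platform_profit(time_on_platform_hours, ad_revenue_per_hour=0.05):
    """Revenue scales with attention, regardless of how the content feels to the user."""
    return time_on_platform_hours * ad_revenue_per_hour

def user_welfare(time_on_platform_hours, enjoyment_per_hour, harm_per_hour):
    """What the user actually experiences; invisible to the profit function."""
    return time_on_platform_hours * (enjoyment_per_hour - harm_per_hour)

# Hypothetical comparison: compulsive-but-harmful content can dominate on profit
# even when it leaves the user worse off.
benign = {"hours": 1.0, "enjoyment": 1.0, "harm": 0.0}
compulsive = {"hours": 3.0, "enjoyment": 0.4, "harm": 0.9}

for name, c in [("benign", benign), ("compulsive", compulsive)]:
    print(name,
          "profit:", round(platform_profit(c["hours"]), 3),
          "welfare:", round(user_welfare(c["hours"], c["enjoyment"], c["harm"]), 3))
```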

image_group{"layout":"carousel","aspect_ratio":"16:9","query":["attention economy feedback loop diagram social media","algorithmic amplification outrage engagement diagram","echo chamber network visualization social media"],"num_per_query":1}

At the individual level, this produces a recognizable drift pattern: small behavioral choices (clicks, watches, likes) train recommender systems; recommender systems reshape what the user experiences as “normal” and salient; creators and outlets observe what performs and adapt; and groups reward conformity and punish deviation. Over time, these loops can shift a person’s issue positions—sometimes radically—without ever asking them to update their underlying moral self-concept, producing ideological incoherence (“I’m the same principled person; the world got worse”). [*6]*
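A deliberately simplified simulation of that loop (the click probabilities and learning rate are assumptions, not empirical estimates) shows how a small click-rate advantage for arousing content can compound into a skewed feed:

```python
import random

# Toy feedback loop: engagement on a topic raises its weight in the recommender,
# which raises future exposure, which raises the odds of further engagement.
random.seed(0)

topics = ["neutral", "outrage"]
weights = {"neutral": 0.5, "outrage": 0.5}       # recommender's exposure mix
click_prob = {"neutral": 0.05, "outrage": 0.15}  # arousal makes clicks more likely
learning_rate = 0.05

for step in range(200):
    total = sum(weights.values())
    shown = random.choices(topics, weights=[weights[t] / total for t in topics])[0]
    if random.random() < click_prob[shown]:
        weights[shown] += learning_rate          # the click trains the feed

share_outrage = weights["outrage"] / sum(weights.values())
print(f"share of feed devoted to outrage after 200 steps: {share_outrage:.2f}")
```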

Outrage optimization: why anger outcompetes nuance

The “outrage spreads faster” claim is not merely intuition; it is increasingly well-supported through multiple mechanisms that stack together.

First, virality is correlated with high-arousal emotions. Work associated with Jonah Berger and Katherine Milkman argues that arousal (whether positive like awe or negative like anger/anxiety) predicts sharing more reliably than simple positive-vs-negative valence. [*7]*

Second, moral-emotional language functions as an accelerant. William J. Brady and collaborators, analyzing large-scale social data, find that moral-emotional content increases diffusion (e.g., retweet rates) in politicized discourse—what they term moral contagion. [*8]*

Third, platforms don’t just host outrage; they can train it. In a preregistered mixed-methods program (observational studies and experiments), Brady and colleagues show that positive feedback (likes/shares) for moral outrage increases the likelihood of future outrage expression, consistent with reinforcement learning dynamics. [9]* This is a direct mechanism by which “engagement metrics” become “behavior-shaping incentives,” pushing discourse toward the emotional register that gets rewarded. [10]*
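As a hedged illustration of that reinforcement-learning framing (the reward values and the epsilon-greedy update below are assumptions, not the model used in the cited studies), a simple bandit learner that is "paid" more likes for outrage drifts toward posting it:

```python
import random

# Illustrative sketch: a poster learns which style earns more likes and shifts toward it.
random.seed(1)

q = {"neutral": 0.0, "outrage": 0.0}          # running estimate of likes per style
counts = {"neutral": 0, "outrage": 0}
mean_likes = {"neutral": 5, "outrage": 20}    # assumed platform reward structure
epsilon = 0.1                                 # occasional experimentation

for post in range(300):
    if random.random() < epsilon:
        style = random.choice(list(q))
    else:
        style = max(q, key=q.get)
    likes = random.gauss(mean_likes[style], 3)
    counts[style] += 1
    q[style] += (likes - q[style]) / counts[style]   # incremental mean update

print("posts per style:", counts)   # the outraged style comes to dominate the mix
```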

Fourth, out-group hostility is engagement-gold. Steve Rathje and colleagues find that posts expressing out-group animosity can generate disproportionately high engagement on social platforms. [11]* This helps explain why negative partisanship (“we are united by what we oppose”) maps so cleanly onto platform incentives. [12]*

Fifth, “emotional contagion” provides a population-level transmission channel. Experimental evidence from large-scale social network contexts suggests that exposure to emotionally valenced content can influence users’ own posting behavior, implying that emotion can propagate even without explicit persuasion. [*13]*

Finally, the design choice to rank content by predicted engagement (rather than, say, chronology) can itself shift what users see. Research on algorithmic ranking on X indicates that engagement-based ranking can amplify emotionally charged, out-group-hostile content—and that such ranking can underperform users’ stated preferences, suggesting a divergence between “what keeps you engaged” and “what you say you want.” [*14]*
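The design choice itself is easy to state in code. A minimal sketch, with invented posts and engagement scores, contrasts the two orderings:

```python
# The same three (made-up) posts, ordered two ways: by recency and by predicted engagement.
posts = [
    {"text": "Local budget explainer",   "predicted_engagement": 0.03, "age_hours": 1},
    {"text": "Out-group outrage thread", "predicted_engagement": 0.21, "age_hours": 9},
    {"text": "Friend's vacation photos", "predicted_engagement": 0.08, "age_hours": 2},
]

chronological = sorted(posts, key=lambda p: p["age_hours"])
engagement_ranked = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

print("chronological:", [p["text"] for p in chronological])
print("engagement-ranked:", [p["text"] for p in engagement_ranked])
```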

Internal research leaks and investigative reporting suggest that these dynamics are not hypothetical for major platforms. Reporting tied to the “Facebook Files” describes how ranking changes intended to promote “meaningful social interactions” could reward outrage and sensationalism, intensifying incentives for publishers and political actors to produce outrage-bait. [*15]*

Bubbles and echo chambers: what the evidence says and where it hides

The “filter bubble” idea—popularized by Eli Pariser—claims personalization can trap users in tailored informational worlds, limiting exposure to cross-cutting perspectives. [16]* But the empirical literature is more nuanced: many users have some cross-cutting exposure, while a smaller subset experiences stronger isolation or ideological narrowing. [17]*

A useful conceptual clarification comes from C. Thi Nguyen: epistemic bubbles are structures where contrary voices are missing (often by omission), while echo chambers actively discredit outside sources, making communities resilient to correction. [18]* This distinction matters because “show people more diverse content” is more likely to puncture bubbles than to dismantle echo chambers, where distrust is part of the structure. [19]*

Empirically, large-scale studies suggest both user choice and algorithmic pathways matter. For example, on Facebook-like networks, ideological homophily in friend networks shapes potential exposure, and algorithmic ranking can further reduce cross-cutting content—though the magnitude varies by users’ networks and behaviors. [20]* Across broader online behaviors, social media and search can increase exposure to ideologically aligned sources, but many users still consume relatively “mainstream” diets, and only some show strong segregation. [21]*

This tension is reflected in the “overstated vs. subtle” debate. Research by Elizabeth Dubois and Grant Blank argues echo chamber fears can be overstated in broad populations because many people have diverse media diets and political interest moderates selective exposure. [22]* In parallel, critiques (e.g., by Axel Bruns and related scholarship) argue that strong deterministic “bubble” stories can be empirically thin and that the real dynamics may be partial narrowing, asymmetric attention, and strategic amplification, rather than total isolation. [23]*

Where “epistemic closure” enters: even without total informational isolation, communities can become functionally impervious to correction if they treat out-group sources as illegitimate. The modern popular usage of “epistemic closure” in the media context is often traced to Julian Sanchez, who used it to describe a movement ecosystem that increasingly trusts only in-group information. [24]* This concept is especially relevant to ideological incoherence: if your informational trust is group-bound, then “coherence” becomes coherence with the group, not coherence with prior principles. [25]*

Radicalization and ideological narrowing through recommendations and influencers

The most discussed “pipeline” story is the YouTube “rabbit hole,” but research increasingly suggests a mixed picture: recommendation systems can facilitate pathways to more extreme content for some users, while for many others the effect is modest or mediated by prior preferences and external networks. [*26]*

One influential large-scale audit (Ribeiro et al.) finds evidence of user migration from “milder” right-wing content ecosystems toward more extreme ones over time, and tests the reachability of content types via recommendations. [27]* At the same time, other research argues that simple “algorithm radicalizes the unsuspecting masses” narratives are incomplete: creator supply, audience demand, and community dynamics can be central drivers. [28]* A careful “real users” analysis from NYU’s CSMaP concludes that YouTube recommendations do not lead the vast majority of users into extremist rabbit holes, but can push users toward narrower ideological ranges—an important distinction between “hard radicalization” and “soft narrowing.” [*29]*

A complementary angle focuses on influencer networks as gateways. Rebecca Lewis’s “Alternative Influence Network” work argues that cross-promotion, collaborations, and brand-influencer tactics can move audiences across adjacent ideological niches, normalizing increasingly radical frames even when the endpoint ideology is not consistently articulated. [*30]*

Platform differences matter because they shape how adjacent content is discovered:

On entity["company","TikTok","short video app"], algorithmic discovery is central (the “For You” feed), and audits during the 2024 U.S. election season report measurable partisan asymmetries in recommended political content under experimental “sock puppet” designs. [31]* Research also finds that “toxicity” in political content can attract engagement in TikTok’s recommendation-driven environment, reinforcing the broader claim that engagement incentives reward hostile style even if platforms’ ideological direction varies by context. [32]*

On X, multiple audits show that algorithmic curation can amplify some political actors and sources more than others. A large-scale experiment reported in PNAS finds that, across several countries, mainstream political right content often receives higher algorithmic amplification than mainstream left content on Twitter’s ranked timeline, while not necessarily amplifying extremes more than moderates. [33]* A separate audit of Twitter’s “Who to Follow” recommender suggests that following algorithmic recommendations can produce network structures resembling echo chambers, though the effect on exposure to false election narratives can vary with how networks are built (algorithmic recommendation vs socially-endorsed following). [34]*

On Facebook-like ecosystems, leaked internal research and investigative reporting suggest that ranking systems optimizing “meaningful interactions” can inadvertently reward outrage and reshares (especially where “downstream” engagement is valued), intensifying the publisher and political incentives to produce sensational content. [*35]*

Taken together, the strongest “pipeline” claim supported across platforms is often not a single deterministic route, but an ecosystem-level process: recommender systems reduce search costs for adjacent content, influencer networks create bridges between niches, and engagement incentives reward escalation. [*36]*
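A toy graph example (the niches and links below are invented for illustration) makes the "bridges between niches" point concrete: a single cross-promotion edge shortens the path from mainstream to fringe content.

```python
from collections import deque

def hops(graph, start, goal):
    """Breadth-first search returning the number of hops from start to goal."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return None

base = {
    "mainstream": ["commentary"],
    "commentary": ["edgy_podcast"],
    "edgy_podcast": ["fringe"],
    "fringe": [],
}
print("without cross-promotion:", hops(base, "mainstream", "fringe"))  # 3 hops

bridged = dict(base)
bridged["mainstream"] = ["commentary", "edgy_podcast"]  # a collaboration/guest spot
print("with cross-promotion:", hops(bridged, "mainstream", "fringe"))  # 2 hops
```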

Psychology of gradual radicalization and ideological incoherence

A key reason principled people can end up in contradictory positions is that their belief-updating is often less about “evidence → conclusion” than about “identity → credibility → conclusion.”

Work by entity["people","Dan M. Kahan","legal scholar, cultural cognition"] on identity-protective cognition argues that people selectively credit or dismiss evidence in ways that protect their standing in valued groups, making misinformation and misperception conditional on identity dynamics rather than (only) information deficits. [37]* This is a plausible psychological pathway from “I evaluate claims” to “I evaluate which side is saying it,” even among individuals who experience themselves as rational. [38]*

Moral psychology provides a complementary lens. Jonathan Haidt argues that moral reasoning often follows intuitive social judgments and that humans are partially “groupish,” meaning moral frameworks can bind people into teams and “blind” them to counterevidence when identity is threatened. [39]* When platforms convert moral signaling into engagement rewards, the “groupish” tendency becomes instrumentally reinforced. [40]*

Negative partisanship makes incoherence even more likely. In American politics research, negative partisanship describes cases where animus toward the opposing party is a stronger motivator than affection for one’s own side, and it has been linked to broader “nationalization” of politics (aligned voting across offices and levels). [41]* In an engagement environment where out-group hostility is rewarded, negative partisanship becomes not just psychologically plausible but economically legible as content strategy. [42]*

Gradual radicalization is also supported by classic commitment dynamics. The “foot-in-the-door” technique (initial small commitments increasing compliance with larger ones) is experimentally supported in the foundational Freedman & Fraser work. [43]* Once a person publicly defends a stance, cognitive dissonance theory predicts psychological pressure to reduce inconsistency—often by adjusting beliefs to match prior commitments rather than reversing course. [44]* The sunk cost effect and escalation of commitment show how prior investments (time, identity, reputation) can rationally feel like reasons to persist even when the original choice is no longer defensible. [*45]*

These mechanisms map closely onto observed platform dynamics: “likes and shares” provide reinforcement for outrage expression; network norms shape what seems acceptable; and creators may escalate to satisfy audience demand (“audience capture”), producing radicalization of both influencer and followers. [*46]*

Libertarian rhetoric meets state power in the Alex Pretti discourse

A concrete, time-specific example of ideological incoherence—especially relevant to “don’t tread on me” identity—emerged around the killing of Alex Pretti during federal immigration enforcement actions in Minneapolis in late January 2026. Reuters and other outlets report a backlash after Donald Trump suggested Pretti “should not have carried” a gun, despite reporting that he was a licensed concealed-carry holder; major gun-rights groups publicly criticized that framing. [*47]*

What makes this episode analytically valuable is not any single individual’s stance but the pattern of argument shifts that appeared in public commentary: some defenses of the killing leaned on “he was armed” as a justification for lethal force, a logic that can imply de facto restrictions on armed protest and a widened tolerance of state violence—positions that clash with libertarian or small-government self-conceptions. [48]* The visible intra-right conflict (e.g., between some conservative-aligned figures and gun-rights organizations) illustrates how negative partisanship and “enemy-of-my-enemy” coalition logic can pressure ideological communities into endorsing state power when it targets a despised out-group (in this case, framed around immigration enforcement). [49]*

At the same time, not all self-identified libertarian organizations moved in that direction: state-level libertarian party statements demanded accountability and criticized federal overreach, underscoring that the “pipeline” is probabilistic, not deterministic. [50]* This divergence is consistent with the broader research picture: information environments create strong selection pressures toward outrage-aligned and identity-aligned narratives, but individuals and subcommunities vary in susceptibility depending on identity commitments, media diets, and trust structures. [51]*

Mainstream media, historical parallels, and cross-platform amplification

Social platforms are not the whole story. “Chosen” media environments—especially cable opinion programming—offer a pre-algorithmic model of outrage as product. Jeffrey M. Berry and Sarah Sobieraj document outrage as a recognizable genre across talk radio, blogs, and cable news, arguing that outrage rhetoric is commercially rewarded and can distort deliberation by shifting politics toward theatrical conflict. [*52]*

In the hybrid media system, social media and legacy outlets can form mutual amplification loops. The “trading up the chain” concept describes how narratives can move from niche or fringe spaces into mainstream coverage (sometimes to debunk, sometimes inadvertently to amplify), expanding the audience regardless of original accuracy. [53]* Renée DiResta has emphasized this dynamic in contemporary propaganda environments, where influencers, algorithms, and mainstream media interactions can launder or elevate claims through repeated attention. [54]*

Structural changes in journalism intensify these loops. The collapse of local news reduces routine accountability reporting and increases reliance on nationalized, identity-coded narratives. Pew documents Americans’ changing relationship with local news, while recent work on “news deserts” describes civic consequences as local reporting capacity erodes. [55]* Political science research similarly argues that elections and political attitudes have become more nationalized, aligning local and congressional outcomes more tightly to national partisan conflict—conditions under which “team identity” can dominate issue consistency. [56]*

Mainstream media can also unintentionally distort by pursuing “false balance” or “bothsidesism,” treating unequally supported claims as symmetrical debates, especially in science-related issues. Research on false balance argues it can weaken audiences’ capacity to distinguish evidence from fringe assertions by manufacturing perceived controversy. [*57]*

Historical and comparative research suggests the underlying pattern—mass communication + political conflict + profit or power incentives—recurs even when technologies change. A prominent economic history study finds that radio propaganda contributed to Nazi political outcomes and later incited antisemitic acts, with effects mediated by predispositions (propaganda works differently where attitudes are already primed). [58]* That pattern fits the modern evidence that algorithmic systems often amplify content that resonates with prior resentments and group identities rather than converting neutral mass audiences. [59]*

Comparative cases underline platform-specific pathways:

· In entity["country","Myanmar","country in southeast asia"], UN reporting and later investigations argue that Facebook played a major role in amplifying hate and incitement dynamics in the Rohingya crisis; Amnesty likewise argues that Meta’s systems and incentives contributed substantially to harms. [*60]*

· In entity["country","India","country in south asia"], research and reporting on entity["company","WhatsApp","encrypted messaging app"] have documented how rapid rumor transmission in encrypted networks is linked (in multiple accounts) to mob violence episodes, driving policy and product interventions such as forwarding limits. [*61]*

· In entity["country","Brazil","country in south america"], studies of WhatsApp political communication and misinformation have described asymmetric activity patterns and targeted messaging ecosystems in electoral contexts, consistent with the view that closed-group networks can intensify ideological sorting without public visibility. [*62]*

These cases reinforce an important synthesis: what is “new” is not that mass media can radicalize, but that high-frequency personalization, engagement optimization, and creator monetization can make radicalization faster, more iterative, and more feedback-sensitive. [*63]*

Resistance, interventions, and the open causal debate

Research suggests there is no single “fix,” because the problem is joint: platform design + business model + human social psychology. [*64]* Still, several intervention classes have stronger evidentiary grounding than others.

Media literacy and “accuracy” interventions show measurable effects in experimental research. A PNAS study reports that scalable digital media literacy interventions can increase discernment between true and false content (at least in the short to medium term), while meta-analytic work and inoculation-style approaches suggest resilience can be improved without obvious “backfire” effects in some contexts. [65]* These approaches align with the idea that “bubbles” are sometimes porous—people can learn to evaluate claims better even amid polarized environments. [66]*

Design and governance interventions increasingly focus on recommender transparency and user choice. Under the European Commission’s enforcement of the EU’s Digital Services Act, very large platforms face stronger obligations around recommender transparency, including offering recommendation options not based on profiling, reflecting a policy theory that default settings and friction shape exposure at scale. [67]* The premise is straightforward: if engagement-ranked feeds amplify divisive content, then giving users genuine, non-nudged alternatives (e.g., chronological or non-profiled recommenders) can reduce amplification pressure—even if it does not eliminate polarization. [68]*

Creator-economy incentives are another leverage point, though evidence is still emerging. Research on the consolidating creator economy documents how creators diversify monetization and build cross-platform presence, reinforcing the “entrepreneurial” logic of attention competition. [69]* When combined with evidence that outrage and out-group hostility drive engagement, the financial reward structure can select for the loudest or most polarizing communicators—without requiring genuine ideological commitment. [70]*

The hardest question is causality: are platforms causing polarization and incoherence, or mostly sorting and intensifying tendencies that originate elsewhere? The research record supports multiple truths at once:

· Some evidence challenges simple “internet causes polarization” stories. For example, demographic analyses find that polarization increased most among groups least likely to use the internet and social media, suggesting the internet cannot be the sole driver. [*71]*

· Yet controlled experiments can find platform-mediated effects in specific settings: exposure to opposing views on Twitter can increase polarization for some groups, contradicting simplistic “more cross-cutting exposure fixes it” assumptions. [*72]*

· Platform design clearly affects what content is amplified (e.g., ranked vs chronological feeds), even if the downstream attitude impacts vary by population, context, and measurement window. [*73]*

A realistic synthesis is that modern media ecosystems can make ideological coherence unusually hard by rewarding identity-consistent and emotionally activating cognition. The system does not merely “misinform”; it can reshape what people feel compelled to say, what they think others believe, and which moral emotions are socially rewarded—creating a path from principled commitments to contradictory stances that still feel like loyalty, courage, or realism. [*74]*

Under that synthesis, the best-supported interventions are those that change incentives (ranking and monetization), improve user agency (real feed choices, transparency), and strengthen epistemic resilience (media literacy and social trust repair), rather than relying on a single lever like “more moderation” or “more exposure.” [*75]*

[*1]* Designing Organizations for an Information-Rich World

https://atelierdesfuturs.org/wp-content/uploads/2025/07/1971-simon.pdf?utm_source=chatgpt.com

[*2]* "The Attention Merchants: The Epic Scramble to Get Inside ...

https://scholarship.law.columbia.edu/books/64/?utm_source=chatgpt.com

[*3]* The Age of Surveillance Capitalism: The Fight for a Human ...

https://www.hbs.edu/faculty/Pages/item.aspx?num=56791&utm_source=chatgpt.com

[4]* [5]* [*64]* A Model of Harmful Yet Engaging Content on Social Media

https://www.aeaweb.org/articles?id=10.1257%2Fpandp.20241004&utm_source=chatgpt.com

[6]* [9]* [10]* [40]* [46]* [63]* [*74]* How social learning amplifies moral outrage expression in ...

https://pmc.ncbi.nlm.nih.gov/articles/PMC8363141/?utm_source=chatgpt.com

[*7]* Emotion and Virality: What Makes Online Content Go Viral?

https://ideas.repec.org/a/vrs/gfkmir/v5y2013i1p18-23n1004.html?utm_source=chatgpt.com

[*8]* Emotion shapes the diffusion of moralized content in social ...

https://www.pnas.org/doi/10.1073/pnas.1618923114?utm_source=chatgpt.com

[11]* [12]* [42]* [70]* Out-group animosity drives engagement on social media

https://www.pnas.org/doi/10.1073/pnas.2024292118?utm_source=chatgpt.com

[*13]* Correction for Kramer et al., Experimental evidence of ...

https://www.pnas.org/doi/10.1073/pnas.1412583111?utm_source=chatgpt.com

[14]* [68]* Engagement, user satisfaction, and the amplification of ...

https://pmc.ncbi.nlm.nih.gov/articles/PMC11894805/?utm_source=chatgpt.com

[15]* [35]* From Instagram's Toll on Teens to Unmoderated 'Elite' Users, Here's a Break Down of the Wall Street Journal's Facebook Revelations

https://time.com/6097704/facebook-instagram-wall-street-journal/?utm_source=chatgpt.com

[*16]* The Filter Bubble by Eli Pariser

https://www.penguinrandomhouse.com/books/309214/the-filter-bubble-by-eli-pariser/?utm_source=chatgpt.com

[*17]* Echo Chambers, Filter Bubbles, and Polarisation

https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2022-01/Echo_Chambers_Filter_Bubbles_and_Polarisation_A_Literature_Review.pdf?utm_source=chatgpt.com

[18]* [19]* C. Thi Nguyen, Echo chambers and epistemic bubbles

https://philarchive.org/rec/NGUECA?utm_source=chatgpt.com

[*20]* Exposure to ideologically diverse news and opinion on ...

https://www.science.org/doi/10.1126/science.aaa1160?utm_source=chatgpt.com

[*21]* Filter Bubbles, Echo Chambers, and Online News Consumption

https://sethrf.com/files/bubbles.pdf?utm_source=chatgpt.com

[*22]* The echo chamber is overstated: the moderating effect of ...

https://www.tandfonline.com/doi/full/10.1080/1369118X.2018.1428656?utm_source=chatgpt.com

[*23]* Filter bubble

https://policyreview.info/concepts/filter-bubble?utm_source=chatgpt.com

[*24]* Epistemic Closure, Technology, and the End of Distance

https://www.juliansanchez.com/2010/04/07/epistemic-closure-technology-and-the-end-of-distance/?utm_source=chatgpt.com

[25]* [37]* [38]* [51]* Misconceptions, Misinformation, and the Logic of Identity- ...

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2973067&utm_source=chatgpt.com

[*26]* Auditing radicalization pathways on YouTube

https://dl.acm.org/doi/10.1145/3351095.3372879?utm_source=chatgpt.com

[*27]* Auditing Radicalization Pathways on YouTube

https://dlab.epfl.ch/people/west/pub/HortaRibeiro-Ottoni-West-Almeida-Meira_FAT-20.pdf?utm_source=chatgpt.com

[*28]* Right-Wing YouTube: A Supply and Demand Perspective

https://kmunger.github.io/pdfs/ijpp_youtube.pdf?utm_source=chatgpt.com

[29]* [59]* Echo Chambers, Rabbit Holes, and Ideological Bias

https://csmapnyu.org/research/reports-analysis/echo-chambers-rabbit-holes-and-ideological-bias-how-youtube-recommends-content-to-real-users?utm_source=chatgpt.com

[*30]* Alternative Influence

https://datasociety.net/library/alternative-influence/?utm_source=chatgpt.com

[*31]* TikTok's recommendations skewed towards Republican ...

https://arxiv.org/html/2501.17831v1?utm_source=chatgpt.com

[*32]* Toxic politics and TikTok engagement in the 2024 U.S. ...

https://misinforeview.hks.harvard.edu/article/toxic-politics-and-tiktok-engagement-in-the-2024-u-s-election/?utm_source=chatgpt.com

[33]* [73]* Algorithmic amplification of politics on Twitter

https://www.pnas.org/doi/10.1073/pnas.2025334119?utm_source=chatgpt.com

[*34]* An Audit of Twitter's Friend Recommender System

https://dl.acm.org/doi/fullHtml/10.1145/3614419.3643996?utm_source=chatgpt.com

[*36]* Systematic review: YouTube recommendations and ...

https://pmc.ncbi.nlm.nih.gov/articles/PMC7613872/?utm_source=chatgpt.com

[*39]* The Righteous Mind

https://en.wikipedia.org/wiki/The_Righteous_Mind?utm_source=chatgpt.com

[*41]* Negative Partisanship: Why Americans Dislike Parties But ...

https://www.stevenwwebster.com/negative-partisanship-rabid.pdf?utm_source=chatgpt.com

[*43]* THE FOOT-IN-THE-DOOR TECHNIQUE3 - Bulidomics

https://www.bulidomics.com/w/images/6/6c/Freedman_fraser_footinthedoor_jpsp1966.pdf?utm_source=chatgpt.com

[*44]* Cognitive Dissonance Theory - an overview

https://www.sciencedirect.com/topics/social-sciences/cognitive-dissonance-theory?utm_source=chatgpt.com

[*45]* The psychology of sunk cost

https://www.sciencedirect.com/science/article/pii/0749597885900494?utm_source=chatgpt.com

[*47]* Trump says Alex Pretti, man shot in Minnesota, should not have carried gun

https://www.reuters.com/world/us/trump-says-alex-pretti-should-not-have-carried-gun-that-was-allowed-under-2026-01-27/?utm_source=chatgpt.com

[48]* [49]* So what if Alex Pretti had a gun?

https://www.vox.com/politics/476515/alex-pretti-minneapolis-ice-cbp-gun-second-amendment?utm_source=chatgpt.com

[*50]* Statement From the Chair Regarding the Killing of Alex Pretti

https://www.lpf.org/alex_pretti?utm_source=chatgpt.com

[*52]* The Outrage Industry: Political Opinion Media and the New ...

https://api.pageplace.de/preview/DT0400.9780199928989_A23618511/preview-9780199928989_A23618511.pdf?utm_source=chatgpt.com

[*53]* Trading up the chain

https://mediamanipulation.org/definitions/trading-chain/?utm_source=chatgpt.com

[*54]* Online manipulation expert Renée DiResta: 'Conspiracy ...

https://www.theguardian.com/technology/article/2024/jul/14/renee-diresta-invisble-rulers-internet-algorithms-media-disinformation-ai?utm_source=chatgpt.com

[*55]* Americans' Changing Relationship With Local News

https://www.pewresearch.org/journalism/2024/05/07/americans-changing-relationship-with-local-news/?utm_source=chatgpt.com

[*56]* The rise of negative partisanship and the nationalization ...

https://www.sciencedirect.com/science/article/pii/S0261379415001857?utm_source=chatgpt.com

[*57]* False balance in news coverage of climate change makes it ...

https://news.northwestern.edu/stories/2022/07/false-balance-reporting-climate-change-crisis?utm_source=chatgpt.com

[*58]* Radio and the Rise of The Nazis in Prewar Germany *

https://academic.oup.com/qje/article-abstract/130/4/1885/1916582?utm_source=chatgpt.com

[*60]* Report of the Independent International Fact-Finding ...

https://www.ohchr.org/Documents/HRBodies/HRCouncil/FFM-Myanmar/A_HRC_39_64.pdf?utm_source=chatgpt.com

[*61]* How WhatsApp Fuels Fake News and Violence in India

https://www.wired.com/story/how-whatsapp-fuels-fake-news-and-violence-in-india?utm_source=chatgpt.com

[*62]* Right-wing WhatsApp Users in Brazil are Louder, More Active ...

https://www.mccormick.northwestern.edu/news/articles/2019/08/right-wing-whatsapp-users-in-brazil-are-louder-more-active-more-effective-birnbaum.html?utm_source=chatgpt.com

[65]* [75]* A digital media literacy intervention increases discernment ...

https://www.pnas.org/doi/10.1073/pnas.1920498117?utm_source=chatgpt.com

[*66]* The moderating effect of political interest and diverse media

https://ora.ox.ac.uk/objects/uuid%3A34c4d33f-7cee-4fc0-aaad-0ce5311ac0de/files/m988c6e898fd3b881e39fb291545f004e?utm_source=chatgpt.com

[*67]* DSA: Very large online platforms and search engines

https://digital-strategy.ec.europa.eu/en/policies/dsa-vlops?utm_source=chatgpt.com

[*69]* Making a Living in the Creator Economy: A Large-Scale ...

https://journals.sagepub.com/doi/10.1177/20563051231180628?utm_source=chatgpt.com

[*71]* Greater Internet use is not associated with faster growth in ...

https://www.pnas.org/doi/10.1073/pnas.1706588114?utm_source=chatgpt.com

[*72]* Exposure to opposing views on social media can increase ...

https://www.pnas.org/doi/10.1073/pnas.1804840115?utm_source=chatgpt.com


Gemini

Media Ecosystems, Algorithmic Radicalization, and Ideological Incoherence

1. Executive Summary

The information environment of the early 21st century has precipitated a fundamental crisis in democratic discourse. This crisis is not merely one of polarization—a divergence of policy preferences—but of ideological incoherence, where consistent political principles are abandoned in favor of transient, identity-based victories. This report examines the structural, economic, and psychological mechanisms that drive this phenomenon. By analyzing the intersection of profit-driven algorithmic curation, human cognitive vulnerabilities, and a fractured media landscape, we demonstrate how otherwise principled individuals are shepherded into extreme and internally contradictory positions.

Our analysis synthesizes data from social psychology, network science, and political theory, supported by case studies ranging from domestic incidents in the United States to mass violence in the Global South. Central to this investigation is the "Alex Pretti" incident of January 2026, which serves as a grim archetype of modern tribal epistemology: a scenario where lifelong libertarians found themselves defending state violence against a gun owner due to the overpowering influence of negative partisanship.

The findings suggest that the "marketplace of ideas" has been replaced by an "economy of outrage," where the most emotionally arousing content—regardless of accuracy or coherence—is systematically amplified. This ecosystem exploits Identity-Protective Cognition, creating a feedback loop where loyalty to the "tribe" supersedes commitment to objective reality. Furthermore, the report challenges the popular "filter bubble" hypothesis, arguing instead that epistemic closure—a psychological immunity to opposing views—is the primary driver of radicalization. Finally, we explore emerging interventions, such as "bridging algorithms," that offer a technical pathway toward mitigating these destructive dynamics.

2. Introduction: The Crisis of Coherence

In stable political systems, ideologies function as coherent frameworks for interpreting the world. A commitment to "limited government," for instance, typically predicts skepticism toward state surveillance, police militarization, and federal overreach. A commitment to "social justice" typically predicts concern for due process and the protection of marginalized groups from state power. However, the contemporary media ecosystem has eroded these consistent frameworks, replacing them with a fluid, reactive form of identity politics that scholars describe as "ideological incoherence."

This incoherence is characterized by the rapid abandonment of deeply held principles when they conflict with the immediate tactical needs of the political tribe. It is the ecosystem where "Free Speech" becomes a slogan invoked only to protect one's allies, and "Law and Order" is championed only when the laws are enforced against one's enemies.

This report posits that this dissolution of principle is not an accident of history but a predictable output of specific incentive structures. The Attention Economy, built on the monetization of user engagement, has industrialized the production of cognitive dissonance. Platforms are designed not to inform, but to provoke; not to bridge divides, but to deepen them for profit. When combined with the human psychological tendency toward Tribal Epistemology—where social belonging is valued over factual accuracy—the result is a citizenry that is increasingly radical, increasingly angry, and increasingly confused.

To understand this phenomenon, we must dissect the machinery of radicalization layer by layer: from the algorithmic code that governs our feeds to the neural pathways that govern our beliefs.

3. The Political Economy of Attention

3.1 The Algorithmic Imperative: Engagement Over Truth

The foundational distortion in modern discourse arises from the business models of the dominant digital platforms. Companies such as Meta (Facebook, Instagram), Alphabet (YouTube), and ByteDance (TikTok) operate on an advertising-based revenue model. The primary metric of success in this model is "engagement"—a composite measure of time spent, clicks, likes, shares, and comments.

Algorithms are optimized to maximize these metrics. This optimization is value-neutral in code but value-laden in effect. Research in social psychology has consistently demonstrated that content evoking high-arousal emotions spreads significantly faster and engages users longer than content evoking low-arousal emotions. Specifically, the most effective drivers of engagement are anger, fear, and moral outrage.

  • Emotional Contagion: Studies by Berger and Milkman established that "viral" content is almost invariably emotionally charged. In the political sphere, this translates to a preference for content that frames the "other side" not merely as wrong, but as an existential threat.
  • The Outrage Loop: When a user engages with outrage-inducing content (even to criticize it), the algorithm interprets this interaction as a signal of interest. The feed then adapts to provide more of the same stimulus. This creates a self-reinforcing loop where a user's feed becomes progressively more extreme, not because the user explicitly requested radical content, but because the user’s nervous system responded to it.
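A minimal sketch of that signal problem, using invented interaction weights, shows why critical engagement can promote a post as effectively as supportive engagement:

```python
# A typical engagement score aggregates interactions without distinguishing approval
# from outrage, so hate-clicks and angry comments still push a post up the feed.
# Weights and interactions below are invented for illustration.

ENGAGEMENT_WEIGHTS = {"click": 1, "comment": 4, "share": 6}

def engagement_score(interactions):
    """Sum weighted interactions; note that sentiment never enters the score."""
    return sum(ENGAGEMENT_WEIGHTS[kind] for kind, _sentiment in interactions)

supportive = [("click", "positive"), ("share", "positive")]
outraged = [("click", "negative"), ("comment", "negative"), ("comment", "negative")]

print("supportive engagement:", engagement_score(supportive))  # 7
print("critical engagement:", engagement_score(outraged))      # 9, ranks higher
```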

3.2 The Monetization of Conflict

This dynamic is not limited to social media; it has infected legacy media as well. Facing declining viewership and fierce competition from digital upstarts, cable news networks have adopted the metrics of the attention economy. The "CNN model" under executives like Jeff Zucker shifted away from traditional journalism toward "conflict-as-entertainment."

The televised town hall, the partisan panel debate, and the "breaking news" chyron are designed to mimic the dopamine loops of social feeds. By treating politics as a sport—complete with teams, scoreboards, and "game-changing" moments—mainstream outlets validate the binary, adversarial framework of the internet. The profit motive dictates that complexity, which requires cognitive effort and often leads to disengagement, must be discarded in favor of drama, which creates suspense and loyalty.

3.3 Comparative Incentives in Media Models

Feature | Legacy Media (Pre-Internet) | Digital Attention Economy
Primary Metric | Subscriptions / Nielsen Ratings | Micro-Engagement (Clicks, Time-on-Site)
Gatekeepers | Editors, Producers (Human) | Algorithms (Black Box)
Pacing | Daily / Hourly Cycle | Continuous / Infinite Scroll
Incentive | Broad Appeal (Aggregation) | Niche Activation (Segmentation)
Content Bias | Consensus / "View from Nowhere" | Polarization / "View from Somewhere"
Cost of Entry | High (Capital Intensive) | Zero (Democratized)

4. Epistemic Enclosures: The Architecture of Isolation

4.1 Re-evaluating the Filter Bubble

Since 2011, the concept of the "filter bubble"—coined by Eli Pariser—has been the dominant metaphor for online polarization. The theory posits that algorithms invisibly curate a user's world, filtering out opposing viewpoints until the user is encased in a personalized reality. While compelling, empirical research suggests this model is technologically deterministic and often overstated.

Recent studies indicate that algorithmic curation often exposes users to more diverse viewpoints than they would select for themselves. Search engines and news aggregators, driven by popularity signals, frequently inject mainstream or contrasting content into a user's feed. The absolute isolation predicted by the filter bubble hypothesis—where a conservative never sees a liberal argument—is statistically rare, affecting perhaps less than 10% of the population.

4.2 Epistemic Closure and the "Anti-Bubble"

The failure of the filter bubble hypothesis to explain polarization points to a more disturbing reality: Epistemic Closure. The problem is not that users are unaware of opposing arguments; it is that they are immunized against them.

In modern online communities, opposing viewpoints are frequently circulated, but they are contextualized in a way that strips them of legitimacy. A tweet from an opposing politician is not hidden; it is "quote-tweeted" or screenshotted for the purpose of mockery. The user sees the argument, but only through the lens of their own tribe's derision. This creates an "anti-bubble," where the outside world is visible but interpreted entirely as a hostile caricature.

This dynamic explains why exposing partisans to opposing views often backfires, deepening their polarization rather than reducing it. The exposure is processed not as an opportunity for dialogue, but as an attack that triggers defensive cognitive mechanisms. The enclosure is psychological, not algorithmic.

5. The Cognitive Infrastructure of Belief

5.1 Identity-Protective Cognition

To understand how individuals maintain contradictory beliefs, we must turn to the work of Dan Kahan and the theory of Identity-Protective Cognition (IPC). IPC posits that individuals process information primarily to protect their status within their defining social group. In a hyper-polarized environment, political affiliation is not just a preference; it is a fundamental identity marker, akin to religion or ethnicity.

When a fact threatens the group's worldview—for example, a libertarian confronted with data supporting state intervention, or a progressive confronted with data complicating a narrative of systemic oppression—the individual experiences visceral dissonance. To resolve this, the mind employs "motivated reasoning" to dismiss, reinterpret, or discredit the inconvenient fact.

Crucially, Kahan’s research reveals a paradox: intelligence is not a defense. Individuals with higher cognitive ability and science literacy are often more polarized, not less. Their enhanced cognitive tools allow them to construct more sophisticated rationalizations for their tribe's positions. They are better at mental gymnastics. Thus, the "smartest" people in the room are often the most adept at maintaining ideological incoherence.

5.2 Tribal Epistemology

David Roberts codified this phenomenon as Tribal Epistemology. In this epistemic framework, the truth value of a statement is determined not by evidence, but by its utility to the tribe.

  • "True" = That which supports Us.
  • "False" = That which supports Them.

This shift creates a "rally around the flag" effect for partisan leaders. If a leader shifts positions (e.g., on trade, foreign policy, or public health), the tribe follows, rewriting their own ideological history in real-time. To dissent is to risk excommunication. This explains the fluidity of modern political coalitions, where positions that were anathema five years ago are now orthodoxy.

5.3 Negative Partisanship and Anti-Identity

The engine driving this tribalism is often hatred rather than love. Negative Partisanship describes a political identity defined primarily by opposition to the "other." Research indicates that for a significant plurality of voters, the motivation to defeat the opposing party is stronger than the attachment to their own party's platform.

This "anti-identity" produces profound incoherence. If the opposition supports "X," the negative partisan must support "Anti-X," regardless of whether "Anti-X" aligns with their previous values. This reactionary positioning leads to a politics of pure negation, where the only consistent principle is that the enemy is wrong.

6. The Mechanics of Radicalization

6.1 The "Rabbit Hole" and Algorithmic Pathways

The mechanism by which users move from mainstream to extreme content—often termed the "Rabbit Hole"—has been the subject of intense scrutiny. The "Alt-Right Pipeline" model suggests a linear algorithmic progression: a user watches a mainstream conservative video, the recommendation engine suggests a slightly more provocative "IDW" (Intellectual Dark Web) video, and eventually, the user is served white nationalist content.

However, the reality is more complex. Recent platform studies suggest that while algorithms facilitate this journey, they do not solely drive it. Users often self-radicalize, actively seeking out confirmation for pre-existing biases. The algorithm serves as an obliging librarian, fetching increasingly specific extremist texts for a user who is already asking for them. This "demand-side" radicalization challenges the notion that users are passive victims of code.

6.2 Gateway Influencers and Parasocial Bonds

A critical component of this pipeline is the Gateway Influencer. These figures occupy the "grey zone" between mainstream respectability and fringe extremism. They often frame themselves as "skeptics," "rationalists," or "free thinkers" who are merely "asking questions" that the mainstream media ignores.

Because these influencers build strong parasocial relationships with their audience—often communicating directly through long-form podcasts or livestreams—they bypass the skepticism filters that users might apply to traditional media. When a trusted "gateway" figure introduces a radical concept (e.g., scientific racism or election denialism), the audience is primed to accept it as "hidden truth." The intimacy of the medium creates a trust transfer from the person to the ideology.

6.3 Audience Capture: The Trap of Escalation

The influencer economy creates a dangerous feedback loop known as Audience Capture. Content creators, reliant on audience engagement for their livelihood, constantly monitor which takes receive the most positive reinforcement. When a creator expresses a more radical or aggressive opinion and receives a spike in engagement, they are economically incentivized to repeat and escalate that behavior.

Over time, the creator becomes a prisoner of their own audience's extremism. If they attempt to moderate their views or introduce nuance, they face immediate backlash, loss of subscribers, and financial penalty. To survive, they must radicalize alongside—or slightly ahead of—their followers. This explains the trajectory of numerous public intellectuals and pundits who have drifted rapidly toward extremism over the past decade.
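The escalation dynamic can be caricatured in a few lines of Python. In this toy model (all parameters are assumptions), a creator who simply repeats whatever level of rhetorical intensity performed best last week ratchets steadily upward:

```python
import random

# Toy model of audience capture: engagement rises with intensity because the retained
# audience skews toward the extreme, so the creator's "best take" keeps escalating.
random.seed(2)

creator_intensity = 0.3   # 0 = measured, 1 = maximally inflammatory
step = 0.05

for week in range(50):
    candidates = [max(0.0, creator_intensity - step),
                  creator_intensity,
                  min(1.0, creator_intensity + step)]
    # assumed reward structure: engagement grows with intensity, plus some noise
    engagement = [c * 100 + random.gauss(0, 5) for c in candidates]
    creator_intensity = candidates[engagement.index(max(engagement))]

print(f"rhetorical intensity after 50 weeks: {creator_intensity:.2f}")
```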

7. Case Study: The Incoherence of the Alex Pretti Incident

The theoretical frameworks of tribal epistemology and negative partisanship find a devastating real-world application in the killing of Alex Pretti. This incident, occurring in January 2026, serves as a stress test for ideological consistency in the American political landscape.

7.1 The Incident: A Collision of Narratives

On January 24, 2026, Alex Jeffrey Pretti, a 37-year-old registered nurse, was shot and killed by federal agents (ICE) in Minneapolis. Pretti was participating in a protest sparked by the earlier killing of another citizen, Renee Good. At the time of his death, Pretti was legally armed with a holstered pistol—a right he possessed as a licensed gun owner—and was filming the agents.

Under a consistent libertarian or conservative framework, Pretti checks every box for a cause célèbre:

  1. Gun Rights: He was exercising his Second Amendment right to bear arms.
  2. Limited Government: He was protesting federal overreach and the militarization of police.
  3. Self-Defense: He was a private citizen facing armed state agents.

7.2 The Inversion: "Back the Blue" vs. "Don't Tread on Me"

Yet, the reaction from the political right and the "Mises Caucus" wing of the libertarian movement revealed a profound fracture. Instead of rallying to Pretti's defense, many voices in the "gun rights" community justified the shooting. Rhetoric shifted seamlessly to a "Law and Order" frame: Pretti was blamed for "provoking" agents, for bringing a gun to a volatile situation, or for being associated with "rioters".

This reaction is intelligible only through the lens of Negative Partisanship. Because the protest was directed against ICE—an agency coded as "ours" by the populist right and "theirs" by the left—Pretti was categorized as an out-group member. Once identified as "enemy," his rights as a gun owner were nullified in the minds of tribal partisans. The "Back the Blue" identity overrode the "Second Amendment" identity.

7.3 The Libertarian Schism

The incident precipitated a schism within the Libertarian Party. The Georgia chapter adhered to principle, condemning the shooting as authoritarian overreach. However, the silence or apologetics from other factions highlighted the "enemy-of-my-enemy" logic. For those whose primary political motivation is opposition to the "Left," any force that suppresses the Left—even the federal government—becomes an ally. This case study demonstrates that in a tribalized environment, there are no universal rights; there are only rights for "Us," and state violence for "Them."

8. Global Comparative Media Ecosystems

The dynamics observed in the US are mirrored, often with more lethal consequences, in the Global South. These cases highlight how specific platform affordances—the technical features of an app—interact with local political contexts to produce unique forms of radicalization.

8.1 Brazil: WhatsApp and the Hidden Radicalization

In Brazil, the primary vector for radicalization is WhatsApp. Unlike the "public square" of Twitter/X, WhatsApp operates as a network of private, encrypted silos. This "dark social" architecture has profound implications:

  • The Pyramid of Trust: During the 2018 election of Jair Bolsonaro, supporters utilized a sophisticated pyramid structure. "Public" groups served as recruitment centers, disseminating memes and conspiracy theories. Highly engaged users were then invited into elite, "private" groups where radicalization deepened.
  • Viral Opacity: The "forward" feature (prior to the imposition of limits) allowed disinformation to spread exponentially across millions of users without any possibility of public debunking or fact-checking (see the rough branching sketch after this list). A rumor about a political opponent (e.g., the "gay kit" hoax) could saturate the electorate before journalists were even aware of its existence.
  • Encryption as Shield: Because the content is end-to-end encrypted, it is invisible to platform moderators. This creates a zone of absolute impunity for bad actors, where radicalization occurs in a black box.
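As referenced above, a rough branching-process sketch (with assumed fan-out and group sizes, not WhatsApp's actual figures) shows why capping forwards changes potential reach by orders of magnitude:

```python
# Back-of-the-envelope sketch: if each recipient group forwards a message onward,
# reach grows roughly as fan_out ** generations, so capping fan-out collapses the base.

def potential_reach(fan_out, generations, group_size=10):
    """Groups reached across generations, times an assumed average group size."""
    groups = sum(fan_out ** g for g in range(1, generations + 1))
    return groups * group_size

print("uncapped (forward to 20 chats, 4 generations):",
      potential_reach(fan_out=20, generations=4))   # 1,684,200 people
print("capped   (forward to 5 chats, 4 generations):",
      potential_reach(fan_out=5, generations=4))    # 7,800 people
```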

8.2 India: The Lethal Mechanics of "Rumor"

In India, the WhatsApp ecosystem has been weaponized to incite mob violence. The "relational" nature of the app—where messages are received from friends, family, and neighbors—bypasses critical filters.

  • The "Child Kidnapper" Panic: Viral rumors alleging that gangs of child kidnappers were roaming villages led to dozens of lynchings of innocent strangers. These rumors were often accompanied by manipulated videos and urgent calls to "protect the community."
  • IT Cells and Sectarianism: Political parties have industrialized this process through "IT Cells"—vast networks of paid and volunteer operatives who coordinate the simultaneous dissemination of sectarian narratives. This infrastructure allows for the rapid mobilization of hate speech against religious minorities, turning digital misinformation into kinetic violence within hours.

8.3 Myanmar: The Genocide Algorithm

In Myanmar, Facebook's dominance was total; for many citizens, Facebook was the internet. This monopoly, combined with a lack of localized content moderation, created a catastrophic failure mode.

  • Dehumanization at Scale: Military officials and nationalist monks used the platform to systematically dehumanize the Rohingya Muslim minority, referring to them as "fleas" or "dogs."
  • The "Zero-Rating" Trap: Programs that allowed free access to Facebook (but charged for access to the rest of the web) meant that users could not verify information on external news sites. They were trapped in a single, manipulated feed.
  • Result: The platform became the command-and-control infrastructure for a genocide, facilitating the organization of pogroms and the expulsion of hundreds of thousands of people. This case stands as the bleakest warning of what happens when a platform optimizes for engagement in a fragile ethnic context without adequate safeguards.

9. Historical Resonance: From Radio to Reddit

While the speed of digital radicalization is new, the dynamic is historically precedented. The introduction of radio in the 1930s offers a striking parallel to the rise of social media.

9.1 Father Coughlin: The Original "Influencer"

Father Charles Coughlin, the "Radio Priest" of the 1930s, commanded an audience of 30 million Americans. His career presaged the modern influencer model in every particular:

  • Bypassing Gatekeepers: Coughlin used the new technology of radio to speak directly to the people, bypassing the editorial control of newspapers.
  • Parasocial Loyalty: He built an intensely loyal following that sent him millions of letters and small donations, creating an independent financial base that insulated him from church and state pressure.
  • The Radicalization Arc: Like modern pundits, Coughlin began with populist economic critiques of the Great Depression but gradually drifted into virulent antisemitism and pro-fascist rhetoric. He spun conspiracy theories about "international bankers" that mirror the "globalist" narratives of today.
  • The "Fake News" Defense: When criticized by the press, he attacked the institutions of journalism themselves, framing them as controlled by the very enemies he was exposing.

9.2 The Lesson of the 1930s

The lesson from this era is that new media technologies destabilize democratic norms before society can develop the "antibodies" to manage them. It took a World War and the subsequent implementation of regulations (like the Fairness Doctrine) to re-stabilize the information ecosystem. We are currently in the "Coughlin phase" of the internet: the technology has empowered demagogues, and our regulatory and social immune systems have not yet caught up.

10. The Gamification of Terror

A unique and terrifying innovation of the digital age is the gamification of violence. This phenomenon describes how extremist actors repurpose the aesthetics, logic, and incentive structures of video games to promote real-world terrorism.

10.1 The "High Score" Logic

Beginning with the Christchurch shootings in 2019, attackers have increasingly treated mass murder as a performative quest.

  • Aesthetic Mimicry: Attackers use helmet cameras to livestream their atrocities, deliberately mimicking the visual perspective of "First-Person Shooter" (FPS) games like Call of Duty.
  • Leaderboards and Achievements: On fringe message boards like 8chan, users treat body counts as "high scores." They create "achievements" for attackers (e.g., specific types of kills) and debate the "tactical efficiency" of different loadouts.
  • The Copycat Incentive: This creates a competitive dynamic where each attacker seeks to outdo the previous one, either in casualties or in the "production value" of their livestream. The Buffalo shooter explicitly referenced the Christchurch attacker's "score" and manifesto, framing his own act as a continuation of the same game.

10.2 "Red-Pilling" as Gameplay

The process of radicalization itself is often framed as a game. The term "Red Pill"—referencing The Matrix—turns ideological conversion into an unlockable achievement. Online communities develop guides on "how to hide your power level" (conceal extreme views) and "how to red-pill normies." This gamification engages young men by offering them a sense of agency, secret knowledge, and progression that is often lacking in their offline lives. It transforms the horrific act of radicalization into a rewarding social activity.

11. Interventions and Future Trajectories

Can this ecosystem be fixed? The search for solutions spans regulatory, technical, and educational domains.

11.1 Regulatory Reform: The Section 230 Debate

In the United States, the legislative focus is on Section 230 of the Communications Decency Act, which shields platforms from liability for user-generated content. Proposals to "sunset" or reform this law aim to force platforms to take greater responsibility for the harms they facilitate. However, experts warn of unintended consequences: removing immunity could lead to massive over-censorship as platforms scrub any controversial content to avoid lawsuits, effectively killing the open internet. Alternatively, it could entrench the dominance of Big Tech, as only the largest companies could afford the necessary compliance teams.

11.2 Bridging Systems: A Technical Solution?

A more promising avenue lies in changing the algorithms themselves. Current recommender systems are "blind" to the social impact of the content they promote—they optimize only for engagement. Bridging Algorithms, proposed by researchers at the Knight Institute, introduce a new metric: "cross-partisan appeal."

  • The Mechanism: Instead of amplifying posts that are loved by one side and hated by the other (divisive content), a bridging algorithm amplifies posts that receive positive engagement from both sides of a divide (see the sketch after this list).
  • The Outcome: Experiments with systems like "Polis" in Taiwan and "YourView" in Australia show that this approach can surface consensus and reduce affective polarization. It artificially subsidizes the "middle ground" that the current economy suppresses.
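
To make the contrast concrete, here is a minimal, illustrative sketch of the difference between engagement ranking and bridging ranking. It is not the Knight Institute's or any platform's actual implementation; the `Post` fields, the per-side approval rates, and the choice of the minimum as the bridging score are all simplifying assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    total_engagement: int   # clicks/shares/replies: what current feeds optimize for
    approval_left: float    # share of left-leaning raters reacting positively (0-1)
    approval_right: float   # share of right-leaning raters reacting positively (0-1)

def engagement_rank(post: Post) -> float:
    """Status quo: amplify whatever generates the most raw engagement."""
    return float(post.total_engagement)

def bridging_rank(post: Post) -> float:
    """Bridging: amplify posts that draw positive reactions from *both* sides.
    Taking the minimum of the two approval rates means content loved by one
    side and hated by the other scores near zero."""
    return min(post.approval_left, post.approval_right)

posts = [
    Post("outrage_bait", total_engagement=12_000, approval_left=0.92, approval_right=0.04),
    Post("common_ground", total_engagement=3_500, approval_left=0.58, approval_right=0.61),
]

print(max(posts, key=engagement_rank).post_id)  # -> outrage_bait
print(max(posts, key=bridging_rank).post_id)    # -> common_ground
```

The design point is that the bridging score is deliberately low for content loved by one side and hated by the other, which is precisely the content an engagement-only objective tends to amplify.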

11.3 The Limits of Media Literacy

Finally, while "media literacy" is often touted as the solution, standard approaches can be ineffective or even counter-productive. Teaching people to "fact-check" or "question sources" often fails in the face of Identity-Protective Cognition. Indeed, it can lead to a weaponized skepticism where individuals learn to hyper-scrutinize only opposing views while giving their own side a pass. Future literacy interventions must focus on emotional skepticism—teaching users to recognize when their own outrage is being manipulated—rather than just cognitive fact-checking.

12. Conclusion: Surviving the Information War

The evidence presented in this report suggests that ideological coherence is a casualty of the current information war. The combination of profit-driven outrage optimization, the "gamification" of extremism, and the deep-seated tribalism of the human mind has created a perfect storm for incoherence. The "Alex Pretti" case is not an aberration; it is the new normal—a world where principles are discarded the moment they become inconvenient for the tribe.

However, fatalism is unwarranted. The history of media suggests that society eventually adapts to disruptive technologies: the radio demagogues of the 1930s were ultimately checked by new norms and regulations. Today, the emergence of Bridging Systems and a deeper understanding of Tribal Epistemology offer a roadmap out of the crisis. The challenge for liberal democracy is to redesign our digital architecture to reward bridge-building over bridge-burning, and to foster a citizenry that values the integrity of its principles more than the victory of its tribe.

Summary of Key Mechanisms

| Mechanism | Description | Impact on Coherence |
| --- | --- | --- |
| Outrage Optimization | Algorithms prioritize anger/fear to maximize ad revenue. | Displaces nuance; rewards extreme, binary framing. |
| Identity-Protective Cognition | Unconscious rejection of facts that threaten group status. | Allows individuals to hold contradictory beliefs to stay in the tribe. |
| Negative Partisanship | Identity defined by hatred of the "other." | Leads to "anti-identity" politics: supporting a position simply because the out-group opposes it. |
| Epistemic Closure | Psychological immunization against outside views. | Makes communities impervious to correction or fact-checking. |
| Gamification of Violence | Applying game logic (scores, achievements) to terror. | Trivializes violence; incentivizes copycat escalation. |
| Bridging Algorithms | Systems that reward cross-partisan consensus. | Potential intervention to reduce polarization and surface common ground. |

Claude

How profit-driven media creates feedback loops driving ideological extremism

The profit-driven media ecosystem—encompassing both algorithmic social platforms and cable news—systematically rewards outrage, tribal signaling, and emotional content while punishing nuance. This report synthesizes peer-reviewed research demonstrating that individuals can be incrementally led to positions their past selves would find abhorrent through a convergent set of psychological, economic, and technological mechanisms. The core dynamic is straightforward: engagement-based business models profit from provocation; algorithms amplify moral-emotional content because it generates engagement; and human psychology—evolved for small-group social signaling—proves exploitable at scale. Whether liberal democracy can survive an information ecosystem optimized for division remains an open and urgent question.


The attention economy monetizes outrage

The business model underlying both social media and cable news creates systematic incentives for provocation over accuracy. Platforms offer free services funded by advertising, meaning user attention is the product. Algorithms optimize for engagement (time on site, clicks, shares) because engagement equals advertising impressions equals revenue. Research consistently demonstrates that high-arousal emotional content—particularly outrage—generates the highest engagement.

Berger and Milkman's foundational 2012 study in the Journal of Marketing Research analyzed New York Times articles and found that physiological arousal, not emotional valence, drives virality. High-arousal emotions like anger and anxiety increase sharing; low-arousal emotions like sadness decrease it. Brady et al.'s 2017 PNAS study of 563,312 social media messages found that each moral-emotional word increased diffusion by 20%, with effects bounded by group membership—moral-emotional language spreads within ideological networks rather than between them.

The most striking finding comes from Rathje, Van Bavel, and van der Linden's 2021 PNAS analysis of 2.73 million posts: content referencing the political out-group was shared approximately twice as often as in-group content, with each out-group term increasing sharing odds by 67%. Out-group language proved 4.8 times stronger than negative affect and 6.7 times stronger than moral-emotional language as predictors of engagement. Milli et al.'s 2023 algorithmic audit confirmed these dynamics operate through platform design: 62% of political tweets selected by Twitter's algorithm expressed anger versus 52% in chronological feeds; 46% contained out-group animosity versus 38% baseline.
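
As a back-of-the-envelope illustration of how these per-word effect sizes add up, the snippet below assumes the reported odds ratios compound multiplicatively, which is the usual reading of logistic-regression coefficients; the function name and the word counts shown are illustrative choices, not anything taken from the studies themselves.

```python
def combined_odds_multiplier(per_term_odds_ratio: float, term_count: int) -> float:
    """Combined odds multiplier if each additional term scales the odds by the same factor."""
    return per_term_odds_ratio ** term_count

# ~1.67 per out-group term (Rathje et al. 2021); ~1.20 per moral-emotional word (Brady et al. 2017)
for k in range(4):
    print(f"{k} terms: out-group odds x{combined_odds_multiplier(1.67, k):.2f}, "
          f"moral-emotional odds x{combined_odds_multiplier(1.20, k):.2f}")
```

Under that assumption, even two or three out-group references multiply sharing odds several-fold relative to an otherwise identical post, which is broadly consistent with the headline finding that out-group content was shared roughly twice as often.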

Vosoughi, Roy, and Aral's landmark 2018 Science study of ~126,000 news stories found falsehood diffused "significantly farther, faster, deeper, and more broadly than truth in all categories," with the top 1% of false cascades reaching 1,000-100,000 people while truth rarely exceeded 1,000. Critically, humans—not bots—drove this asymmetry; people spreading false news had fewer followers, meaning the content itself generated virality.

McLoughlin et al.'s 2024 Science study demonstrated that misinformation exploits this dynamic: misinformation sources evoke more outrage than trustworthy sources, and users share outrage-evoking misinformation without reading it first. When people are in an outrage state, their discernment "goes out the window."


Filter bubbles exist but are smaller than commonly believed

Eli Pariser's 2011 "filter bubble" thesis—that personalization algorithms create isolated information universes—has become culturally influential but empirically overstated. The scholarly consensus, summarized in a 2022 Reuters Institute literature review, is that politically partisan echo chambers are "generally small—much smaller than often assumed" and that algorithmic ranking leads to "slightly more diverse news"—"the opposite of what the filter bubble hypothesis posits."

Gentzkow and Shapiro's 2011 Quarterly Journal of Economics study found that ideological segregation of online news consumption was significantly lower than face-to-face interactions with neighbors, coworkers, or family. Bakshy, Messing, and Adamic's 2015 Science study of 10.1 million Facebook users found that individual user choices played a stronger role than algorithmic ranking in limiting cross-cutting exposure: algorithms reduced cross-cutting content by only 5-8%, while user click choices reduced it by 70%.

This doesn't mean echo chambers don't matter—only that user choice rather than algorithmic imposition primarily drives homogeneous information diets. Flaxman, Goel, and Rao's 2016 analysis found social media and search engines were actually associated with increased exposure to material from users' less-preferred political side. The implication for radicalization research is significant: active seeking of extreme content may matter more than passive algorithmic filtering.


The radicalization pipeline is contested but real for vulnerable populations

Academic research on YouTube's "rabbit hole" phenomenon presents a genuinely contested picture, with major studies reaching opposing conclusions depending on methodology.

Evidence for algorithmic radicalization pathways:

Ribeiro et al.'s 2020 FAT conference study analyzed 330,925 videos across 349 channels and found users "consistently migrate from milder to more extreme content over time," with Alt-lite content easily reachable from Intellectual Dark Web channels via recommendations. Rebecca Lewis's 2018 Data & Society report documented an "Alternative Influence Network" of 65 political influencers spanning mainstream conservatism to overt white nationalism, connected through cross-promotion and guest appearances.

Evidence against universal radicalization:

Hosseinmardi et al.'s 2021 PNAS study using a representative panel of 300,000+ Americans found that news consumption on YouTube was "dominated by mainstream, largely centrist sources," with far-right consumers representing a "small and stable percentage." Crucially, 55% of far-right video referrals came from external URLs, homepage, or direct searches—not recommendations. Their 2024 follow-up using counterfactual bots concluded that "relying exclusively on the recommender results in less partisan consumption." Munger and Phillips's 2022 International Journal of Press/Politics study found viewership of far-right videos peaked in 2017 and declined before YouTube's algorithm changes—suggesting demand preceded supply.

The reconciliation may be that algorithms have limited effects on typical users but potentially significant effects on vulnerable individuals actively seeking extreme content. A systematic review found 14 of 23 studies implicated YouTube's recommender in problematic content pathways, while 7 showed mixed results.

Platform-specific dynamics differ markedly:

TikTok's short-form video format appears to accelerate exposure cycles. Shin and Jitkajornwanich's 2024 Social Science Computer Review algorithm audit found "manifold" pathways to far-right content with a "large portion ascribed to platform recommendations." A West Point Combating Terrorism Center 2025 report documented TikTok functioning as a "low-threshold gateway" into extremist ecosystems, with the Vienna Taylor Swift concert terror plot illustrating real-world consequences.

Piccardi et al.'s 2025 Science study provided causal evidence on Twitter/X: using a browser extension that reranked feeds in real-time, they found altering exposure to hostile content changed affective polarization by approximately 2 degrees on feeling thermometers—equivalent to roughly 3 years of natural polarization change.
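
The paragraph above describes a real-time reranking intervention; the sketch below shows the general shape of such a reranker under stated assumptions. The keyword-based hostility scorer is a crude placeholder (the actual experiment relied on a trained classifier), and the penalty weight is arbitrary.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class FeedItem:
    item_id: str
    platform_rank: int  # position assigned by the platform's own ranking
    text: str

def naive_hostility_score(text: str) -> float:
    """Crude keyword placeholder; a real system would use a trained classifier."""
    hostile_markers = {"traitor", "traitors", "evil", "destroy", "enemy", "enemies"}
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w in hostile_markers for w in words) / max(len(words), 1)

def rerank(feed: List[FeedItem],
           score_fn: Callable[[str], float] = naive_hostility_score,
           penalty: float = 100.0) -> List[FeedItem]:
    """Demote items in proportion to their hostility score while keeping the
    platform's original ordering as the baseline."""
    return sorted(feed, key=lambda item: item.platform_rank + penalty * score_fn(item.text))

feed = [
    FeedItem("a", 1, "They are traitors who want to destroy the country!"),
    FeedItem("b", 2, "New report on local infrastructure spending released today."),
]
print([item.item_id for item in rerank(feed)])  # -> ['b', 'a']: the hostile item is demoted
```

As the summary above notes, the study applied this kind of reranking to live feeds through a browser extension rather than by changing the platform itself, which is what made a causal estimate of the effect on affective polarization possible.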


Identity-protective cognition makes team loyalty override evidence

Dan Kahan's Cultural Cognition Project at Yale has produced the most rigorous research on how group identity shapes factual perception. His central finding: "The members of the public most adept at avoiding misconceptions of science are nevertheless the most culturally polarized." Higher cognitive proficiency and scientific literacy produce more polarization, not less, because these skills enable more effective motivated reasoning.

The mechanism is what Kahan terms "identity-protective cognition"—the tendency to selectively credit evidence that confirms group beliefs and dismiss evidence that contradicts them. This isn't irrationality but rather rational protection of social identity: the psychological costs of holding beliefs that conflict with one's cultural community outweigh abstract benefits of accuracy. As Kahan explains: "Nobody can do anything about climate change individually. But if they make a mistake in their own community, given that climate change has become a symbol of group loyalty, they could be in a lot of trouble."

Geoffrey Cohen's landmark 2003 Journal of Personality and Social Psychology study demonstrated this starkly: "Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one's political party. This effect overwhelmed the impact of both the policy's objective content and participants' ideological beliefs." Liberals supported stringent welfare policies when told Democrats endorsed them; conservatives supported generous policies when told Republicans backed them. Most tellingly, participants denied having been influenced while believing their ideological adversaries would be.


Gradual radicalization exploits commitment and consistency

The psychological mechanisms driving incremental radicalization are well-established: the foot-in-the-door technique, cognitive dissonance, and sunk cost fallacy operate synergistically.

Freedman and Fraser's classic 1966 study established that small initial commitments change self-perception. Homeowners who agreed to display a small "Be a Safe Driver" sign showed 76% compliance with a later request for a large, ugly sign, versus only 17% when asked directly. The mechanism is self-perception shift: "I am the kind of person who supports this cause."

Festinger's cognitive dissonance theory explains why public advocacy creates belief change. People experience psychological discomfort holding contradictory cognitions and are motivated to reduce this discomfort—often by changing attitudes to match behavior. Festinger and Carlsmith's 1959 finding that participants paid $1 to lie rated a boring task as more enjoyable than those paid $20 demonstrated how insufficient external justification drives internalization.

Arkes and Blumer's 1985 research on sunk cost showed people have "a greater tendency to continue an endeavor once an investment in money, effort, or time has been made." Applied to ideology: the more someone has publicly defended a position, the more psychologically costly abandoning it becomes. A 1976 study found business students who made adverse investment decisions were more likely to commit additional resources—prior mistakes increased rather than decreased future commitment.


Negative partisanship makes opposition define identity

Abramowitz and Webster's research documents one of the most important developments in American politics: "the rise of negative partisanship—the phenomenon whereby Americans largely align against one party instead of affiliating with the other."

Using American National Election Studies data, they found average ratings of the opposing party dropped from 45 degrees (1980) to 30 degrees (2012) on feeling thermometers, while own-party ratings remained stable. The consequence: dramatic increases in party loyalty and straight-ticket voting driven by hatred of the other side rather than affection for one's own.

Iyengar and Westwood's 2015 American Journal of Political Science scholarship experiment demonstrated behavioral consequences: when evaluating candidates for a scholarship, 79.2% of Democrats picked the Democratic applicant and 80% of Republicans picked the Republican—even when the out-party candidate had a significantly higher GPA (4.0 vs 3.5). The probability of selecting a more qualified out-party candidate never exceeded 30%.

Implicit Association Test data shows partisan bias is now more widespread than racial bias: approximately 70% of partisans show implicit bias favoring their party, with D-scores averaging 0.50 for partisan versus 0.18 for racial bias. Americans increasingly report being averse to their child marrying someone from the opposing party—rising from 4-5% in 1960 to one-third of Democrats and one-half of Republicans by 2010.

This produces ideological incoherence: when identity is defined primarily by opposition, policy positions become instrumental to tribal victory rather than expressions of principle. Voters follow party cues even when they contradict initial vote intentions and stated policy preferences.


How someone arrives at positions their past self would find abhorrent

The research reveals a multi-stage pathway:

Stage 1 - Initial small commitments: The foot-in-the-door effect establishes disposition toward "helpfulness" to one's group. Self-perception shifts: "I am the kind of person who supports this cause."

Stage 2 - Public advocacy creates dissonance pressure: Having publicly defended positions, individuals experience cognitive dissonance when confronted with contradictory evidence. The path of least resistance is attitude change to match behavior.

Stage 3 - Sunk cost investment: As public advocacy accumulates, abandoning defended positions becomes increasingly psychologically costly. "Humans want to be seen as consistent. Changing course feels like we have to admit we've made a mistake. It's easier sometimes to double down."

Stage 4 - Identity-protective cognition prevents honest evaluation: High-arousal content and tribal signaling activate identity-protective mechanisms. Evidence is processed not for accuracy but for group implications.

Stage 5 - Negative partisanship reframes principle abandonment as enemy defeat: When identity is defined by opposition rather than positive values, abandoning previously held principles can be rationalized as necessary to defeat the out-group.

Stage 6 - Audience capture completes the transformation: For those with public platforms, followers' expectations create external reinforcement for increasingly extreme positions. Writer Gurwinder Bhogal documents this as "the gradual and unwitting replacement of a person's identity with one custom-made for the audience."


Cable news and local news collapse nationalize conflict

Research demonstrates cable news has larger polarization effects than social media. Hosseinmardi et al.'s 2022 Stanford/Penn/Microsoft study found up to 23% of Americans were polarized via TV at peak (November 2016), with left-leaning TV audiences 10 times more likely to remain segregated than online audiences.

Martin and Yurukoglu's American Economic Review study found Fox News increased Republican vote shares by 0.3 percentage points among viewers induced to watch 2.5 additional minutes weekly, with the effect growing from 2000-2008 due to both increasing viewership and increasingly conservative slant.

The collapse of local news has nationalized political information. Since 2004, more than 2,900 newspapers have closed; half of US counties now have only one local outlet or none at all. Martin and McCrain's American Political Science Review study of Sinclair Broadcast Group acquisitions found substantial increases in national politics coverage at the expense of local coverage, with significant rightward ideological shifts.

The economic logic is documented by Harvard/MIT researchers: when cable news covers culture war issues, they gain audience from entertainment viewers; when they cover economics, people switch channels. Outrage entertainment is simply more profitable than informative journalism.


The influencer economy rewards extremism through audience capture

Content creators face systematic incentives toward increasingly extreme positions. Center for Democracy and Technology research found political content from influencers had 50-70% higher engagement than non-political content. This engagement premium drives the business model.

The "audience capture" phenomenon, coined by Eric Weinstein and popularized by Gurwinder Bhogal, describes how creators become "crude caricatures of themselves" as they calibrate to the most responsive feedback. Bhogal documents multiple cases: Maajid Nawaz evolved from careful counter-terrorism expert to conspiracy theorist writing about "shadowy New World Order"; Louise Mensch transformed from Conservative politician to concocter of increasingly speculative Trump-Russia theories; Dave Rubin shifted from progressive Young Turks host to Blaze TV personality.

The 2024 DOJ indictment of Tenet Media revealed Tim Pool, Dave Rubin, Benny Johnson, and others received nearly $10 million from Russian state media—demonstrating how ideological drift can align with external incentive structures even when creators claim to be unaware.

Cohen and Holbert's 2021 Communication Research study found parasocial relationships proved to be "a powerful predictor of Trump-Support, outperforming all other predictors including past voting behavior." These parasocial bonds insulate followers from counter-evidence and create audience expectations that further constrain creator behavior.


Historical parallels illuminate what's new

Media-driven radicalization has historical precedent. Thomas Paine's "Common Sense" reached approximately 1 in 5 colonial Americans; Father Coughlin attracted 30 million weekly radio listeners (25% of the US population) in the 1930s.

Tianyi Wang's American Economic Review study found a one standard deviation increase in exposure to Coughlin's anti-FDR broadcast reduced Roosevelt's vote share by approximately two percentage points in 1936. Adena et al.'s quantitative study of Nazi Germany found that after the Nazis seized control of radio in January 1933, their propaganda produced a 1.2 percentage point increase in Nazi vote share. Goebbels proclaimed radio the "eighth great power": "It would not have been possible for us to take power or to use it in the ways we have without the radio."

Contemporary global cases reveal common patterns. In Myanmar, Facebook played what the UN described as a "significant role" in the Rohingya genocide, with the platform functioning as the de facto internet as SIM card costs dropped from ~$1,000 to ~$1. A 2016 internal Facebook study found 64% of all extremist group joins were due to recommendation tools. In Brazil, 86% of false content during the 2018 election benefited Bolsonaro. In the Philippines, Facebook's Katie Harbath called the country "patient zero" in the global misinformation epidemic.

What's genuinely new: algorithmic curation means machines rather than humans primarily determine amplification; opacity makes these decisions invisible to users; feedback loops create self-reinforcing cycles; and speed and scale are unprecedented. Content that took days or weeks to distribute via pamphlets now spreads in minutes to billions.


The paradox of "free thinking" communities becoming credulous

Communities that pride themselves on skepticism frequently become highly credulous toward conspiracy theories—a phenomenon documented across the New Atheist movement, rationalist communities, and the "Intellectual Dark Web."

The mechanism involves asymmetric skepticism: in their quest to "question everything," conspiracy theorists "frequently accept unverified or false information from alternative sources without the same level of scrutiny they apply to mainstream narratives." The need for uniqueness and significance drives belief: conspiracy theories "highlight for believers their grievance and a culprit responsible for that grievance."

Van Prooijen et al.'s 2022 research found that suspicion of institutions "reduces trust between strangers, within-group cooperation... and increases prejudice, intergroup conflict, polarization, and extremism." This creates fertile ground for authoritarian manipulation: "the more that people lack confidence and trust in institutions, the more they're willing to buck norms and ignore institutions when it's good for their side."

Research on "crank magnetism"—the tendency for people who believe one conspiracy theory to believe many—suggests an underlying attraction to fringe beliefs independent of specific claims. Studies show COVID-19 conspiracy beliefs predicted later acceptance of Ukraine war conspiracy theories, functioning as "gateway theories."


Platform research reveals known harms and deliberate inaction

Frances Haugen's 2021 disclosures documented Facebook's internal awareness of platform harms. Internal research found 13.5% of teen girls said Instagram made thoughts of suicide worse; 17% said it worsened eating disorders. A 2016 presentation stated 64% of all extremist group joins came from recommendation tools. A 2017 task force found correlation between maximizing engagement and increasing polarization but concluded "reducing polarization would mean taking a hit on engagement."

Key scholars offer different emphases. Yochai Benkler (Harvard) argues in Network Propaganda that the right-wing media ecosystem operates "fundamentally differently" than the rest of the media environment, characterized by insularity and disconnection from professional journalistic norms. He contends social media is not the primary driver—seeds were planted with talk radio (1988) and Fox News (1996). Renée DiResta (formerly Stanford Internet Observatory) emphasizes that disinformation is an ecosystem problem where "people play as big a role as algorithms." Jonathan Haidt argues social media is "a major cause" of adolescent mental health crisis, though critics note his claims are not fully supported by available evidence.

The scholarly consensus on social media's causal role remains contested. Levy's 2021 American Economic Review field experiment found Facebook's algorithm "may limit exposure to counter-attitudinal news and thus increase polarization." But four 2023 Science/Nature studies found replacing algorithmic feeds with reverse-chronological feeds "did not significantly impact political polarization"—contradicting arguments that algorithms create filter bubbles beyond users' existing selection bias.

The emerging consensus: social media is likely a facilitator and amplifier of polarization rather than the root cause, with effects that are real but modest and concentrated among certain subgroups rather than universal.


What distinguishes people who resist radicalization

Research identifies several protective factors:

Media literacy and source diversity: People who consume news from multiple sources across the ideological spectrum show lower polarization. However, Bail et al.'s 2018 PNAS study found that exposure to opposing views can sometimes increase polarization among strong partisans—suggesting the manner of exposure matters.

Lower political engagement: Paradoxically, people less engaged with politics show less affective polarization. The most polarized are the most politically attentive.

Weak partisan identity: Those with weaker partisan identification are less susceptible to party-over-policy effects. Research on negative partisanship found effects were strongest among strong partisans.

Epistemic humility: Resistance to the certainty that characterizes both extremism and audience capture. Research on the contrarian-to-crank pipeline suggests that those who maintain genuine uncertainty rather than reflexive opposition to mainstream views are less susceptible.

Real-world social ties: Gentzkow and Shapiro's finding that online segregation is lower than face-to-face interaction suggests diverse real-world social networks may provide some protection against online echo effects.


Possible interventions and their limitations

Algorithmic interventions: Piccardi et al.'s 2025 Science study demonstrated that down-ranking hostile content decreased polarization—suggesting algorithmic changes could help. However, this conflicts with engagement-maximizing business models.

Friction interventions: Research shows prompting users to think about accuracy before sharing reduces misinformation spread. But McLoughlin et al.'s finding that users share outrage-evoking content without reading suggests such interventions have limits when emotional arousal is high.

Media literacy: Some evidence for protective effects, but Kahan's research suggests higher information processing capacity can increase rather than decrease polarization by enabling more effective motivated reasoning.

Local news restoration: Research consistently shows local news attenuates nationalization of politics. Martin and McCrain found voters exposed to more local news are less likely to apply national partisan judgment to down-ballot races.

Regulatory approaches: Platform transparency requirements, algorithmic auditing, and potential advertising-based revenue model changes remain largely theoretical in the US context.

Deplatforming: Evidence is mixed. Father Coughlin's removal from radio in 1939 ended his influence; modern deplatforming sometimes drives users to less-moderated spaces.


Synthesis: Algorithms, business models, human psychology, or all three

The research evidence points unambiguously to all three operating synergistically.

Human psychology provides the substrate: identity-protective cognition, tribal epistemology, foot-in-the-door compliance, cognitive dissonance, sunk cost commitment, negative partisanship, and need for significance all evolved in small-group contexts where they may have been adaptive.

Business models create incentives: advertising-funded attention economies profit from engagement, and outrage generates engagement. Both cable news (30-55% profit margins) and social media optimize for emotional content because it captures attention.

Algorithms amplify at scale: machine-driven curation surfaces high-arousal, moral-emotional, out-group-hostile content because such content generates the engagement that algorithms are trained to optimize. This creates feedback loops impossible in pre-digital contexts.

The combination produces emergent dynamics: psychological vulnerabilities that might have modest effects in small groups become exploited at scale by algorithmic amplification driven by profit-maximizing business models.


Can liberal democracy survive an information ecosystem optimized for division?

This question cannot be definitively answered, but the research provides cause for concern.

Liberal democracy depends on certain epistemic conditions: citizens must share enough common factual ground to deliberate, must extend good faith to political opponents, must prize procedural legitimacy over tribal victory, and must update beliefs based on evidence. Every one of these conditions is undermined by the dynamics documented above.

Negative partisanship produces a politics where defeating the enemy matters more than policy outcomes. Identity-protective cognition means shared facts become impossible when factual claims become tribal markers. Audience capture and ideological entrepreneurship reward the loudest and least principled voices. The collapse of local news nationalizes and polarizes political information. Platform business models profit from division.

Yet the research also offers hope. Filter bubbles are smaller than feared; algorithmic effects on typical users are modest; most people remain moderate. The populations most affected by radicalization dynamics appear to be minorities—though minorities who may exercise disproportionate influence.

The question is whether democratic institutions can withstand minority radicalization driven by profit-maximizing information systems, or whether the feedback loops documented here will eventually capture majorities. The 1930s comparison is instructive: radio enabled but did not determine fascism's rise; institutional responses and countervailing forces mattered. What remains unclear is whether contemporary democracies possess the will to reform information ecosystems that profit powerful interests, or whether the feedback loops have already captured too much of the political system itself.
