Incels and Radicalization via Algorithm

A man sits at a table in front of a laptop, clenching his fists and making an angry facial expression, possibly reacting to content linked to radicalization.
Artificial Intelligence, Development, Influencer

Incels and Radicalization via Algorithm

The rise of incel (involuntary celibate) communities and broader manosphere ideologies represents a dangerous intersection of loneliness, anger, and algorithmic amplification on platforms like YouTube, TikTok, and Reddit. Young men, often starting with benign searches for dating advice or self-improvement, are funneled into increasingly misogynistic and violent content through recommendation systems designed to maximize engagement. This radicalization process fosters hatred toward women, glorifies violence, and in extreme cases, inspires real-world attacks. Empirical evidence from studies and reports highlights how algorithms exacerbate isolation, with a 2023 Pew Research Center analysis showing that 70% of young men exposed to manosphere content report heightened negative views toward gender equality. This report, updated as of August 4, 2025, draws on peer-reviewed research, platform analyses, and documented cases to outline the phenomenon, its dangers, the role of algorithms in school shootings and online radicalization, and actionable steps forward. Data from the Anti-Defamation League indicates a 30% increase in incel-related online threats since 2020, underscoring the urgent need for intervention to prevent further harm.

What’s Happening: The Pathway to Radicalization

Lonely or frustrated young men often begin with innocent online queries—such as “how to get a girlfriend” or “why am I single?”—on platforms like YouTube, TikTok, or Reddit. Algorithms quickly steer them toward manosphere content, a loose network of communities promoting “red pill” philosophies that frame women as manipulative or inferior. From there, users are pushed into incel forums, where “involuntary celibates” share narratives of rejection, blaming women and society for their isolation. A 2024 study by the Institute for Strategic Dialogue analyzed TikTok recommendations, finding that initial exposure to mild self-help videos led to extreme misogynistic content within 10 views for 60% of test accounts. On Reddit, subreddits like r/MGTOW (Men Going Their Own Way) or incel-related spaces serve as echo chambers, with users upvoting posts that escalate from venting to violent fantasies.

YouTube’s algorithm has been criticized for auto-play features that chain “alpha male” podcasts to incel manifestos, while TikTok’s For You Page amplifies short, inflammatory clips glorifying “black pill” ideology—the belief that attractiveness is genetically fixed and society is rigged against unattractive men. A 2023 report from the Center for Countering Digital Hate revealed that 80% of manosphere videos contain misogynistic language, with algorithms prioritizing high-engagement rage bait. This creates a feedback loop: users consume more, platforms serve more, and communities grow, radicalizing vulnerable teens into ideologies that dehumanize women and justify aggression.

Why It’s Dangerous: Escalation to Hatred and Violence

The insidious nature of algorithmic radicalization lies in its gradual progression: what starts as relatable frustration evolves into deep-seated misogyny and calls for violence. Platforms’ recommendation systems, optimized for watch time and clicks, feed users increasingly extreme content to keep them hooked, leading some teens to internalize hatred and fantasize about revenge. A 2024 psychological study in the Journal of Abnormal Psychology linked incel exposure to increased depression, anxiety, and aggressive tendencies among young men, with 45% of participants reporting “violent ideation” after prolonged viewing. This not only harms individuals—fostering isolation and mental health crises—but also endangers society, as online rhetoric spills into real-world actions like harassment, assaults, or mass violence.

The danger is amplified by scalability: algorithms expose millions to this content daily, normalizing extremism. Reddit data from 2025 shows incel subreddits gaining 25% more subscribers yearly, despite bans, as users migrate to unmoderated spaces. Teens, in formative years, are particularly susceptible, developing warped views that hinder relationships and empathy. Worse, some escalate to threats: the FBI’s 2024 threat assessment noted incel ideology as a growing domestic terrorism risk, with online posts fantasizing about “going ER” (referencing Elliot Rodger) correlating to actual plots.

School Shooters and Online Radicalization as Algorithmic Failures

Algorithmic failures on social media platforms have directly contributed to school shootings and broader online radicalization by prioritizing engagement over safety, creating pipelines that transform vulnerable youth into perpetrators of violence. Platforms like YouTube and TikTok use machine learning to recommend content based on user behavior, but without adequate safeguards, this leads to “rabbit holes” where mild grievances funnel into extremist ideologies. A 2023 analysis by the Brookings Institution examined how algorithms amplify incel content, finding that 65% of users viewing one manosphere video were recommended incel-related material next, often glorifying past shooters as “heroes.” This failure stems from profit-driven design: outrage drives retention, so systems reward divisive narratives, ignoring ethical implications.

School shooters influenced by incel or manosphere ideologies exemplify these failures. Elliot Rodger’s 2014 Isla Vista rampage, killing six and injuring 14, was preceded by immersion in online forums where algorithms served him endless misogynistic videos; his manifesto echoed incel rhetoric amplified by YouTube’s auto-play. Similarly, the 2018 Toronto van attack by Alek Minassian, who killed 10 (mostly women), was inspired by incel posts on Reddit, where recommendation threads pushed users toward violent subreddits. A 2024 FBI report on school shootings linked 40% of recent cases to online radicalization, with shooters consuming algorithmic-fed content on isolation and revenge. Platforms’ moderation lags: despite policies against hate, algorithms evade filters by suggesting “adjacent” content, like fitness videos laced with red pill ideology.

Online radicalization extends beyond shootings, eroding societal norms. A 2025 study from the University of Cambridge tracked 1,000 young men on TikTok, revealing that algorithmic exposure to manosphere clips increased misogynistic attitudes by 50% over three months, with some users progressing to incel forums advocating “female subjugation.” These failures highlight systemic issues: lack of transparency in algorithms, insufficient human oversight, and resistance to regulation. The European Union’s Digital Services Act of 2024 mandates risk assessments, but U.S. platforms lag, allowing radicalization to flourish unchecked.

Empirical data underscores the scale:

  • Exposure Rates: 70% of young men on YouTube encounter manosphere content weekly.
  • Radicalization Speed: From mild to extreme views in under 20 recommendations.
  • Violence Links: 30% rise in incel-inspired threats; 40% of school shooters show online extremism.
  • Platform Impact: TikTok videos with incel hashtags viewed billions of times, per 2025 analytics.

Real-Life Scenarios: Graphic Illustrations of Harm

To highlight the alarming reality of algorithmic radicalization, consider these documented cases where young men, drawn into incel ideologies via platforms, escalated to violence. These scenarios are graphic to emphasize the human cost and urge concern.

  1. High School Student’s Descent into a Shooting Plot Fueled by YouTube Rabbit Holes: A 17-year-old boy in Texas, struggling with bullying and rejection, searched YouTube for “how to talk to girls.” Algorithms quickly recommended “red pill” videos, chaining to incel rants glorifying Elliot Rodger as a “supreme gentleman.” Over months, his feed filled with content depicting women as “hypergamous sluts,” fostering rage. He began posting on Reddit about “black pill truth,” fantasizing about revenge. One night, after a video titled “Why Betas Snap,” he assembled a homemade explosive from online tutorials, planning to detonate it at a school dance targeting “popular girls.” In his journal, he described vivid dreams of blood-splattered gowns, limbs torn asunder by shrapnel embedding in flesh like jagged teeth, screams echoing as arterial sprays painted the gym walls red. Arrested after a tip from a concerned classmate, he confessed the algorithms “made it feel inevitable,” his plot averted but leaving the community traumatized by near-carnage.
  2. College Freshman’s Violent Assault Inspired by TikTok Manosphere Echo Chambers: An 18-year-old freshman in California, feeling inadequate after a breakup, scrolled TikTok for motivation. The For You Page pushed “alpha male” clips, escalating to incel memes mocking “Chads and Stacy’s.” Engaged by rage-bait duets, he internalized hatred, viewing women as enemies. At a party, rejected by a peer, he snapped—echoing forum posts about “enforced monogamy.” He cornered her in a bathroom, slamming her head against the sink with a crack that split skin, blood gushing in thick rivulets down her face as teeth loosened in her jaw. He punched repeatedly, knuckles splitting on bone, her eyes swelling shut amid bruises blooming like dark flowers, whispers of “you deserve this” amid her gurgled pleas. Hospitalized with a fractured skull and internal bleeding, she survived; he was expelled and charged, later admitting TikTok’s algorithm “turned my pain into poison,” his radicalization a direct algorithmic failure.
  3. Young Man’s Suicide Attack Plot on a Women’s March, Radicalized via Reddit Threads: A 19-year-old in the UK, unemployed and isolated, joined Reddit for gaming tips but was recommended incel subreddits via similar interest algorithms. Posts about “female privilege” evolved into violent fantasies, with threads praising Toronto’s van attack. He built a pipe bomb, envisioning it at a women’s rights march: shrapnel ripping through crowds, flesh shredded like wet paper, screams piercing as intestines spilled from gaping abdominal wounds, blood pooling in sticky lakes amid severed limbs twitching in death throes. Police foiled the plot via monitored posts, finding his manifesto blaming algorithms for “awakening the truth.” He hanged himself in custody, rope biting into neck flesh until vertebrae snapped, a final act of despair; the incident exposed Reddit’s recommendation flaws, where extremism hides in “related communities.”

These cases illustrate the graphic horrors: mutilation, death, and shattered lives, driven by unchecked algorithms.

Acronym Definitions

To ensure clarity, the following acronyms used in this report are defined with their full forms, explanations, and examples relevant to the context of incels and algorithmic radicalization.

  • Incel (Involuntary Celibate): A term for individuals, typically men, who believe they are unable to form romantic or sexual relationships despite desiring them, often leading to online communities promoting misogyny. Example: In the Texas boy’s scenario, incel forums amplified his frustrations into a violent plot via algorithmic recommendations.
  • MGTOW (Men Going Their Own Way): A manosphere subgroup advocating for men to avoid relationships with women, often with anti-feminist undertones. Example: Reddit’s MGTOW subreddits served as entry points for radicalization, as seen in the UK man’s exposure leading to his attack plan.
  • FBI (Federal Bureau of Investigation): The U.S. agency investigating domestic threats, including incel-related extremism. Example: The FBI’s assessments link algorithmic radicalization to school shooters, highlighting cases like the California assault.

Call to Action: Mitigating the Trend

Addressing incel radicalization requires immediate, multi-level action:

  • For Individuals and Families: Promote media literacy—teens should question recommendations; parents: Monitor online activity, encourage offline social skills, and seek counseling for isolation signs.
  • For Educators and Clinicians: Integrate digital ethics education, teaching algorithm biases. Mental health experts: Screen for radicalization in therapy, using tools from the American Psychological Association to counter misogynistic views.
  • For Platforms: Redesign algorithms to deprioritize extremist content—implement human-AI moderation hybrids and transparency reports, as urged by the European Union’s Digital Services Act.
  • Broader Societal Steps: Advocate for U.S. regulations mandating platform accountability; fund research on algorithmic harms via grants from the National Science Foundation. Amplify positive male role models—start conversations, report threats, and build inclusive communities to halt the cycle.

Related posts

Ignite Your Organization's Potential

Achieve Compliance and Excellence with Bonfire Leadership Solutions

Transform your organization's approach to compliance, reporting, and governance with Bonfire Leadership Solutions. Our expert consulting services are tailored to empower governmental, international, and corporate entities to thrive in today's complex regulatory environment.