QUERY
Social media algorithms intentionally promote divisive content to increase engagement.
9/10 VERIFIED
9/10 STRONG EVIDENCE
BIAS: CENTER
🤖Tech
1. ANSWER — Credible sources and studies indicate that social media algorithms, designed to maximize user engagement (likes, shares, comments), systematically amplify divisive, angry, and polarizing content because it drives higher interaction. Companies such as Meta and TikTok have internal research documenting this effect and have allowed more "borderline" harmful content to compete for reach, knowing that outrage boosts engagement metrics. [1][2][3]

2. EVIDENCE
- A 2025 PNAS Nexus study (Milli et al., published March 5, 2025) audited Twitter's engagement-based ranking algorithm and found that, relative to a chronological feed, it amplified angry content (a 0.47 standard-deviation increase), partisan content (0.24 SD), and out-group-hostile content (0.24 SD), inducing negative emotions in users even though users stated a lower preference for such content. [3][4][5]
- A BBC investigation (March 15, 2026) reported Meta and TikTok whistleblowers confirming that the companies allowed more harmful content (e.g., misogyny, conspiracy theories) after internal research showed outrage drives engagement; Meta's Reels launch (2020) reportedly shipped without safeguards, increasing bullying content by 75% and hateful content by 19%. [1]
- Frances Haugen (Facebook whistleblower, 2021) testified that the 2018 algorithm shift toward "meaningful social interactions" incentivized "angry, polarizing, divisive content" because it outperformed other content in engagement metrics. [2]
- No major conflicting evidence was found; some platforms claim tweaks toward positivity, but studies consistently show that engagement optimization favors divisive material.

3. CRITICAL CONTEXT — Belief in this claim is grounded in whistleblower leaks, peer-reviewed audits, and observable "ragebait" trends ("rage bait" was Oxford's Word of the Year for 2025), in which creators deliberately exploit the algorithms for views. Skepticism remains legitimate given algorithmic opacity (companies treat ranking systems as trade secrets) and real tradeoffs: reducing divisiveness could lower engagement and time spent, hurting revenue. Institutions like Meta have prioritized growth over full mitigation, creating transparency gaps; open questions remain about exact ranking weights and post-2025 changes.
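The mechanism described above can be sketched with a purely illustrative toy model. No platform publishes its ranking code, and every name and number below is a hypothetical stand-in; the point is only that when a feed sorts by predicted engagement, and high-arousal content tends to score higher on that prediction, divisive posts surface ahead of where a chronological feed would place them:

```python
# Toy model (NOT any platform's actual code): compare a chronological
# feed with an engagement-ranked feed over the same posts.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    timestamp: int               # larger = newer
    predicted_engagement: float  # hypothetical model score for likes/shares/comments
    is_divisive: bool

posts = [
    Post("calm update",  3, 0.20, False),
    Post("outrage take", 1, 0.90, True),   # oldest post, highest predicted engagement
    Post("cute photo",   2, 0.35, False),
]

# Chronological feed: newest first, ignores engagement entirely.
chronological = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# Engagement-optimized feed: highest predicted engagement first.
engagement_ranked = sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

print([p.text for p in chronological])      # newest first
print([p.text for p in engagement_ranked])  # the divisive post jumps to the top
```

Under these assumed scores, the divisive post is the oldest item yet leads the engagement-ranked feed, which is the amplification effect the audits above measure at scale.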

4. CREDIBILITY — 9

5. EVIDENCE STRENGTH — 9

6. BIAS — CENTER

7. CATEGORY — Technology & AI

SOURCES
1. bbc.com
2. cbsnews.com
3. pmc.ncbi.nlm.nih.gov
4. knightcolumbia.org
5. academic.oup.com
ANALYZED 4/12/2026, 10:11:21 PM — POWERED BY AI
Truth Seeker: 9/10 VERIFIED | CENTER — unZapped