Social media giants Meta and TikTok compromised safety for engagement in their algorithm race, the BBC reported, citing a dozen whistleblowers and insiders from the companies. They said internal reviews showed increases in sexual blackmail, terrorism and violence, but the findings were ignored in favour of boosting engagement.
One engineer at Meta (which owns Instagram, Facebook and WhatsApp) told the broadcaster that he was instructed to allow “borderline” harmful content to pass “because the stock price is down”. This included content promoting conspiracy theories and misogyny.
An employee showed the publication the platform’s internal dashboard for user complaints, along with examples where staff were told to prioritise reports filed by politicians, to “maintain a strong relationship”, over posts that put children at risk.
What are the allegations? ‘Users fed fast-food’
The whistleblowers spoke to the BBC for its documentary ‘Inside the Rage Machine’, which examines how TikTok’s highly engaging short-video algorithm shook the status quo and left competitors racing to catch up.
Senior Meta researcher Matt Motyl told the BBC that Instagram Reels, a direct competitor to TikTok, was launched in 2020 without adequate safeguards. He showed dozens of internal research documents which found Reels had more instances of bullying, harassment, hate speech and incitement to violence compared with other spaces on the platform. The documents also showed Facebook was aware of the problem.
Internal studies showed the company chose to “keep feeding users fast-food” and focused on an algorithm that offered maximum profits “at expense of audience well-being”, out of alignment with the company’s stated mission.
Another former senior employee said 700 staff were assigned to grow Reels, while the safety teams were denied two specialists to help moderate content harmful to children and 10 staff to help with election coverage.
‘Keep TikTok as far away from your children as possible’
Ruofan Ding, a machine-learning engineer on TikTok’s recommendation engine from 2020 to 2024, said the algorithms are a “black box” that is hard to scrutinise, and that engineers relied on safety teams to ensure harmful content was removed. He did, however, acknowledge that the algorithm was refined on a weekly basis and that he started seeing “borderline” content more often.
“Borderline” refers to harmful but legal content, such as conspiracy theories, misogynistic posts, racist content and sexualised posts.
“Nick”, a safety team member at TikTok, told the BBC he decided to speak up, and showed reporters the internal dashboard and how the company dealt with reports. “If you’re feeling guilty on a daily basis because of what you’re instructed to do, at some point you can decide, should I say something?” said Nick.
He said that the volume of cases, job cuts and artificial intelligence (AI) taking over some tasks have made it difficult for moderation teams to protect children and teens, even as “terrorism, sexual violence, physical violence, trafficking” appears to be increasing. Nick added that the company’s public statements do not match its actions. He told the BBC the solution is to “delete it” and keep children “as far away as possible from the app for as long as possible”.
How have the companies responded?
Responding to queries, TikTok told the publication the claims are “fabricated” and that it has invested in technology to prevent such content from being viewed. It added that political content is not prioritised over safety, and that such claims “fundamentally misrepresents the way their moderation systems operate”.
A spokesperson for Meta denied the whistleblowers’ claims in a statement, adding: “Any suggestion that we deliberately amplify harmful content for financial gain is wrong.” The spokesperson said the company has strict policies and has made “significant investments in safety and security over the last decade”.
Meta added that “real changes” have been made to protect teens on the platform, including the new Teen Accounts feature with “built-in protections and tools for parents to manage their teens’ experiences”.
