Skepticism Greets Meta’s Plan for PG-13 Instagram Experience for Teens

Meta’s announcement of a new PG-13-style safety system for Instagram has been met with significant skepticism from child safety advocates. While the company is promoting it as a major step forward, critics point to a history of ineffective tools and are demanding more than a PR announcement.
The proposed system will automatically place all users under 18 into a more restrictive “13+” content setting. This setting will filter out profanity, risky stunts, and other sensitive material. Teens will need their parents’ permission to opt out.
However, this move comes just after an independent report, involving a former Meta whistleblower, found that 64% of Instagram’s new safety tools were ineffective. The report’s conclusion was stark: “Kids are not safe on Instagram.” This context is fueling the current wave of doubt.
Rowan Ferguson of the Molly Rose Foundation voiced this skepticism, stating, “Time and again Meta’s PR announcements do not result in meaningful safety updates for teens.” The foundation and other critics are calling for transparency and the ability for independent researchers to test the new features to verify their effectiveness.
Despite the criticism, Meta is moving forward with the rollout, starting in the US, UK, Canada, and Australia. The core conflict remains: Meta claims it has robust tools, while its critics demand independent proof that they actually work to protect children.