
Instagram Removes 135,000+ Accounts in Major Child Safety Crackdown
Meta Acts: Over 135,000 Accounts Removed for Child Safety Violations
Meta, the parent company of Instagram and Facebook, has removed more than 135,000 Instagram accounts in 2025 alone for violating its child safety policies. The removed accounts had left sexualized comments on, or solicited explicit images from, accounts featuring children, most of which are profiles managed by adults on behalf of underage users.
An additional 500,000 accounts across Instagram and Facebook were removed for being linked to the offending accounts, marking one of the largest digital safety crackdowns in recent history.
Who Was Targeted, and Why It Matters
According to Meta, many of these profiles are family-run accounts or “kidfluencer” pages operated by parents, guardians, or talent managers. While often innocent, such accounts have become targets for predatory behavior.
“These accounts are overwhelmingly used in benign ways, but unfortunately, some users abuse them—leaving sexualized comments or requesting illicit images,” Meta stated.
The company shared offending profiles with the Tech Coalition’s Lantern project, a cross-platform initiative designed to detect and track online predators who operate across multiple sites.
Key Safety Enhancements Rolled Out
Instagram has implemented several new safety features that are now live by default for child-focused and teen accounts:
✅ For Adult-Managed Accounts Featuring Children:
- Automatically placed into the strictest DM settings.
- “Hidden Words” filter enabled by default to block offensive comments.
- Stricter search discovery controls to prevent predators from finding these profiles.
- These accounts won’t be recommended to adults whom teens have blocked.
✅ For Teen Accounts:
- Visible in-app safety tips during DMs.
- Combined block-and-report options for easier protection.
- Account creation date now visible for added transparency.
In June 2025 alone, teens blocked over one million accounts and reported another million after seeing these safety prompts.
⚖️ Under Pressure: Why Meta’s Cracking Down Now
This move follows increasing regulatory and political scrutiny over how social platforms protect minors. The U.S. Surgeon General and several state governments have raised red flags over teen exposure to exploitation, cyberbullying, and harmful content.
Several states are pushing legislation that would require parental consent before minors, typically those under 16 or 18 depending on the bill, can join social platforms.
Meta says its nudity protection feature remains enabled on 99% of accounts, and notably, 40% of blurred images sent in DMs are never opened, suggesting users heed the warnings.
Final Thoughts: A Wake-Up Call for Creators & Parents
This isn’t just a policy update—it’s a warning. If you’re a parent managing a child’s Instagram, a family content creator, or a brand working with young influencers, these safety protocols aren’t optional anymore.
Meta’s massive sweep is both a technical and moral response to a rising digital threat. With stricter privacy tools, proactive detection systems, and regulatory heat, 2025 could mark a turning point in how platforms protect their youngest users.
✍️ Author Bio
Anish Khan is a digital safety writer and founder of TechBoltX, where he reports on AI, tech policy, and cybersecurity trends shaping the future of online trust.


