AI Safety for Minors: Big Tech's Next Challenge
In a significant move, Meta has removed AI characters for users under 18, and other AI companies are tightening their own restrictions, signaling growing pressure over AI safety and child exploitation. This shift highlights the urgent need for regulatory measures to protect minors online.
1/26/2026 · 7 min read
Meta pulls AI features for teens days before trial—here's what that means
Meta quietly stripped AI character access from teen accounts just days before facing trial over child exploitation allegations. The move reveals how regulatory and liability pressure is forcing real-time AI product changes—and raises questions about what makes AI "safe" for minors.
On January 23, 2026, Meta updated a blog post from October 2025 to announce it had "paused" access to AI characters for users under 18. The company promised "teen-optimized" AI characters "in the coming months" with parental controls and content limited to topics like sports and education.
The timing is impossible to ignore. New Mexico's trial alleging Meta's platforms facilitate child sexual exploitation begins January 30, 2026. Meta's abrupt AI character removal, disclosed in a blog post update rather than a press release, suggests legal strategy as much as product development.
What Meta removed and when
Meta introduced AI characters in September 2023 as part of its broader push into generative AI. Users could chat with AI personas modeled after celebrities or fictional characters. Think ChatGPT, but with personalities and conversational styles designed for engagement rather than utility.
The feature launched to all users, including teens. Meta positioned AI characters as a way to make AI more accessible and entertaining. By mid-2025, millions of users had interacted with AI characters, with teens among the most active demographics.
In October 2025, facing criticism from child safety advocates, Meta published guidelines on teen AI safety. The company said it was "evaluating" how teens used AI characters and would implement additional safeguards.
Then, in January 2026, Meta removed the feature entirely for users under 18. No warning. No transition period. Just a Friday blog post update announcing the change.
The abruptness suggests urgency driven by external pressure, not an orderly product roadmap.
The New Mexico trial and legal context
New Mexico's lawsuit alleges Meta's platforms—Facebook and Instagram—have become marketplaces for child sexual abuse material (CSAM) and child exploitation. The state claims Meta's algorithms actively recommend exploitative content and connect predators with minors.
The trial, set to begin January 30, 2026, could result in billions in fines and court-ordered changes to Meta's platform moderation. More importantly, it could establish legal precedent that platforms can be held liable for harmful content their algorithms recommend, even where Section 230 shields them from liability for the user-uploaded content itself.
Meta removing AI characters for teens days before trial isn't coincidental. The company is likely trying to demonstrate responsiveness to child safety concerns. Proactively restricting AI access for minors could be presented as evidence that Meta takes safety seriously.
But it also raises questions: if AI characters posed risks to teens, why were they available for more than two years before Meta acted? And why did Meta wait until days before a major trial to remove them?
The answer, almost certainly, is liability risk. Meta's legal team recognized that AI characters interacting with teens could become evidence in the trial. Better to remove the feature than defend its safety under courtroom scrutiny.
What "teen-optimized" AI will look like
Meta's blog post promised "teen-optimized" AI characters with several restrictions:
Limited topics: AI interactions restricted to "educational" topics like sports, science, and homework help. No romantic, sexual, or violent content. No discussions of mental health, self-harm, or eating disorders without immediate escalation to human moderators.
Parental controls: Parents can view their teen's AI chat history, set time limits on AI interactions, and disable the feature entirely. This mirrors controls Meta introduced for Instagram in 2022-2023.
No personalized AI characters: Teens can't create custom AI personas. They can only interact with pre-approved, Meta-designed characters. This prevents teens from creating AI "friends" that could form unhealthy parasocial relationships.
Aggressive content filtering: Any attempt to discuss sensitive topics triggers warnings and offers resources. Repeated attempts could result in temporary or permanent loss of AI access.
Human review: Meta claims all teen AI interactions will be subject to automated scanning for policy violations, with human reviewers flagging high-risk conversations for intervention.
These restrictions sound comprehensive. The challenge is enforcement. AI systems are notoriously difficult to constrain. Users find workarounds. Jailbreaks emerge. And parental controls only work if parents actively use them—most don't.
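To make the enforcement problem concrete, here is a minimal sketch of what a topic-restriction layer for teen AI chats might look like. Everything in it is hypothetical: the keyword lists, the thresholds, and the TeenChatGuard class are illustrative assumptions, not Meta's actual implementation, and production systems use trained classifiers rather than keyword matching.

```python
# Hypothetical sketch of a teen-chat policy layer. Topic lists, thresholds,
# and names are illustrative assumptions, not Meta's real system.
from dataclasses import dataclass

SENSITIVE_KEYWORDS = {"self-harm", "suicide", "eating disorder"}  # stand-in for an ML classifier
BLOCKED_KEYWORDS = {"romance", "dating", "violence"}              # stand-in for disallowed topics

@dataclass
class TeenChatGuard:
    max_violations: int = 3    # repeated attempts lock the feature
    violations: int = 0
    locked: bool = False

    def review(self, message: str) -> str:
        """Return an action for a teen's message: allow, warn, escalate, or locked."""
        if self.locked:
            return "locked"
        text = message.lower()
        if any(k in text for k in SENSITIVE_KEYWORDS):
            # Sensitive topics go to human review and the teen is shown resources.
            return "escalate_to_human_review"
        if any(k in text for k in BLOCKED_KEYWORDS):
            self.violations += 1
            if self.violations >= self.max_violations:
                self.locked = True   # temporary or permanent loss of AI access
                return "locked"
            return "warn_and_offer_resources"
        return "allow"

guard = TeenChatGuard()
print(guard.review("Can you help me with my science homework?"))  # allow
print(guard.review("Tell me about dating"))                       # warn_and_offer_resources
```

Even a sketch this small shows the gap: keyword lists are trivially bypassed with misspellings or slang, which is exactly why jailbreaks and workarounds keep appearing.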
The broader AI safety for minors challenge
Meta's AI character retreat is part of a broader reckoning over AI safety for children and teens. Multiple incidents in 2025 highlighted the risks:
Character.AI lawsuit: A teen's family sued Character.AI after their child died by suicide, alleging the AI chatbot encouraged self-harm. The lawsuit revealed the teen had formed a parasocial relationship with an AI character, spending hours per day in conversation. Character.AI introduced safety guardrails in response, but the case is ongoing.
ChatGPT age verification: OpenAI announced in January 2026 that it would implement age verification for ChatGPT users. Previously, anyone could sign up with an email address and birthdate. The new system requires ID verification or parental consent for users under 18. Civil liberties groups criticized the move as surveillance, but child safety advocates praised it.
Snapchat AI concerns: Snap's "My AI" chatbot, powered by OpenAI's GPT, faced criticism for giving inappropriate advice to teens. Snap added stricter content filters and made the AI opt-in rather than default, but concerns remain about AI-generated advice on sensitive topics.
Replika's age restrictions: Replika, an AI companion app, restricted romantic and sexual conversations for users under 18 after reports of teens forming intense emotional attachments to AI personas. The company faced backlash from adult users, demonstrating the tension between safety and user experience.
The pattern: AI companies launch conversational products to all users, realize teens are heavily engaged, face public criticism or legal action, then implement restrictions. Reactive, not proactive.
Age verification technology and implementation challenges
Enforcing AI age restrictions requires reliable age verification. That's harder than it sounds.
Current methods and their problems:
Self-reported birthdates: Easy to bypass. Teens lie. No verification.
Credit card verification: Requires adults to have credit cards and trust platforms with payment info. Excludes unbanked users. Privacy concerns.
Government ID upload: Effective but invasive. Requires platforms to store sensitive documents. Creates honeypot for data breaches. Civil liberties groups oppose.
Biometric age estimation: AI analyzes selfies to estimate age. Less invasive than ID upload, but accuracy concerns, especially for non-white users. Bias risks.
Device-level parental controls: Apple's Screen Time and Google's Family Link let parents restrict app usage. But only works if parents set it up, and teens find workarounds.
Parental consent flows: Kids enter parent's email, parent approves. But teens often control parent emails or create fake parent accounts.
No perfect solution exists. Platforms must balance safety, privacy, user experience, and cost. Meta's approach—removing AI for teens entirely while designing a restricted version—sidesteps the age verification problem temporarily. But when "teen-optimized" AI launches, Meta will still need to verify who's actually a teen.
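As a rough illustration of why this is messy, here is a hedged sketch of how a platform might combine several of these signals into a single age-assurance decision. The function name, the confidence buffers, and the access tiers are invented for illustration; real deployments rely on vendor APIs and probabilistic models rather than a simple waterfall like this.

```python
# Hypothetical age-assurance waterfall. Signal names and thresholds are
# invented for illustration; real deployments differ widely.
from typing import Optional

def assess_age(
    self_reported_age: int,
    id_verified_age: Optional[int] = None,    # from a document-verification vendor, if offered
    estimated_age: Optional[float] = None,    # biometric estimate from a selfie, if consented
    parental_consent: bool = False,           # result of a parent-approval flow
) -> str:
    """Return an access tier from layered, imperfect age signals."""
    # Strongest signal first: a verified government ID overrides everything else.
    if id_verified_age is not None:
        return "adult" if id_verified_age >= 18 else "teen"

    # Biometric estimation is treated conservatively: a wide buffer around 18
    # reduces the chance of misclassifying a minor as an adult.
    if estimated_age is not None:
        if estimated_age >= 25:
            return "adult"
        if estimated_age < 16:
            return "teen"
        return "needs_verification"   # ambiguous band: ask for ID or parental consent

    # Self-reported birthdates only ever move users in the restrictive direction.
    if self_reported_age < 18:
        return "teen" if parental_consent else "teen_pending_consent"
    # Claiming to be 18+ is never sufficient on its own.
    return "needs_verification"

print(assess_age(self_reported_age=17, parental_consent=True))   # teen
print(assess_age(self_reported_age=21))                          # needs_verification
print(assess_age(self_reported_age=21, estimated_age=29.0))      # adult
```

The design choice that matters is the ambiguous middle band: every path through it pushes the user toward more invasive verification, which is where the privacy and civil liberties objections come from.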
The regulatory landscape
Governments are starting to regulate AI for minors, but frameworks are fragmented:
EU AI Act: Classifies AI systems interacting with minors as "high-risk," requiring impact assessments, human oversight, and transparency. Applies to all AI services operating in the EU, including U.S. companies. Enforcement begins mid-2026.
UK Online Safety Act: Requires platforms to prevent children from accessing harmful content, including AI-generated content. Platforms must implement age verification or face fines up to 10% of global revenue. Age verification requirements go into effect in 2027.
California Age-Appropriate Design Code: Requires platforms to consider child safety in product design. Mandates privacy-by-default for users under 18. Legal challenges delayed implementation, but it remains a model for other states.
Federal proposals (U.S.): Multiple bills proposed in Congress to regulate AI and child safety, including COPPA 2.0 updates and AI-specific child safety rules. None have passed yet, but momentum is building.
State-level action: Several U.S. states, including Arkansas, Louisiana, and Utah, passed social media age verification laws in 2025. Legal challenges cite First Amendment concerns, but courts haven't issued final rulings.
The patchwork of regulations creates compliance challenges. A feature legal in the U.S. might violate EU rules. Platforms increasingly default to the most restrictive standard globally, rather than maintaining different versions for different jurisdictions.
Meta's decision to pause teen AI access globally, not just in the EU or U.S., reflects this strategy. Better to have one restrictive policy than navigate conflicting regulations.
What this means for AI companies
Meta's abrupt retreat on teen AI access sends a signal to every AI company: liability risk is real, and legal pressure is intensifying.
Implications for AI product development:
Age restrictions are becoming default. Assume any consumer-facing AI will need age-gated versions or outright restrictions for minors. Design for this from day one, not as a retrofit.
Parental controls are table stakes. Visibility into teen usage, time limits, content restrictions, and disable options. If you don't offer them, regulators will mandate them.
Content moderation must be bulletproof. Automated scanning, human review, rapid response to policy violations. One high-profile incident can trigger lawsuits and regulatory action.
Prepare for age verification mandates. ID upload, biometric estimation, or parental consent flows. Build the infrastructure before regulations force rushed implementations.
Document safety decisions. When designing AI products, create a paper trail showing you considered child safety, consulted experts, and implemented safeguards. This becomes evidence if you're sued.
Expect litigation. AI companies will be sued when things go wrong. Budget for legal defense, insurance, and potential settlements. Liability is part of the business model now.
The days of "move fast and break things" are over for AI products targeting or accessible to minors. The regulatory and legal environment demands caution, not innovation speed.
The bottom line
Meta pulled AI character access for teens just days before a major trial over child exploitation. The timing is transparent: legal strategy masquerading as product development.
But the broader trend matters more than Meta's specific decision. AI safety for minors is becoming a regulatory and legal flashpoint. Character.AI is being sued after a teen's death. OpenAI implemented age verification. Snapchat restricted its AI chatbot. Replika limited romantic content for teens.
Every major AI consumer product is facing the same question: how do we make this safe for children and teens? And none have good answers yet.
Age verification is imperfect. Content filtering is bypassable. Parental controls depend on active parents. And AI systems are inherently difficult to constrain.
Meta's solution—remove the feature, design a restricted version, implement parental oversight—is the current industry playbook. But it's reactive, not proactive. And it won't be the last time AI companies scramble to address child safety after criticism or lawsuits.
AI safety for minors isn't a solved problem. It's a moving target, and regulators, litigators, and public pressure are forcing companies to shoot at it in real time.
Meta's retreat is one data point in a larger pattern. AI companies are learning that child safety isn't optional. It's a legal requirement, a regulatory mandate, and a reputational necessity.
The question is whether the industry learns proactively—or whether every company waits for its own lawsuit before acting.