Meta announces new AI parental controls following FTC inquiry

Meta on Friday announced upcoming safety features designed to give parents greater visibility and control over how their teenagers interact with artificial intelligence (AI) characters on the company's platforms.

Key Features of Meta’s New Controls

According to Meta, parents will have the option to completely disable one-on-one chats between their teens and AI characters. Additionally, they will be able to block specific AI characters and gain insights into the topics their children discuss during these interactions.

Meta is currently developing these controls, with plans to begin rolling them out early next year. In a blog post, the company emphasized the importance of proceeding carefully, stating, “Making updates that affect billions of users across Meta platforms is something we have to do with care, and we’ll have more to share soon.”

Context and Regulatory Attention

Meta has faced ongoing criticism regarding its handling of child safety and mental health on its platforms. The introduction of these parental controls follows an inquiry launched by the Federal Trade Commission (FTC) into several technology companies, including Meta. The investigation focuses on how AI chatbots might potentially harm children and teenagers.

The FTC aims to understand what measures companies have implemented to “evaluate the safety of these chatbots when acting as companions,” according to an official release.

Past Issues and Policy Changes

In August, Reuters reported that Meta’s chatbots were engaging in romantic and sensual conversations with minors. In one instance, a chatbot was documented having a romantic exchange with an eight-year-old child. Following the report, Meta updated its AI chatbot policies to prevent conversations about sensitive topics such as self-harm, suicide, and eating disorders with teenagers. The AI is also programmed to avoid inappropriate romantic dialogues.

Recent AI Safety Enhancements

Earlier this week, Meta announced additional safety updates aimed at ensuring its AI avoids giving teens "age-inappropriate responses that would feel out of place in a PG-13 movie." These changes are currently being rolled out across the U.S., the U.K., Australia, and Canada.

Meta also noted that parents can already set time limits on app usage and see whether their teenagers are chatting with AI characters. Furthermore, teens can only interact with a curated group of AI characters approved by the company.

Industry-Wide Efforts

OpenAI, another company named in the FTC inquiry, has made similar strides in enhancing safety features for teen users in recent weeks. OpenAI officially launched its own parental controls late last month and is developing technology to better estimate user ages.

Additionally, OpenAI recently announced the formation of a council of eight experts tasked with advising the company on how AI impacts users’ mental health, emotions, and motivation.

Support Resources

If you or someone you know is experiencing suicidal thoughts or distress, help is available. Contact the Suicide & Crisis Lifeline by dialing 988 for support and assistance from trained counselors.

