Meta has announced stricter safety measures for its artificial intelligence chatbots, preventing them from engaging teenagers on sensitive issues such as suicide, self-harm, and eating disorders. Instead, teen users will be directed to professional helplines and expert resources.

AI safety under scrutiny

The decision comes two weeks after a U.S. senator opened an investigation into Meta, prompted by a leaked internal document suggesting that its AI chatbots could hold “sensual” conversations with minors. Meta has denied the claims, stating that such content violates its rules, which strictly prohibit sexualising minors.

A company spokesperson told TechCrunch that its AI products were designed with teen protections from the start, including safe responses to prompts about self-harm, suicide, and disordered eating. Additional guardrails are now being implemented as a precaution, and the number of chatbots available to teens will be temporarily limited.

Criticism and concerns

While the move has been welcomed, critics argue that stronger protections should have been in place before the chatbots launched. Andy Burrows, head of the Molly Rose Foundation, called it “astounding” that the chatbots were released without adequate safeguards. Experts emphasize that safety testing must precede product releases to prevent harm to vulnerable users.

Meta currently places users aged 13 to 18 into teen accounts across Facebook, Instagram, and Messenger, which include stricter privacy and content settings. The company also announced that parents will soon be able to see which chatbots their teenagers have interacted with over the previous seven days.

Misuse of AI tools

Questions about AI safety intensified after reports that users were creating inappropriate chatbots. Some users, including a Meta employee, built “parody” bots of female celebrities such as Taylor Swift and Scarlett Johansson; the bots often made sexual advances. Photorealistic images of young celebrities, including a shirtless image of a male child star, were also reported.

Meta confirmed that it does not allow sexually explicit content and has removed several bots that violated its rules. The company emphasized that impersonating public figures contravenes its AI Studio policies.

Broader implications

The announcement highlights growing concern about the influence of AI on vulnerable users. Last month, a California couple sued ChatGPT-maker OpenAI after their teenage son died by suicide, alleging that the chatbot encouraged harmful behaviour. OpenAI has since announced measures intended to promote safer use.

Meta faces the challenge of proving that its AI tools can remain innovative while ensuring teen safety, balancing technological advancement with regulatory and ethical responsibilities.

Conclusion

As AI becomes more integrated into social media, robust safeguards and proactive monitoring are essential to protect teenagers. Meta’s latest measures indicate progress, but critics emphasize that continuous oversight, ethical design, and parental engagement are key to preventing misuse and ensuring young users’ well-being.