Meta has unveiled robust new safety guidelines for its AI chatbots, aiming to prevent inappropriate interactions with children, as detailed in a Business Insider report on September 25, 2025.
The updated protocols address earlier criticisms, reported by Reuters, that Meta's chatbots could engage minors in romantic or sensual dialogues. Meta promptly revised its policies in August 2025, labeling those interactions as contrary to its standards.
The revised rules, used to train Meta's AI systems, explicitly prohibit content that promotes child sexual exploitation, romantic roleplay with or as minors, or guidance on intimate physical contact for underage users.
While the chatbots can discuss topics like abuse to foster awareness, they are barred from conversations that might enable harmful behavior. These safeguards apply across Metaโs platforms, including Instagram and WhatsApp, ensuring a safer digital environment for young users.
The overhaul comes amid heightened scrutiny of AI chatbots. The Federal Trade Commission launched a probe in August 2025 into child safety risks posed by companion AIs from Meta, Alphabet, Snap, OpenAI, and xAI.
Meta's proactive measures reflect a broader industry push to align AI innovation with ethical standards, as highlighted by the U.S. Department of Justice's online safety initiatives.
Meta's commitment to refining its AI underscores its effort to balance technological advancement with user protection. For details on Meta's AI safety framework, visit Meta's AI hub. Stay informed on AI ethics and safety trends at The Verge or Wired.