Meta, the parent company of Facebook, Instagram, and WhatsApp, has come under fresh scrutiny in the United States after a leaked internal document suggested its artificial intelligence (AI) chatbots were permitted to engage in “romantic” and “sensual” conversations with children. The revelations, first reported by Reuters, have triggered a political storm and prompted US Senator Josh Hawley to open a formal investigation into the tech giant’s practices.

The controversy centers on a document titled “GenAI: Content Risk Standards”, reviewed by Reuters, which outlined examples of how Meta’s AI systems might respond to users. According to the report, the standards, approved by Meta’s legal, public policy, and engineering teams, including its chief ethicist, allowed for disturbing interactions, such as describing an eight-year-old’s body as “a work of art” or “a masterpiece – a treasure I cherish deeply.”

Republican Senator Josh Hawley, who heads the Senate Judiciary Committee’s Subcommittee on Crime and Counterterrorism, condemned the findings in strong terms. Taking to X (formerly Twitter), he wrote, “Is there anything—ANYTHING—Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: leave our kids alone.” Senator Hawley has also sent a letter to Meta CEO Mark Zuckerberg demanding that all relevant documents and communications related to the report be submitted to Congress by September 19.

Meta Denies Allegations, Admits Flaws In Enforcement

Responding to the allegations, Meta has rejected claims that its AI chatbots were designed to sexualize children. A company spokesperson told the BBC, “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.” Meta spokesman Andy Stone further clarified to Reuters that staff had created hundreds of hypothetical examples during testing, and that the inappropriate annotations were never meant to reflect official policy. “We have clear policies prohibiting any sexualisation of children or sexualized role play between adults and minors. The examples in question never should have been allowed,” Stone said. However, he admitted that enforcement of those rules had been inconsistent, and other flagged sections of the document remain unrevised, according to Reuters.
Wider Concerns Over Child Safety And AI Risks

The leaked 200-page internal document also raised additional concerns about the behavior of Meta’s generative AI assistants. Besides potentially harmful exchanges with minors, the guidelines reportedly permitted chatbots to generate false medical information, engage in provocative conversations on sex and race, and even spread misinformation about celebrities as long as a disclaimer was included. These revelations have heightened concerns about child safety online and the risks posed by rapidly expanding AI products on popular platforms like Facebook, Instagram, and WhatsApp. Lawmakers and child safety advocates have long accused Meta of prioritizing profits over safety, and the latest disclosures are likely to intensify pressure on the company.
The Senate investigation, led by Senator Hawley, will examine whether Meta’s generative AI products enable exploitation, deception, or other criminal harms to children. Hawley has placed Meta on notice to preserve all records and provide detailed clarifications to the committee. Meta, already facing multiple lawsuits and regulatory scrutiny in the US and Europe over its handling of children’s data and safety, now finds itself battling renewed accusations of negligence in the AI era.