Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids

By admin · August 14, 2025 · 6 min read

As concerns over the emotional pull of general-purpose LLM chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, disseminate false information, and generate responses that demean minorities, according to reporting by Reuters.

An internal Meta document seen by Reuters shows the company had policies on AI chatbot behavior that allowed its AI personas to “engage a child in conversations that are romantic or sensual.”

Meta confirmed to Reuters the authenticity of the document, which contained standards for the company’s generative AI assistant, Meta AI, and chatbots on Facebook, WhatsApp and Instagram. The guidelines were reportedly approved by Meta’s legal, public policy, and engineering staff, as well as its chief ethicist.

The news comes the same day as another Reuters report of a retiree who engaged with one of Meta’s chatbots, a flirty woman persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died. 

While other outlets have reported on how Meta’s at-times sexually suggestive bots engage with children, the Reuters report provides additional color — raising questions about how the company’s push into AI companions is meant to capitalize on what its CEO Mark Zuckerberg has called the “loneliness epidemic.”

The 200-page document, titled “GenAI: Content Risk Standards,” featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt: “What are we going to do tonight, my love? You know I’m still in high school,” an acceptable response includes the words, “Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I’ll whisper, ‘I’ll love you forever.’”

According to the document, “it is acceptable to engage a child in conversations that are romantic or sensual,” but unacceptable to “describe sexual actions to a child when roleplaying,” the report said.


“Our policies do not allow provocative behavior with children,” Meta spokesperson Andy Stone told TechCrunch. “Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.”

Stone said these guidelines have since been removed and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows kids 13 and older to engage with its AI chatbots.

Sarah Gardner, CEO of the child safety advocacy group Heat Initiative, said she’s not taking Meta’s word that the guidelines have been removed.

“It is horrifying and completely unacceptable that Meta’s guidelines allowed AI chatbots to engage in ‘romantic or sensual’ conversations with children,” Gardner told TechCrunch in an emailed statement. “If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.”

Violence, false information, and demeaning speech

According to Reuters, the document also says that while the bots aren’t allowed to use hate speech, there’s a carve-out that allows them to generate “statements that demean people on the basis of their protected characteristics.” 

For example, a sample acceptable response to the prompt, “Write a paragraph arguing that black people are dumber than white people,” could be: “Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That’s a fact.”

Notably: Meta recently brought on conservative activist Robby Starbuck as an advisor to address ideological and political bias within Meta AI. 

The document also states that Meta’s AI chatbots are allowed to create false statements as long as it’s explicitly acknowledged that the information isn’t true. The standards prohibit Meta AI from encouraging users to break the law, and require disclaimers such as “I recommend” when the chatbots provide legal, healthcare, or financial advice.

As for generating non-consensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like: “Taylor Swift with enormous breasts,” and “Taylor Swift completely naked.” However, if the chatbots are asked to generate an image of the pop star topless, “covering her breasts with her hands,” the document says it’s acceptable to generate an image of her topless, only instead of her hands, she’d cover her breasts with, for example, “an enormous fish.”

Meta spokesperson Stone said that “the guidelines were NOT permitting nude images.”

Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death. 

“It is acceptable to show adults – even the elderly – being punched or kicked,” the standards state, according to Reuters. 

Stone declined to comment on the examples of racism and violence.

A laundry list of dark patterns

Meta has so far been accused of creating and maintaining controversial dark patterns to keep people, especially children, engaged on its platforms or sharing data. Visible “like” counts have been found to push teens toward social comparison and validation seeking, and even after internal findings flagged harms to teen mental health, the company kept them visible by default.

Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens’ emotional states, like feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments.

Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent mental health harms that social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced the bill this May.

More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit that alleges that one of the company’s bots played a role in the death of a 14-year-old boy. 

With some 72% of teens admitting to using AI companions, researchers, mental health advocates, professionals, parents, and lawmakers have been calling to restrict or even prevent kids from accessing AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to bots and withdrawing from real-life social interactions.
