Anthropic says some Claude models can now end ‘harmful or abusive’ conversations 

By admin | August 16, 2025 | 2 min read

Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself.

To be clear, the company isn’t claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”

However, its announcement points to a recent program created to study what it calls “model welfare” and says Anthropic is essentially taking a just-in-case approach, “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.”

This latest change is currently limited to Claude Opus 4 and 4.1. And again, it’s only supposed to happen in “extreme edge cases,” such as “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.”

While those types of requests could potentially create legal or publicity problems for Anthropic itself (witness recent reporting around how ChatGPT can potentially reinforce or contribute to its users’ delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a “strong preference against” responding to these requests and a “pattern of apparent distress” when it did so.

As for these new conversation-ending capabilities, the company says, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.”

Anthropic also says Claude has been “directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.”

When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the troublesome conversation by editing their responses.

“We’re treating this feature as an ongoing experiment and will continue refining our approach,” the company says.
