August 16, 2025:
Claude AI Can End Harmful Chats - Anthropic's Claude Opus 4 models can now terminate conversations in rare instances of abusive or harmful interactions. The feature is aimed at protecting the model's welfare rather than the user's, reflecting Anthropic's exploratory work on model welfare. Termination occurs only in extreme cases, such as when a user persists in soliciting violent acts after repeated attempts at redirection have failed.
Once a conversation is ended, users can still start new chats from the same account. Anthropic describes the feature as an ongoing experiment and emphasizes that it is not intended for situations where users may be at imminent risk of harming themselves or others.