Anthropic is rolling out the ability to end conversations in some of its artificial intelligence (AI) models. Announced last week, the new feature is designed as a protective measure not for the end user, but for the AI model itself. The San Francisco-based AI firm said the capability was developed as part of its work on "potential AI welfare," since conversations on certain topics can distress the Claude models.
Source: gadgets360-tech