Anthropic Admits Product Changes Caused Claude Degradation
For weeks, developers and power users had reported that Claude was suffering from "AI shrinkflation": reduced reasoning ability, more frequent hallucinations, and wasteful token usage. Critics also noted a shift from a research-first to an edit-first approach, eroding trust in the model for complex engineering tasks.
Anthropic initially denied that the model had been nerfed, but it has now published a technical post-mortem identifying three product-layer changes responsible for the quality issues. The company addressed the concerns directly, signaling a move toward greater transparency after mounting evidence from users and third-party benchmarks documented a significant performance regression.
