
Anthropic Identifies Three Product-Layer Bugs Behind Claude's April Degradation, All Reverted April 23

Anthropic has published a technical post-mortem disclosing three product-layer issues that degraded Claude's performance over the past several weeks, and confirmed all changes were reverted as of April 23 with subscriber usage limits reset as compensation.

The fixes are already shipped. That’s the first thing to establish.

As of April 23, Anthropic has reversed all three product-layer changes that degraded Claude’s performance and reset subscriber usage limits as compensation. The resolution came alongside a technical post-mortem document, published by Anthropic and reported by VentureBeat, that names the specific causes rather than attributing the degradation to vague “service issues.”

Context for readers who followed the Claude Opus 4.7 GA launch: after Opus 4.7 reached general availability, users began reporting that the model felt less capable: shorter responses, less coherent reasoning, repetitive outputs. Those reports were accurate. The degradation was caused by changes made at the product layer, not by anything that touched the underlying model weights.

According to Anthropic’s post-mortem, three separate issues accumulated:

- Caching logic error (March 26): A bug caused the model to clear its reasoning context, its "thinking history," at every turn rather than preserve it across a conversation. The practical effect was that the model couldn't build on prior reasoning steps, producing outputs that felt repetitive or shallow (a minimal sketch of this failure mode follows the list).
- System prompt verbosity limits (April 16): Restrictions on system prompt length were added. Anthropic reported a 3% drop in internal coding quality evaluations attributed to this change. That figure is a vendor-reported internal metric and should be read as such: it reflects Anthropic's own evaluation, not independently confirmed performance data.
- Reasoning effort regression: A third issue involved degraded allocation of reasoning effort. The post-mortem identifies this separately from the caching bug, suggesting it was a distinct configuration change.
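
To make the caching issue concrete, here is a minimal, hypothetical sketch in Python. It is not Anthropic's serving code, and every name in it (ConversationCache, thinking_history, the run_turn functions) is invented for illustration. It only contrasts the intended behavior, reasoning context accumulating across turns, with the described bug, where the context is wiped at every turn.

```python
# Hypothetical illustration of the caching failure mode described in the
# post-mortem. Names are invented; this is not Anthropic's serving code.

from dataclasses import dataclass, field


@dataclass
class ConversationCache:
    """Accumulates the reasoning ("thinking history") from earlier turns."""
    thinking_history: list = field(default_factory=list)

    def context_for_next_turn(self) -> str:
        # Later turns are supposed to build on this accumulated reasoning.
        return "\n".join(self.thinking_history)

    def record_turn(self, new_thinking: str) -> None:
        self.thinking_history.append(new_thinking)


def run_turn_correct(cache: ConversationCache, new_thinking: str) -> str:
    """Intended behavior: preserve reasoning across the conversation."""
    cache.record_turn(new_thinking)
    return cache.context_for_next_turn()


def run_turn_buggy(cache: ConversationCache, new_thinking: str) -> str:
    """The described bug: reasoning context cleared at every turn."""
    cache.thinking_history.clear()  # context wiped before the turn runs
    cache.record_turn(new_thinking)
    return cache.context_for_next_turn()


if __name__ == "__main__":
    correct, buggy = ConversationCache(), ConversationCache()
    for turn in ["step 1: outline the plan", "step 2: refine the plan"]:
        ctx_ok = run_turn_correct(correct, turn)
        ctx_bad = run_turn_buggy(buggy, turn)
    # With the bug, turn 2 only ever sees its own reasoning, never turn 1's,
    # so outputs read as repetitive or shallow.
    print("correct context:\n", ctx_ok)
    print("buggy context:\n", ctx_bad)
```
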

All three are now reversed.

The pattern here is more instructive than the specific bugs. Model quality can degrade silently at the product layer (caching configurations, prompt length limits, resource allocation parameters) without any change to model weights. Users experience the degradation as the model "getting worse," which is accurate from a performance standpoint, but the cause is invisible without a post-mortem like this one.

For enterprise teams running Claude in production: this is an argument for behavioral evaluation on a regular cadence, not just at deployment. If your integration relies on extended reasoning chains or multi-turn coherence, a caching change can break it without any API version change or model update notice.
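
One way to operationalize that advice is a small behavioral probe suite run on a schedule. The sketch below is hypothetical: call_model, the probe, and the pass-rate threshold are stand-ins, not an Anthropic tool or a recommended benchmark. In practice call_model would wrap your production API client, and the probes would target the behaviors your integration actually depends on, such as multi-turn coherence.

```python
# Hypothetical scheduled behavioral regression check (e.g., a daily CI job).
# call_model is a stand-in for a real model client; the probe and threshold
# are illustrative only.

import datetime
import json


def call_model(messages):
    """Stand-in for a real API call (replace with your production client).
    Returns a canned reply here so the sketch runs end to end."""
    return "You gave me order number 48213."


def multi_turn_recall_probe() -> bool:
    """Check that a fact from turn 1 is still used correctly on a later turn."""
    messages = [
        {"role": "user", "content": "My order number is 48213. Remember it."},
        {"role": "assistant", "content": "Noted: order 48213."},
        {"role": "user", "content": "What order number did I give you?"},
    ]
    return "48213" in call_model(messages)


def run_suite() -> dict:
    probes = {"multi_turn_recall": multi_turn_recall_probe}
    results = {name: probe() for name, probe in probes.items()}
    return {
        "date": datetime.date.today().isoformat(),
        "pass_rate": sum(results.values()) / len(results),
        "results": results,
    }


if __name__ == "__main__":
    report = run_suite()
    print(json.dumps(report, indent=2))
    # Compare pass_rate against a stored baseline and alert on drops,
    # even when no model or API version change has been announced.
```

The point of running this on a cadence, rather than only at deployment, is exactly the failure mode described above: a product-layer change can shift behavior with no accompanying version bump for your code to detect.
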

The quality of the disclosure matters here too. Anthropic named the bugs, dated them, and published the mechanics. That’s a higher standard of transparency than most AI providers maintain after performance incidents. It doesn’t eliminate the risk of future degradation, but it does create accountability for tracking and reversing problems when they occur.

What to watch: whether independent evaluators (Epoch AI and others) publish pre/post benchmark comparisons that corroborate the 3% coding quality figure and confirm the reversal fully restored prior performance levels.
