San Francisco: AI company Anthropic has acknowledged and resolved multiple issues that led users to believe its coding tool, Claude Code, had become less effective in recent weeks. Following widespread complaints, the company identified three key problems and confirmed that fixes have now been rolled out.

Claude Code, powered by Anthropic’s Claude models, is widely used for generating code from simple prompts. However, users across platforms such as X and Reddit had raised concerns that the tool’s performance had noticeably declined.

Users report drop in performance

Over the past few weeks, several users took to social media to express frustration with Claude Code. Many described the tool as inconsistent, less responsive, and even “unusable” in some cases.

Some users compared its behaviour to that of an unreliable employee, claiming it ignored instructions or deviated from assigned tasks. These complaints quickly gained traction, raising questions about the reliability of AI-powered coding assistants.

This is not the first time Anthropic has faced such scrutiny; the company was earlier criticised over access limitations on its Pro plan.

Company denies intentional downgrade

Responding to the backlash, Anthropic clarified that it had not intentionally reduced the performance of its models. The company stated, “We never intentionally degrade our models,” emphasising that the issues were unintended side effects of recent updates.

Boris Cherny, head of Claude Code, described the investigation into the problem as one of the most complex the team has undertaken.

Anthropic also confirmed that the issues primarily affected Claude Code, the Claude Agent SDK, and Claude Cowork. Importantly, its API and inference systems remained unaffected.

Three key issues identified

After an internal probe, Anthropic pinpointed three specific changes that contributed to the perceived decline in performance.

Change in reasoning settings

The first issue dates back to March 4, when the company adjusted the default reasoning setting from “high” to “medium”. This change was intended to improve response speed.

However, users preferred more detailed and accurate outputs, even if it meant slightly slower responses. The shift resulted in answers that appeared less intelligent or thorough.
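The mechanism at play here is a familiar one: clients that never set a parameter explicitly silently inherit whatever the server-side default becomes. The following is a minimal, hypothetical sketch of that pattern; the function and parameter names are illustrative and are not Anthropic's actual API.

```python
# Hypothetical sketch: how changing a server-side default silently affects
# callers that never pinned the setting. Names are illustrative only.

DEFAULT_REASONING = "medium"  # previously "high"

def build_request(prompt, reasoning=None):
    """Callers that omit `reasoning` inherit the current default."""
    return {"prompt": prompt, "reasoning": reasoning or DEFAULT_REASONING}

# A caller that pinned the setting is unaffected by the default change:
pinned = build_request("refactor this function", reasoning="high")

# A caller relying on the default quietly drops to "medium":
unpinned = build_request("refactor this function")

print(pinned["reasoning"])    # "high"
print(unpinned["reasoning"])  # "medium"
```

This is why such changes tend to surface as vague reports of degraded quality rather than error messages: nothing fails, the output is simply produced under a different setting.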

Memory bug affecting sessions

The second issue emerged on March 26 and involved how Claude handled inactive sessions. The system was designed to clear older context after an hour of inactivity.

However, a bug caused this memory reset to occur repeatedly during ongoing sessions. This made the AI appear forgetful and led to repetitive or inconsistent responses.
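A bug of this shape typically comes from measuring the wrong timestamp, for example counting time since the session started rather than time since the user was last active. The sketch below is a hypothetical illustration of that failure mode, not Anthropic's actual implementation:

```python
# Hypothetical sketch of the kind of bug described: context is meant to be
# cleared after an hour of *inactivity*, but the buggy check measures time
# since session *start*, so long active sessions get reset repeatedly.

INACTIVITY_LIMIT = 3600  # one hour, in seconds

class Session:
    def __init__(self, now):
        self.started_at = now
        self.last_active = now
        self.context = []

    def should_reset_buggy(self, now):
        # Bug: compares against session start, ignoring recent activity.
        return now - self.started_at > INACTIVITY_LIMIT

    def should_reset_fixed(self, now):
        # Fix: only reset when the user has genuinely been idle.
        return now - self.last_active > INACTIVITY_LIMIT

    def handle_message(self, msg, now):
        self.last_active = now
        self.context.append(msg)

# A session that stays continuously active for 90 minutes:
s = Session(now=0)
for t in range(0, 5400, 60):        # one message per minute
    s.handle_message("msg", now=t)

print(s.should_reset_buggy(5400))   # True  -> context wiped mid-session
print(s.should_reset_fixed(5400))   # False -> context preserved
```

Under the buggy check, an actively used session looks "stale" the moment it passes the one-hour mark, which matches the reported symptom of the assistant repeatedly forgetting context mid-conversation.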

Reduced verbosity impacting quality

The third issue arose on April 16, when Anthropic reduced the verbosity of Claude’s responses. While the goal was to make outputs more concise, the change negatively impacted coding quality.

Combined with other prompt adjustments, this made the AI’s responses less useful for developers relying on detailed guidance.

Fixes implemented and limits reset

Anthropic confirmed that all three issues were addressed in a patch released on April 20. In addition to fixing the bugs, the company has reset user limits, allowing broader access to Claude Code features.

The combined effect of the earlier changes had created what Anthropic described as “broad and inconsistent degradation”, as different users experienced different issues at different times.

Restoring user confidence

With the fixes now in place, Anthropic says users should no longer face the same problems. The company hopes the update will restore confidence in Claude Code as a reliable AI coding assistant.

The episode highlights the challenges of maintaining performance consistency in rapidly evolving AI systems, where even minor adjustments can significantly affect user experience.

Conclusion

Anthropic’s swift response to user feedback and its transparency in identifying the root causes reflect the growing importance of accountability in the AI space. As tools like Claude Code become integral to development workflows, ensuring stability and reliability remains crucial.

While the recent issues caused frustration, the fixes may help reinforce trust among users and underline the need for continuous monitoring in AI-driven platforms.