Anthropic's Claude Code, once a favored AI assistant for complex engineering tasks, now faces accusations of significant performance degradation. An official GitHub issue opened by AMD's AI group director details a steep decline in Claude Code's reasoning quality and reliability since early March, prompting the team to switch providers, according to The Register. This erosion of trust raises critical questions about the stability and future trajectory of advanced AI coding tools.
What Triggered the Performance Drop?
Stella Laurenzo, director of the AI group at chipmaker AMD, initiated the discussion with a detailed complaint on GitHub, supported by a LinkedIn post. Her team concluded that Claude Code "cannot be trusted to perform complex engineering tasks" after months of consistent use in a high-complexity work environment. Laurenzo's assessment, echoed by other senior engineers on her team, points to a clear drop-off in the AI's efficacy.

Their analysis of 6,852 Claude Code sessions, covering 234,760 tool calls and 17,871 thinking blocks, revealed alarming trends. The number of "stop-hook violations" (indicators of the AI's "laziness," such as prematurely halting its thinking process or avoiding responsibility) skyrocketed, rising from zero before March 8th to an average of 10 per day by the end of last month. Claude Code's engagement with code also decreased dramatically: the average number of times it read a piece of code before changing it fell from 6.6 to just 2 by late March, and it began rewriting entire files more frequently instead of making targeted edits.

Laurenzo attributes these changes directly to the early March deployment of Claude Code version 2.1.69, which introduced "thinking content redaction." Enabled by default, this feature strips the AI's internal thought process from API responses, leaving users unable to see Claude Code's reasoning.
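The figures Laurenzo cites come from mining session transcripts. As a rough illustration only, the sketch below shows how per-day stop-hook violations and reads-before-edit averages might be tallied from a simplified JSONL export; the field and tool names ("ts", "event", "read_file", and so on) are hypothetical placeholders, not Claude Code's actual log schema.

```python
import json
from collections import defaultdict
from datetime import date

# Rough sketch: tally per-day stop-hook violations and reads-before-edit
# from a simplified JSONL export. Field and tool names are hypothetical,
# not Claude Code's real session-log format.

def summarize(path: str) -> None:
    violations_per_day = defaultdict(int)   # date -> stop-hook violation count
    reads_since_last_edit = 0
    reads_before_each_edit = []             # reads observed before each edit

    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            day = date.fromisoformat(event["ts"][:10])

            if event["event"] == "stop_hook_violation":
                violations_per_day[day] += 1
            elif event["event"] == "tool_call" and event["tool"] == "read_file":
                reads_since_last_edit += 1
            elif event["event"] == "tool_call" and event["tool"] in ("edit_file", "write_file"):
                reads_before_each_edit.append(reads_since_last_edit)
                reads_since_last_edit = 0

    for day in sorted(violations_per_day):
        print(f"{day}: {violations_per_day[day]} stop-hook violations")

    if reads_before_each_edit:
        avg = sum(reads_before_each_edit) / len(reads_before_each_edit)
        print(f"average reads before an edit: {avg:.1f}")

if __name__ == "__main__":
    summarize("sessions.jsonl")
```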