Anthropic’s Claude Code 2.1.20 Hides File Actions, Angering Developers

by Lisa Park - Tech Editor

Anthropic, the AI safety and research company behind the Claude chatbot, has rolled back a user interface change to its Claude Code tool after a wave of developer complaints. The update, version 2.1.20, initially hid details about which files the AI was accessing while coding, displaying only generalized summaries like “Read 3 files” instead of specific file paths. Developers quickly protested, citing concerns about security, debugging, and cost control.

The change collapsed the progress output, requiring users to press a keyboard shortcut (Ctrl+O) to reveal the file names and line counts previously displayed by default. One widely shared complaint described the update as “dumbing down” the tool. While Anthropic framed the alteration as a simplification of the user interface, intended to reduce “noise” and let developers focus on code diffs and outputs, the response from the developer community was overwhelmingly negative.

Developers argued that knowing precisely which files Claude Code is reading, writing, or modifying is crucial for several reasons. Security is a primary concern: seeing file access allows developers to quickly spot potential vulnerabilities or unauthorized access. Debugging also becomes significantly harder without this information, since it obscures the trace of the AI’s actions and the reasoning behind them. Finally, developers pointed to the financial implications: monitoring file access lets them interrupt Claude when it starts working with irrelevant or incorrect files, avoiding unnecessary token consumption. That cost can add up quickly, with Claude Code subscriptions reaching up to $200 per month.

“When I’m working on a complex codebase, knowing what context Claude is pulling helps me catch mistakes early and steer the conversation,” one developer wrote, as reported by multiple sources. Another developer echoed this sentiment, stating that the change was an “idiotic removal of valuable information.”

Boris Cherny, creator and head of Claude Code at Anthropic, initially defended the update, stating it wasn’t intended as a “vibe coding feature” but rather a way to streamline the UI. He suggested users try the new version for a few days and pointed out that Anthropic’s own developers had found the reduced noise beneficial. Cherny also suggested enabling verbose mode as an alternative for those who desired more detail.

However, the verbose mode solution proved unsatisfactory. Developers found it generated an excessive amount of output, including internal reasoning logs, hook outputs, and complete transcripts of subagents, making it difficult to isolate the specific file access information they needed. One developer described verbose mode as “not a viable alternative,” citing the overwhelming amount of extraneous data.

The situation took an ironic turn as Anthropic began addressing the complaints by gradually stripping elements out of verbose mode, effectively turning it into the simple file-path toggle developers had originally requested. Delivering this as a series of incremental fixes, rather than a single configuration option, created new problems for users who were already relying on verbose mode for its intended purpose.

The backlash prompted some developers to revert to version 2.1.19 of Claude Code, demonstrating the strength of their dissatisfaction with the changes. The incident highlights a growing tension between AI developers seeking to simplify user interfaces and the needs of professional users who require granular control and transparency. The controversy also underscores the importance of transparency in AI-powered coding tools, particularly as they become increasingly integrated into complex software development workflows.

The rapid response from Anthropic, and the eventual rollback of the change, suggests the company is listening to its user base. However, the episode raises questions about the development process and the prioritization of user feedback when implementing significant UI alterations. The incident serves as a cautionary tale for other AI tool developers: simplifying the interface shouldn’t come at the expense of essential functionality and transparency for power users.

The timing of this issue is particularly noteworthy, as it coincides with a broader industry discussion about the need for built-in AI transparency. Former GitHub CEO Thomas Dohmke recently founded Entire with $60 million in funding, specifically to address gaps in AI code transparency, suggesting a growing recognition of the importance of understanding how AI systems operate.
