Does More Powerful AI Mean Slower Fixes?
Is it possible that our most advanced AI coding assistants are actually slowing us down? This question felt absurd as my team was heads-down, polishing our UI for a major release. We were in the final stretch, tackling a long list of small, cosmetic changes—the kind of work that should be quick. Yet, I found my workflow clogged, not by the complexity of the tasks, but by the "helpfulness" of my AI partner.
My Setup: The Final Polish
Our environment was standard: a React codebase, a Git workflow with peer reviews, and an integrated AI coding assistant. My goal was to rapidly work through a backlog of minor UI tickets. Cosmetic UI updates are essentially refactoring, and for refactoring I strictly follow a philosophy of making changes in what Javi López aptly calls "a lot of tiny steps," a pattern known classically as Refactoring In Very Small Steps. This ensures each commit is atomic and easy for my teammates to review. I was relying on the AI's "Agent mode"—its capability to autonomously modify the codebase—expecting it to align with this micro-step approach. The reality was quite different.
When 'Help' Became a Hindrance
The core problem was that the AI agent consistently over-engineered solutions for trivial problems. It treated every request for a small change as an invitation to refactor the entire component. This isn't a failure of intelligence, but a misalignment of goals: my goal was a minimal diff, whereas the agent's goal is often holistic file correctness, aiming to fix all potential issues it identifies in one pass. Crucially, even when I gave it explicit, TDD-style instructions to only perform a single, minimal action, it still defaulted to making broad, sweeping changes.
Example 1: A Simple CSS Tweak
I needed to make a submit button full-width on mobile devices. A straightforward task.
The fix that was actually needed:
```css
@media (max-width: 50rem) {
  .formSubmitMobileWrapper button {
    width: 100%;
  }
}
```
I prompted the AI agent: "Only add a new media query for screens under 50rem to the .formSubmitMobileWrapper button class to set its width to 100%. Do not touch any other code."
Despite the clear instruction, the agent generated a massive diff, rewriting existing desktop styles and restructuring the entire CSS class.
Time Wasted: I spent 15 minutes untangling the AI's suggestion, versus the 2 minutes it would have taken to write the CSS myself.
Quality Issues: The generated code created a high cognitive load for code review. A teammate would have to ask, "Why did we refactor all the button styles just to change one mobile property?"
Structural Problems: This approach created bloated commits, making our Git history noisy and directly violating the "very small steps" principle.
Example 2: A Minor Accessibility Improvement
Next, I picked up a ticket to improve the accessibility of our card components. Again, I gave a precise instruction: "Add a role='region' attribute to the parent div of the Card component."
Instead of a one-line change, the agent tried to rewrite half the component's JSX structure, arguing it was for "better semantic clarity" and completely ignoring my focused instruction.
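For contrast, the change I actually wanted amounted to a single attribute. Here is a minimal sketch of its shape; the real component is React/JSX, and this plain TypeScript function and its names are hypothetical stand-ins, not our actual code:

```typescript
// Hypothetical stand-in for the Card component's wrapper markup; the real
// component is React/JSX, but the shape of the change is identical.
function cardWrapper(inner: string): string {
  // The entire requested change is the role="region" attribute below:
  // one attribute, one line, nothing else touched.
  return `<div class="card" role="region">${inner}</div>`;
}

console.log(cardWrapper("<h2>Profile</h2>"));
// <div class="card" role="region"><h2>Profile</h2></div>
```

That is the whole diff a reviewer should have seen.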
Principles That Actually Work
This friction forced me to re-evaluate how I was using the tool. I realized the key is to match the tool's capability to the task's scope. This led me to two guiding principles.
1. Use AI Chat for Suggestions, Not Implementation
For micro-changes, the AI's "Chat mode" is far more effective. By treating it as a context-aware search engine, I can ask for targeted advice.
Prompt: "What's the best CSS to make this button full-width on mobile?"
Result: It gives me the precise, minimal code snippet I need. I copy, paste, and commit. The change is atomic and review is trivial.
This keeps the developer in control and prevents the AI from making unsolicited "improvements." The benefits are clear: smaller pull requests and faster review cycles. This aligns with research from Faros AI, which notes that while AI can boost individual developer throughput, it often leads to ballooning review queues. I've written more about this in my article, "Can we make AI code assistants smarter?".
2. Reserve AI Agents for Scaffolding and True Refactoring
The autonomous "Agent mode" is incredibly powerful, but its strength lies in larger, well-defined tasks, not surgical strikes.
Good use case: "Create a new React component for a user profile page with an avatar, name, and bio section. Include Storybook stories and a basic test."
Bad use case: "Add a margin-top to the avatar in the user profile component."
Using an agent is best when the expected outcome is a significant amount of new or changed code.
The core principle reduces to a simple two-by-two: for small-scoped tasks, a suggestion-based AI interaction is most effective, while large-scoped tasks are better suited to autonomous AI execution.
Unexpected Discovery: AI Forced Me to Define "Small"
The most surprising insight was that the AI forced me to be more precise about what counts as a "small change." My heuristic is now this: if the task's description is longer than the code I expect to write, Chat mode is the right tool.
A task like "Make the button full-width on mobile" is a perfect example. The description is simple, and the code is just a few lines. The AI agent, however, interprets this as a symptom of a larger problem ("This component is not fully responsive") and tries to solve that instead. This mental checkpoint prevents me from accidentally turning a 5-minute task into a 30-minute ordeal.
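To make that heuristic concrete, here is a rough sketch in TypeScript. The formulation and threshold are my own illustrative choices, not a feature of any tool: compare the length of the instruction you would have to write against the length of the change itself.

```typescript
// Rough sketch of the "define small" heuristic; the comparison is my own
// illustrative formulation, not a rule from any AI assistant.
function shouldUseChatMode(instruction: string, expectedCode: string): boolean {
  // If spelling out the constraints takes more text than the fix itself,
  // an autonomous agent is the wrong tool for the job.
  return instruction.length > expectedCode.length;
}

const instruction =
  "Only add a new media query for screens under 50rem to the " +
  ".formSubmitMobileWrapper button class to set its width to 100%. " +
  "Do not touch any other code.";
const fix = `@media (max-width: 50rem) {
  .formSubmitMobileWrapper button { width: 100%; }
}`;

console.log(shouldUseChatMode(instruction, fix)); // true: the prompt outgrew the fix
```

When the check comes back true, I skip the agent entirely: I either ask Chat mode for the snippet or just write it myself.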
The Autonomy vs. Precision Trade-Off
This leads to a central, counterintuitive truth: the more autonomy you grant an AI coding assistant, the less precision you may get for small, targeted tasks.
This isn't a paradox; it's a trade-off. Autonomous agents are optimized for holistic correctness. They don't just see the three lines of CSS you want to add; they see the entire file and its potential imperfections. Their goal is to bring the whole file into a state of grace, which directly conflicts with the goal of making a minimal, targeted change.
Effective use, therefore, requires the developer to:
Explicitly define the scope of the change before starting.
Choose the right mode for the job (Chat vs. Agent).
Maintain control and view the AI as a suggester, not an infallible executor, for routine work.
A More Thoughtful Partnership
My journey through pre-release UI tweaks taught me a crucial lesson. AI coding tools aren't a simple "on/off" switch for productivity. They are a suite of capabilities, each with an appropriate use case. An autonomous agent is a powerful ally for building new things from the ground up, but for the delicate art of finishing and polishing, a simple chat-based suggestion is often faster, cleaner, and more respectful of my teammates' time. The real skill in this new era of software development is not just in writing clever prompts, but in having the wisdom to choose the right tool for the job.