Does Delegating to AI Mean We Can Finally Be Lazy Managers?
1. The Hook
We often sell AI adoption to our bosses (and ourselves) with the promise of speed. We imagine a future where we toss a vague request over the wall—“fix the build,” “export the data,” “optimize the query”—and the AI handles the rest while we grab a coffee.
But my recent experiments with Jules, Google’s new AI agent, suggest the opposite is true. The more “autonomy” I gave the AI, the more mediocre the code became. This leads to an uncomfortable question: Does effective AI delegation actually require more management overhead, not less?
2. Context & Tools
I’ve been experimenting with Jules, testing its ability to act as a “Junior Developer” in my Spring Boot repository, joyofenergy-java.
In my previous explorations, I looked at Pair-Authoring with an AI and the Context Window Paradox. This time, I wanted to test the difference between Abdication (lazy delegation) and Navigation (structured delegation) when asking an agent to build a feature from scratch.
3. The Failed Experiment: The “Friday Afternoon” Prompt
I set up a scenario we’ve all faced: It’s Friday afternoon, I want a new feature shipped, and I don’t want to think about the implementation details.
I gave Jules the “Lazy Manager” prompt:
“Jules, create an endpoint to export meter readings as a CSV file. Use the existing MeterReadingService.”
I intentionally withheld constraints. I didn’t mention memory usage, libraries, or formatting.
The Result?
Technically, it worked. Jules created a CsvService, updated the controller, and passed the tests. But structurally, it was a time-bomb.
- **Memory Unsafety:** It loaded the entire dataset into a `List` in memory before writing the response. For a smart meter with 100,000 readings, this is an `OutOfMemoryError` waiting to happen.
- **Library Bloat:** It generated a new service class (`CsvService`) where a simple stream in the controller would have sufficed.
- **Junior Mistakes:** It used standard Java formatting without considering how a user would actually open the file in Excel.
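To make the anti-pattern concrete, here is a minimal sketch of the shape of code Jules produced. The names (`CsvService`, `ElectricityReading`) and the exact logic are illustrative reconstructions, not the literal generated code:

```java
import java.time.Instant;
import java.util.List;

// Illustrative reading type; the real repository has its own domain class.
record ElectricityReading(Instant time, double reading) {}

class CsvService {
    // The whole dataset is materialized as a List and then as one giant
    // String before a single byte reaches the client. Fine for 10 rows;
    // an OutOfMemoryError risk for 100,000.
    String export(List<ElectricityReading> readings) {
        StringBuilder csv = new StringBuilder("time,reading\n");
        for (ElectricityReading r : readings) {
            csv.append(r.time()).append(',').append(r.reading()).append('\n');
        }
        return csv.toString();
    }
}
```

The failure mode is structural, not syntactic: nothing here is "wrong" until the input grows.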
The “lazy” prompt produced “lazy” code: functional, but dangerous at scale. It validated my fear that More Powerful AI Doesn’t Always Mean Faster Fixes.
4. Principles That Actually Work: The “Brief”
I reset the experiment. This time, I treated Jules like a Senior Engineer would treat a Junior: I wrote a spec.
I uploaded a file named feature-csv-export.md containing strict constraints:
- **No New Dependencies:** Do not add `apache-commons` or `opencsv`.
- **Memory Safety:** Do not load lists into memory; stream directly to the `HttpServletResponse`.
- **Strict Formatting:** Use `yyyy-MM-dd HH:mm`.
I then prompted:
“Jules, I’ve uploaded a spec file... Please refactor the implementation to strictly follow these constraints.”
The Outcome:
The difference was night and day.
- **Architectural Safety:** Jules implemented a streaming solution using `PrintWriter`, avoiding the memory bottleneck entirely.
- **Dependency Management:** It correctly added `jakarta.servlet-api` as a `compileOnly` dependency, respecting the "no runtime bloat" rule.
- **Test Integrity:** It initially failed to test the controller response correctly, but because I had defined the "correct" output in the spec, I could guide it to fix the assertion logic.
5. Unexpected Discovery: The “Spec” as a Guardrail
The most surprising insight was that Jules didn’t just follow the instructions—it used the spec file as a defense mechanism against bad code.
When I ran the “Lazy” experiment, Jules defaulted to the path of least resistance (loading data into memory). When I provided the “Brief,” Jules shifted behavior entirely. It didn’t just write code; it navigated the constraints.
This confirms a theory I touched on in Can We Make AI Code Assistants Smarter by Asking Them to Write Their Own Rules? The AI performs best not when it has “creative freedom,” but when it is boxed in by rigid technical constraints. The “Senior Engineer” input wasn’t the code I wrote, but the boundaries I set.
6. The Central Paradox
This brings us to the Delegation Paradox:
To get an AI agent to work autonomously, you must micromanage the requirements.
If you want to be “lazy” during the implementation phase (execution), you must be hyper-active during the definition phase (specification). You cannot abdicate both.
Abdication (Vague prompt) -> Requires heavy code review and refactoring later.
Navigation (Detailed spec) -> Requires heavy upfront thought, but produces near-production-ready code.
We aren’t thinking less with AI; we are shifting when we think.
7. Forward-Looking Conclusion
Tools like Jules are shifting the developer’s role from “writer of code” to “architect of constraints.”
If you treat your AI agent like a magic wand that reads your mind, you will build technical debt at record speeds. But if you treat it like a talented but literal-minded junior developer who needs a solid brief, it becomes a powerful force multiplier.
The future of engineering isn’t about writing the perfect function; it’s about writing the perfect spec.

