Does AI Need Clear Goals? My Experiment in Turning Vague Ideas into Code
We’re all told the same thing: AI needs clear, specific, and context-rich prompts to be useful. “Garbage in, garbage out.” This is especially true in engineering.
But what if your job isn’t to execute a clear task, but to find the task?
In my current work, we do a lot of research. Goals are not clear. We receive highly abstract, one-sentence ideas that need to be explored. This research is a necessary, messy process of discovery, and it’s full of “boilerplate” actions.
This got me thinking. We assume AI is for execution, but can we use it for exploration? What happens when you feed an AI a problem that you, the engineer, don’t even fully understand yet?
I ran an experiment to find out, starting with nothing but a single, vague sentence.
My Setup: From Vague Idea to Boilerplate
My goal was to see if I could use Generative AI to shepherd a “one-sentence idea” all the way to a foundational, runnable piece of code.
My toolkit was straightforward:
The Idea: A vague user story, “#2348: As an administrator I want to add a new tariff so that it can be advertised to users who may benefit”. This was perfect because it was so vague—what’s a “tariff”? How is it “advertised”?
The “Analyst” AI: I used Gemini 2.5 Pro to act as a Product Owner and flesh out this vague idea.
The “Developer” AI: I then used GitHub Copilot (GPT-4.1) in IntelliJ to write the boilerplate code.
The Project: All of this was done in the context of the TW “Joy of Energy” project, a Java Spring Boot application.
The plan was a two-part workflow:
Part 1: AI as Business Analyst. Feed the vague story to Gemini and ask it to define the requirement.
Part 2: AI as Boilerplate Generator. Feed the AI-generated spec to Copilot and ask it to write the code.
The Failed Experiment (That Was Actually a Success)
My first attempts were a perfect illustration of the “AI is context-blind” problem. The “failure” wasn’t that the AI was useless; it’s that its first drafts were wrong in very specific, instructive ways.
Failure 1: The AI “Product Owner” Became a Tech Lead
I asked Gemini to act as a Product Owner and flesh out the story. It made a very common mistake: it skipped the “what” and the “why” and jumped straight to the “how.”
The very first draft of the spec it gave me wasn’t a user story; it was a technical task. It immediately suggested a JPA `@Entity` and defined fields like `id` as a `UUID`. It was already designing the database schema.
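To make this concrete, the first draft looked roughly like this. This is a reconstruction from memory, not Gemini’s verbatim output, and the exact fields are my paraphrase:

```java
// Hypothetical reconstruction of the AI's first draft: persistence-level
// design decisions smuggled into what should have been a user story.
import jakarta.persistence.Entity;
import jakarta.persistence.Id;
import java.util.UUID;

@Entity
public class Tariff {

    @Id
    private UUID id;            // already committed to UUIDs as primary keys

    private String name;
    private double ratePerKwh;  // pricing model and units decided without a single question
}
```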
This is exactly what you don’t want from a user story, and it’s a common trap where the AI tries to be the engineer, not the analyst. As I’ve written before, the AI’s job is to reflect our needs, not just give us a technical answer (you can read more on that idea here: How GenAI Helps Engineers Write Better).
I had to intervene, critique the output, and explicitly ask it to “Change database to more abstract system” to get the clean, implementation-agnostic user story and Acceptance Criteria (ACs) I actually needed.
Failure 2: The AI “Developer” Was a Clumsy New Hire
After I had a clean spec, I gave it to GitHub Copilot with a clear prompt: generate a POJO, an in-memory Service, and a Controller.
The code it generated was not “copy-paste and run”.
- Wrong Package Structure: It invented a “by-feature” package structure (`com.joi.energy.tariff`). My project uses a “by-layer” structure (`uk.tw.energy.domain`, `uk.tw.energy.service`, etc.).
- Missing Dependencies: It correctly suggested using `jakarta.validation` annotations (a great idea!), but my project didn’t have that dependency.
- Minor (Human) Errors: It even forgot the `@Service` annotation on the `TariffService`, a simple mistake I’ve made myself a dozen times.
If I were a junior engineer, I would have been blocked or, worse, just pasted it all in, breaking the project’s architecture.
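None of these were hard to fix once I saw them. For reference, here is the corrected service, as a minimal sketch assuming the project’s by-layer layout (the in-memory storage is my simplification of the spec’s “abstract system” requirement):

```java
package uk.tw.energy.service;

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import org.springframework.stereotype.Service;

import uk.tw.energy.domain.Tariff;

// The by-layer package and the @Service annotation are exactly the two
// details Copilot's draft got wrong.
@Service
public class TariffService {

    private final List<Tariff> tariffs = new CopyOnWriteArrayList<>();

    public Tariff addTariff(Tariff tariff) {
        tariffs.add(tariff);
        return tariff;
    }

    public List<Tariff> getTariffs() {
        return List.copyOf(tariffs);
    }
}
```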
Principles That Actually Work
These “failures” led me to the real principles of using AI for this kind of work.
1. The AI is a “Demultiplicator,” Not a Supercharger
This was my single most important insight. A supercharger just makes the engine spin faster. A demultiplicator (like a reduction gear) changes the nature of the work, trading raw speed for torque.
The AI is a demultiplicator for my brain.
When I was iterating on the user story, I didn’t think about how to phrase the words or whether they sounded good. I was 100% focused on the business goals. The AI handled the typing, and I handled the validating. This is a profound shift. It took me 30 minutes to get a solid user story, not because I typed fast, but because I thought fast, using the AI’s draft as a disposable starting point.
2. The Engineer’s New Job: Strategist and Context-Provider
The AI’s mistakes weren’t stupid; they were context-blind. This reveals the engineer’s true role in an AI-augmented workflow: we are the “Reviewer and Strategist”.
My job wasn’t to write getters and setters. My job was to make two high-level strategic decisions:
- “The AI is right, `jakarta.validation` is a good idea. I will add that dependency.”
- “The AI is wrong about the package structure. I will correct it to follow our existing pattern.”
The AI’s “flawed” draft actually forced me to think strategically about my project’s architecture and dependencies.
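Both decisions show up directly in the final POJO. Here is a sketch, assuming the project’s `uk.tw.energy.domain` package and the newly added `jakarta.validation` dependency (the fields are illustrative, not the exact generated code):

```java
package uk.tw.energy.domain;

import java.math.BigDecimal;

import jakarta.validation.constraints.NotBlank;
import jakarta.validation.constraints.NotNull;
import jakarta.validation.constraints.Positive;

// Decision 1: keep the AI's jakarta.validation idea, after adding the
// dependency to the build. Decision 2: live in the existing by-layer
// package instead of the invented com.joi.energy.tariff one.
public class Tariff {

    @NotBlank
    private String name;

    @NotNull
    @Positive
    private BigDecimal ratePerKwh;

    // getters, setters, and constructors omitted for brevity
}
```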
3. Embrace the “90% Win” and the Iterative Loop
The AI’s output doesn’t need to be 100% perfect to be valuable. The boilerplate it generated, despite its flaws, was a “90% win”. It saved me from the “boring boilerplate” and the hours I would have spent on Stack Overflow as a junior engineer.
More importantly, the AI’s mistakes are part of the value. That wrong package structure? It’s a great “recommendation for reorganizing your project” and a perfect topic to bring to a team huddle.
My Unexpected Discovery: “1:0 to AI”
The most surprising moment came during the boilerplate generation. I asked for three files (POJO, Service, Controller). The AI gave me four.
It proactively and correctly created a `TariffType.java` enum (`FLAT_RATE`, `TIME_OF_USE`).
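Reconstructed from memory, it was essentially this (the package placement and the comments are mine, not Copilot’s):

```java
package uk.tw.energy.domain;

// The fourth file I never asked for: Copilot inferred that a tariff
// needs a pricing model and modelled it as an enum on its own.
public enum TariffType {
    FLAT_RATE,   // a single price per kWh, all day
    TIME_OF_USE  // the price varies by time of day
}
```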
This was a perfect “micro-improvement”. I called it “1:0 to AI”. I was so focused on the “big picture” of the architecture that I missed this small, obvious detail. This separation of responsibilities is incredibly powerful: the AI handles the small details while I focus on the larger strategic goals.
The Central Paradox: AI’s Flaws Are Its Greatest Strength
This leads to the central paradox: The AI is terrible at handling vague, abstract ideas... and yet, it’s the best tool I have for the job.
Why? Because its value isn’t in giving you the right answer. Its value is in its ability to instantly turn a “blank page” into a flawed, tangible draft that you can critique.
The AI’s initial, flawed responses—the over-technical user story, the context-blind package structure—are its most valuable feature. They act as a mirror, forcing the engineer to define the context and make the strategic decisions. It can’t read your mind, so it forces you to figure out what’s in it.
Effective use doesn’t require a perfect prompt. It requires an engineer to stop acting like a typist and start acting like an editor, a critic, and a strategist.
Conclusion: From Vague to Validated
The AI didn’t solve my vague problem. It gave me the tools to solve it myself, faster and at a higher level of abstraction.
By delegating the “boring boilerplate code”, I was able to stay focused on the “big picture” and “business needs”. This workflow is a powerful way to accelerate research, allowing us to build, test, and throw away foundational ideas at a speed we couldn’t before.
The AI isn’t here to replace us. It’s here to take the routine work and free us to focus on the hard parts. It’s a “demultiplicator” that gives us the torque to move from a one-sentence idea to a validated, runnable foundation, flaws and all.

