The Sim Racing Setup
I’ve spent enough time in this industry to know that the promise of “plug-and-play” is usually a lie told to people who don’t have to maintain the results. We’ve grown accustomed to our IDEs functioning almost perfectly the moment we install them, which has created a bit of a lazy habit in our collective psyche. We expect our tools to meet us where we are without any effort on our part. But when I look at the current state of Generative AI, I’m reminded much more of high-performance sim racing or building a custom PC. You can plug a wheel into a desk and start driving, but you won’t actually feel the road, and you certainly won’t win any races. To get professional results, you have to embrace the preparation. The setup isn’t an annoying preamble; it is the work itself.
Hierarchies of Instruction
In my recent experiments, I’ve moved away from treating ChatGPT as a blank slate. Instead, I’ve been refining a two-tier configuration that pairs Project Instructions, specific directives tailored to a particular codebase or business domain, with my global settings. I found that by splitting instructions between a global level—who I am and how I want to be spoken to—and a project level, I could stop the AI from hallucinating a generic solution. This isn’t about giving the AI a long list of rules to follow blindly. It’s about creating a runtime environment that respects the reality of my actual repository.
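The two-tier idea can be sketched in a few lines. Everything here is illustrative: the project name, the instruction text, and the helper function are assumptions I’m making up to show the shape of the split, not a real API.

```python
# A minimal sketch of the two-tier configuration: global instructions
# describe who I am and how I want to be spoken to; project instructions
# pin the AI to one specific codebase. All names are hypothetical.

GLOBAL_INSTRUCTIONS = """\
You are assisting a senior backend developer.
Be concise; prefer concrete code over abstract advice.
"""

PROJECT_INSTRUCTIONS = {
    "billing-service": """\
Stack: Kotlin, Spring Boot, PostgreSQL.
Follow the existing repository / use case / controller layering.
Never invent endpoints that are not in the OpenAPI spec.
""",
}

def build_system_prompt(project: str) -> str:
    """Combine the global tier with one project tier for a session."""
    project_part = PROJECT_INSTRUCTIONS.get(project, "")
    return GLOBAL_INSTRUCTIONS + "\n" + project_part
```

The point of the split is that the global tier almost never changes, while the project tier is swapped per repository, so the AI never sees a generic blank slate.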
Slicing Against the Grain
There is a fundamental tension in how we break down work for a machine versus how we break it down for a human. In the agile world, we are taught the value of a Vertical Slice, which is a functional piece of work that touches every layer of the system to deliver a complete feature. When I am working with AI, however, I’ve found that this approach often leads to a mess. I’ve started practicing a methodology where I break a complex story into isolated, technical layers—repository, use case, then controller—as separate steps. I didn’t set out to slice the “layers of a pie” instead of the “slices of a cake” because I thought it was a better way to design software; I did it because I found it simply works better for the AI’s current reasoning capabilities. It’s an empirical adjustment. By forcing the AI to focus on one technical layer at a time, I prevent the logic from becoming a tangled knot of half-finished abstractions.
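The slicing itself is mechanical enough to sketch. This is a hypothetical representation, assuming an invented story title and the layer order named above, just to show what “one isolated step per technical layer” means in practice.

```python
# Illustrative sketch: one story broken into horizontal technical
# slices (repository, then use case, then controller) instead of a
# single vertical feature. The story title is made up.

LAYERS = ["repository", "use case", "controller"]

def slice_story(story: str) -> list[dict]:
    """Break one story into one isolated step per technical layer."""
    return [
        {"story": story, "layer": layer, "step": i + 1}
        for i, layer in enumerate(LAYERS)
    ]

steps = slice_story("Add invoice export")
```

Each step then becomes its own prompt, so the AI never has to juggle abstractions from more than one layer at once.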
The Logic of Two Flows
Within these project instructions, I’ve found success by defining two distinct paths of interaction. I call these Flow-Based Prompts, a system where the AI knows whether we are in an analysis phase or an execution phase.
Flow 1: Analysis & Slicing
- Goal: Digest the Jira story and propose the technical slices.
- Output: A structured implementation plan.
Flow 2: Prompt Generation
- Goal: Create a specific instruction for GitHub Copilot.
- Output: An isolated prompt for a single technical layer.

In the first flow, the AI acts as a sounding board, helping me decompose a story and identify the technical boundaries. In the second flow, it transitions into a generator, producing the exact context needed for GitHub Copilot to write the code. This prevents the “handoff” problem where context gets lost between the chat window and the code editor. It ensures that when I move to my IDE, the instructions are already tailored to the specific slice of the system I am currently building.
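The two flows can be made explicit as modes in code. This is a sketch under my own assumptions: the flow names, payload keys, and prompt wording are invented for illustration, not the actual instructions I run.

```python
# Hedged sketch of flow-based prompting: "analysis" digests a story
# into proposed slices; "prompt" turns one slice into an instruction
# for the code editor. Prompt wording here is purely illustrative.

def run_flow(flow: str, payload: dict) -> str:
    if flow == "analysis":
        story = payload["story"]
        return (
            f"Analyze the story '{story}'. Propose isolated technical "
            "slices (repository, use case, controller) as a structured plan."
        )
    if flow == "prompt":
        layer = payload["layer"]
        goal = payload["goal"]
        return (
            f"Write only the {layer} layer for: {goal}. "
            "Do not touch any other layer."
        )
    raise ValueError(f"unknown flow: {flow}")
```

Making the mode explicit is what keeps the handoff clean: the analysis output feeds the prompt flow, and nothing depends on the chat remembering which phase we were in.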
The Evolutionary Tree
Of course, I’ve been skeptical of “perfectly automated” prompts that try to handle every edge case from the start. I’ve discarded that idea for now because, at this stage of my understanding, those prompts usually just add unnecessary weight and noise. However, I don’t think we are stuck here. I suspect that as we get better at this, our instruction sets will evolve into something more like a tree. The system won’t just be a static list of rules; it will be an adaptive structure that detects the current context of the work and branches out to provide exactly the right level of detail.
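As a thought experiment, the tree idea might look something like this. This is entirely speculative: the node structure, the context keywords, and the resolution logic are assumptions sketching what “branching on detected context” could mean, not an existing tool.

```python
# Speculative sketch of an adaptive instruction tree: each node carries
# one instruction and branches on detected context keywords, so only
# relevant detail is pulled in. Hypothetical structure throughout.

from dataclasses import dataclass, field

@dataclass
class InstructionNode:
    text: str
    children: dict = field(default_factory=dict)  # keyword -> subtree

    def resolve(self, context: set[str]) -> list[str]:
        """Walk the tree, keeping only branches that match the context."""
        collected = [self.text]
        for keyword, child in self.children.items():
            if keyword in context:
                collected.extend(child.resolve(context))
        return collected

root = InstructionNode(
    "Be concise.",
    children={
        "kotlin": InstructionNode("Follow the project's Kotlin style guide."),
        "sql": InstructionNode("Use parameterized queries only."),
    },
)
```

A static rule list always pays the full token cost; a tree like this would pay only for the branches the current work actually touches.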
We are moving toward a future where the tool detects the type of instruction needed rather than requiring us to shout the same commands every morning.
For now, the manual setup is where the value lives. It’s the difference between a tool that guesses and a tool that knows.
Back to Reality
In the end, I’m keeping the slicing methodology and the dual-flow instruction setup in my toolkit. I’ve set aside the hunt for a “magic” prompt that solves everything in one go. Reality is messy, and our tools need to be flexible enough to reflect that. We should be skeptical of any AI workflow that promises to do the thinking for us. The real value is in the preparation—the configuration of the environment—that allows us to do our best thinking with a bit less friction.