It seems logical, doesn't it? To get better, more consistent output from an AI pair programmer, you should give it a clear set of instructions. My thinking went a step further: why reinvent the wheel? Why not just grab a comprehensive, battle-tested list of rules from another advanced tool and plug it into my setup?
This experiment was a continuation of my explorations into AI-assisted development, but it led me somewhere unexpected. The attempt to create a "plug-and-play" expert failed, but the lessons learned revealed a far more effective and collaborative way of working with AI.
Personal Context & Tools
My daily workflow is centered in Visual Studio Code. The key players in this experiment were:
GitHub Copilot: The core AI assistant, used in both inline and chat modes.
Copilot Instructions Feature: The ability to provide custom guidance via a .vscode/copilot-instructions.md file.
My Goal: To make Copilot's suggestions adhere to my project's specific coding standards without constant manual correction.
I typically follow a pragmatic approach to development, valuing consistency and clarity over dogmatic adherence to any single methodology. The goal was to codify this pragmatism into rules for my AI partner.
The Failed Experiment: The "Universal Rulebook" Fallacy
My hypothesis was simple: a good set of rules for one AI agent should be good for another. I had been using the Cursor editor with the popular awesome-cursorrules repository, and it worked well in that environment. The rules helped Cursor generate clean, consistent code. I assumed I could port this success over to GitHub Copilot in VS Code.
I copied a large chunk of these proven rules directly into my .vscode/copilot-instructions.md file. The result was a lesson in context: rules are not universally portable.
Context-Blind Enforcement: Even on a relatively fresh project, the agent became overly aggressive. I had a rule like
"Always use arrow functions for React components."
While my project did use arrow functions, Copilot began to apply this rule with a sledgehammer, suggesting aggressive refactors on any function it encountered, often in ways that broke the subtle stylistic patterns my team had established. It lacked judgment, creating noise and unnecessary churn (a sketch of this kind of refactor follows the JSDoc example below).
Verbose and Noisy Suggestions: Another rule I implemented was,
"Ensure all functions are documented with detailed JSDoc comments."
When I then asked Copilot to help with a simple utility function, the output was technically correct but practically absurd. An illustrative example would look something like this:
```typescript
/**
 * Capitalizes the first character of the given string.
 * @param {string} str The string to capitalize.
 * @returns {string} The capitalized string.
 */
const capitalize = (str) => str.charAt(0).toUpperCase() + str.slice(1);
```
The documentation, forced by the rule, was longer than the code itself. This added cognitive overhead for simple, self-explanatory functions.
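The arrow-function rule produced similar churn. Here's a hypothetical before/after sketch (the helper and its names are invented for illustration, not taken from my project) of the kind of refactor Copilot kept pushing:
```typescript
// Hypothetical illustration only - not actual project code.

// Before: an ordinary utility function, not a React component at all.
export function formatUserName(user: { firstName: string; lastName: string }): string {
  return `${user.firstName} ${user.lastName}`.trim();
}

// After: the blanket "always use arrow functions" rule pushed Copilot to
// suggest rewriting it anyway: a pure style change that creates a noisy
// diff with no functional benefit (kept commented out so this compiles).
//
// export const formatUserName = (user: { firstName: string; lastName: string }): string =>
//   `${user.firstName} ${user.lastName}`.trim();
```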
This approach abandoned the core idea of context-aware assistance. The specific negative outcomes were clear:
Quality issues: The AI-generated code felt alien. It followed the new, transplanted rules but ignored the project's existing patterns, creating a maintenance headache.
Impact on iteration: Instead of accelerating development, the rigid rule set became a bottleneck. It was like working with an overzealous assistant who had memorized a textbook but had zero practical field experience.
Quantifiable problems: I spent more time fighting, deleting, or manually editing Copilot's "helpful" but misguided suggestions than I would have spent just writing the code myself. My flow state was constantly broken.
Principles That Actually Work
After deleting the entire instruction file in frustration, I started over. This time, I discovered a couple of principles that transformed the AI from a dogmatic rule-follower into a genuine collaborator.
1. Co-Author Your Instructions with the AI
Instead of pasting in a foreign set of rules, I used a built-in VS Code feature. In my empty .vscode/copilot-instructions.md file, I used the "Generate instructions..." command. Copilot analyzed the code in my current workspace and then proposed a set of instructions tailored to my project's reality.
It identified existing patterns and suggested rules to reinforce them. It was the complete opposite of my first experiment: instead of forcing my code to conform to the rules, the rules were generated to conform to my code. This aligns with the core tenets of Test-Driven Development (TDD), where the desired outcome (in this case, the existing code style) defines the path forward.
Benefit: The rules are organic, project-specific, and context-aware from day one.
2. Use the AI to Refine Its Own Instructions
My initial mistake was treating instructions as a static document. The effective approach is to make the AI itself a partner in refining them. My new workflow is a continuous, AI-driven feedback loop:
After a pairing session with the agent, I switch from the inline mode to the Copilot Chat view. There, I prompt it to perform a self-assessment:
"Analyze our recent conversation. Based on the guidance and corrections I provided, suggest improvements to my .vscode/copilot-instructions.md file."
The AI then reviews our interaction, including the times I had to give it procedural hints like "let's make a short overview before writing the code", and suggests new, more effective rules (the step-by-step communication protocol in the example file below grew out of exactly this kind of hint). This delegates the work of refining the instructions to the tool that will be using them. It's a powerful and efficient way to make the AI a better collaborator over time.
Example: My Evolved Instruction File
After a few of these refinement cycles, my .vscode/copilot-instructions.md file started to look less like a generic style guide and more like a practical collaboration agreement. It's a living document, but here’s a snapshot of what it contains:
# AWS Serverless Infrastructure as Code Guidelines
This project implements a serverless application using AWS Lambda and API Gateway, with infrastructure defined in Terraform. Follow these guidelines when making changes:
## Project Architecture
### Component Structure
```
src/                    # Lambda function implementations
├── hello_world.py      # Example Lambda handler
└── [other_functions]   # Additional Lambda functions
terraform/              # Infrastructure definition
├── modules/            # Reusable Terraform modules
└── main.tf             # Main infrastructure configuration
tests/                  # Integration tests
└── test_api.py         # API endpoint tests
```
### Key Design Patterns
1. **Lambda Function Structure**:
```python
# Standard Lambda handler pattern - follow this structure
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    logger.info("Processing request")  # Always log entry
    # ... function logic ...
    result = {"message": "hello"}  # placeholder for the real payload
    return {
        'statusCode': 200,
        'body': json.dumps(result)
    }
```
2. **Terraform Module Usage**:
```hcl
# Follow this pattern when adding new Lambda functions
module "my_function" {
source = "./modules/lambda"
function_name = "<service>-<action>"
source_file = "../src/<filename>.py"
handler = "<filename>.lambda_handler"
runtime = "python3.9"
}
```
## Code Quality Standards
- Write explicit, descriptive variable names over short, ambiguous ones
- Follow the existing project's coding style for consistency
- Use named constants instead of hardcoded values
Example:
```python
# Good
MAX_API_RETRIES = 3
is_api_healthy = retry_count < MAX_API_RETRIES
# Avoid
m = 3
healthy = n < m
```
## Development Approach
- Don't invent changes beyond what's explicitly requested
- Follow security-first approach in all code modifications
- Don't modify files outside the requested scope
- Don't suggest improvements to files not mentioned in the task
Example of focused scope:
```typescript
// Request: "Add email validation to User class"
// Good - only modifying requested file
class User {
  validateEmail(email: string): boolean {
    const emailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
    return emailRegex.test(email);
  }
}
// Avoid - suggesting changes to other files
// ❌ "We should also update UserRepository.ts"
// ❌ "Let's improve the existing validation in Utils.ts"
```
## Communication Style & Protocol
### Step-by-Step Communication Pattern
```
1. User Request
   User: "Need to implement email validation"
2. Copilot Overview
   Copilot: "Overview: Adding email validation
             - Will create test for invalid email (15 lines)
             - Will implement validator (20 lines)
             Let's start with the test?"
3. User Approval
   User: "Looks good"
4. Implementation
   Copilot: *provides code in proper format*
5. Confirmation
   User: "OK" or "Looks good"
```
## Test-Driven Development Workflow
### TDD Cycle
```
┌── 1. Discuss Test Requirements
│      User: "Need password validation"
│      Copilot: "Let's test minimum length first"
│
├── 2. Write Test (Red)
│      describe('PasswordValidator', () => {
│        it('requires minimum 8 characters', () => {...});
│      });
│
├── 3. Implement Code (Green)
│      class PasswordValidator {
│        isValid(password: string): boolean {...}
│      }
│
├── 4. Optional: Refactor
│      - Improve naming
│      - Remove duplication
│      - Enhance readability
│
└── 5. Next Test or Complete
       - User approval required before proceeding
```
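Example of steps 2-3 written out in full (a minimal sketch, assuming Jest; file names and the exact assertion are illustrative):
```typescript
// 2. Write Test (Red) - passwordValidator.test.ts
import { PasswordValidator } from './passwordValidator';

describe('PasswordValidator', () => {
  it('requires minimum 8 characters', () => {
    const validator = new PasswordValidator();
    expect(validator.isValid('short')).toBe(false);     // 5 characters: rejected
    expect(validator.isValid('longenough')).toBe(true); // 10 characters: accepted
  });
});

// 3. Implement Code (Green) - passwordValidator.ts
export class PasswordValidator {
  isValid(password: string): boolean {
    return password.length >= 8;
  }
}
```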
## Testing Instructions
### Test Guidelines
- Write focused, single-purpose API integration tests
- Always use `get_api_url()` to fetch endpoints dynamically
- Test edge cases explicitly and include proper delays
- Use descriptive test names
Example:
```python
import time

import requests

# Good test names (bodies elided)
def test_endpoint_returns_200_on_valid_input(): ...
def test_endpoint_handles_empty_payload(): ...
def test_endpoint_returns_404_on_invalid_path(): ...

# Good patterns
def test_new_endpoint():
    api_url = get_api_url()  # project helper for resolving the deployed endpoint
    time.sleep(5)  # Allow API Gateway to propagate
    response = requests.get(f"{api_url}/path")
    assert response.status_code == 200
    assert "expected_value" in response.text
```
### Running Tests
```bash
cd tests
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
pytest -v
```
Unexpected Discovery: Guidance Trumps Raw Power
Here's the most surprising insight: a well-instructed GitHub Copilot, even using the standard (and technically free) models, consistently produces more useful, contextually-aware code than a more advanced model like Claude 3.5 Sonnet working without instructions.
The raw power of a cutting-edge LLM often leads to more "creative" but less relevant code for the specific task at hand. My carefully curated instruction set, running on a standard model, was simply a better pair programmer for my project.
Why this matters: It proves that effective AI assistance is less about the raw intelligence of the Large Language Model and more about the quality of the guidance you provide. Thoughtful engineering and context-setting can be more valuable than simply paying for a more powerful brain.
The Central Paradox: To Think Less, You Must First Think More
This leads to the central paradox of using AI assistants effectively: to offload cognitive work to an AI, you must first do the meta-cognitive work of codifying your own development philosophy and collaboration style.
You can't just install a tool and expect it to read your mind. This paradox exists because AI assistants are not colleagues; they are incredibly sophisticated pattern matchers. Without your explicit context, they will default to the most generic patterns from their training data.
Effective use actually requires:
Self-Awareness: A clear understanding of your own coding patterns and project conventions.
Iterative Refinement: Treating the AI's instruction set as a project artifact that evolves over time.
Collaborative Mindset: Shifting from "commanding" the AI to "guiding" its process.
Forward-Looking Conclusion
The dream of a universal, plug-and-play rulebook for AI assistants is a dead end. As I found in my previous reflections on whether we can think less with AI, the answer is no—we must think differently.
The copilot-instructions.md file is not just a configuration; it's the DNA of your AI collaborator for a specific project. It should be checked into version control and evolve alongside your README.md and package.json.
Stop searching for the perfect list of rules to copy and paste. Start with an empty file, click "Generate instructions...", and then use the chat to refine them after each session. The most effective AI assistant isn't the one with the most powerful model, but the one you’ve taken the time to teach.
I use a common git submodule with per-language instructions in separate markdown files. Then I instruct whichever LLM I'm using to read through those, in addition to the code, and create its own rule file (CLAUDE.md, GEMINI.md, and so on). Sometimes I tell it to refer to the existing rule files from other LLMs.