Interviewing in the Age of AI: Rethinking Code Pairing Sessions
Recently, I shadowed a couple of coding interviews at Thoughtworks, where we allow candidates to use AI assistants. That should be straightforward, but I watched something fascinating unfold.
On one side, the interviewer struggled to guide the session. How much help should they provide? When should they intervene? And how do you evaluate someone who's getting suggestions from Copilot?
On the other side, the candidate—clearly capable but under interview stress—couldn't figure out how to showcase their strengths. Should they use AI for everything? Nothing? Where's the line between demonstrating skill and relying too heavily on assistance?
This awkward dance is playing out across the industry. We're allowing AI in interviews, but the choreography is still being figured out.
The Authenticity Problem
Traditional coding interviews have their place and will continue to exist. But as AI becomes an essential part of engineering—much like TDD became fundamental to good development practices—we need to evolve our evaluation methods. When we ask developers to write code on a whiteboard or in a sandbox without AI assistance, we're testing skills that have little bearing on their day-to-day effectiveness.
The real question isn't whether someone can remember the syntax for a binary search tree. It's whether they can:
Identify when and why to use specific algorithms
Critically evaluate AI-generated suggestions (because let's be honest, many of us are asking ChatGPT these days)
Integrate solutions into existing codebases
Communicate their reasoning clearly
A Better Approach: AI-Augmented Pair Programming
Instead of fighting the AI revolution, we should embrace it in our interviews. Here's what we've learned from running AI-assisted coding sessions:
The 50/50 Rule
Balance is everything. Spend roughly half the interview with AI assistance enabled, and half with it disabled. This reveals both strategic AI usage and fundamental coding skills.
During AI-enabled portions, listen for the candidate's thought process: "I'm asking Copilot for error handling patterns, but I need to evaluate if this fits our existing architecture." This demonstrates critical thinking, not passive consumption.
Start with Context, Not Blank Slates
Send candidates a small, well-structured codebase a few days before the interview. Think of it as the difference between asking an architect to design a house extension and asking them to design a whole house from scratch in 40-50 minutes.
When candidates arrive already familiar with your domain and patterns, you can focus on what matters: how they extend existing systems, handle edge cases, and make architectural decisions.
Focus on Integration Over Implementation
The most revealing moments happen when candidates need to connect their new code to existing systems. This is where you see their understanding of software design, their ability to follow established patterns, and their instinct for maintainable code.
What often impresses us more than clever algorithms is watching candidates thoughtfully reuse existing code patterns. When someone carefully studies how error handling is implemented across the codebase and applies those same patterns to their new feature—that demonstrates the kind of thinking that translates to productive team membership.
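To make that concrete, here's a minimal sketch of what "following the established pattern" can look like. All of the names here (OrderError, OrderNotFoundError, the new error class) are hypothetical stand-ins for whatever convention the interview codebase actually uses:

```python
# Hypothetical convention already present in the interview codebase:
# domain failures are raised as subclasses of a shared OrderError and
# handled once at the service boundary, never swallowed inline.

class OrderError(Exception):
    """Base class for domain errors (hypothetical convention)."""

class OrderNotFoundError(OrderError):
    def __init__(self, order_id: str):
        super().__init__(f"Order {order_id} not found")

# A candidate extending the system earns trust by giving their new
# failure mode the same shape instead of inventing a fresh convention:

class CancellationWindowExpiredError(OrderError):
    def __init__(self, order_id: str):
        super().__init__(f"Order {order_id} can no longer be cancelled")
```

The code itself is trivial; what it signals is that the candidate read the codebase before writing into it.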
What This Looks Like in Practice
Here's a concrete example of how this could work: we might give candidates a simple e-commerce order management system and, during the interview, ask them to add order cancellation functionality.
With AI enabled: They might ask ChatGPT about refund calculation strategies, then critically evaluate the responses against our business requirements.
With AI disabled: They navigate the existing codebase, understand the order lifecycle, and integrate their cancellation logic into the established patterns.
Throughout: They explain their reasoning, ask clarifying questions, and demonstrate how they'd test their implementation.
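By the end of the session, their work might converge on something like the following minimal sketch. Everything in it is illustrative: Order, OrderStatus, the cancel method, and the refund rule (full refund if paid, nothing once shipped) are assumptions standing in for whatever the real exercise defines.

```python
from dataclasses import dataclass
from enum import Enum, auto


class OrderStatus(Enum):
    PLACED = auto()
    PAID = auto()
    SHIPPED = auto()
    CANCELLED = auto()


class OrderError(Exception):
    """Domain error, reusing the codebase's existing convention."""


@dataclass
class Order:
    order_id: str
    total: float
    status: OrderStatus = OrderStatus.PLACED

    def cancel(self) -> float:
        """Cancel this order and return the refund amount.

        Mirrors the lifecycle checks used elsewhere in the (hypothetical)
        codebase: validate the state transition before mutating anything.
        """
        if self.status is OrderStatus.SHIPPED:
            raise OrderError(f"Order {self.order_id} has already shipped")
        if self.status is OrderStatus.CANCELLED:
            raise OrderError(f"Order {self.order_id} is already cancelled")
        # Assumed business rule: full refund if paid, nothing otherwise.
        refund = self.total if self.status is OrderStatus.PAID else 0.0
        self.status = OrderStatus.CANCELLED
        return refund


# One way a candidate might show how they'd test the new behaviour:
def test_cancelling_a_paid_order_refunds_the_total() -> None:
    order = Order(order_id="o-123", total=49.99, status=OrderStatus.PAID)
    assert order.cancel() == 49.99
    assert order.status is OrderStatus.CANCELLED
```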
The goal isn't a production-ready feature—it's seeing how they think, communicate, and work with both human teammates and AI tools.
The Implications
This shift requires interviewers to develop new skills too. Instead of checking if candidates remember specific APIs, we need to evaluate their judgment, communication, and architectural thinking.
But the payoff is significant: we end up hiring developers who can actually thrive in an AI-augmented world, rather than those who happen to be good at artificial interview constraints.
The developers entering our teams today will spend their careers working alongside AI. Our interviews should reflect that reality, not fight it.
The transition to AI-assisted development isn't coming—it's here. The question is whether our hiring practices will catch up before we miss out on the developers who are already thriving in this new paradigm.