AI Strategy · 6 min read · 2026-02-14

Hiring Engineers in the Age of AI

The reality of interviewing frontend engineers today. What separates true engineers from vibecoders in an AI-augmented world.

Geddy
Senior Web Engineer / Lead



The Reality of Interviewing Frontend Engineers Today

I've been interviewing candidates again lately as a frontend engineering lead. Here's the reality I've faced.

Developers call themselves seniors. But when pair coding comes up, some get uncomfortable. Some openly admit they're worried they'd need AI tools...

To be fair — the pair coding I run isn't a pressure cooker. It's a 1.5-hour meeting, with the coding challenge planned for about 45 minutes at a comfortable pace. Nothing complex. It's designed to reveal structural thinking, the ability to spot and solve issues, and implement reasonably basic functionality. It's not a whiteboard hazing ritual.
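For flavour, here's a task in roughly that spirit. This is a hypothetical example I've made up for illustration, not the actual exercise: small enough for 45 minutes, but it still shows how a candidate structures a data transformation.

```typescript
// Hypothetical pair-coding-style task (illustrative only, not the real
// interview exercise): group a flat list of cart items by category and
// compute per-category totals, sorted highest first.

interface LineItem {
  name: string;
  category: string;
  price: number; // unit price
  qty: number;
}

interface CategoryTotal {
  category: string;
  total: number;
}

function totalsByCategory(items: LineItem[]): CategoryTotal[] {
  // Accumulate totals in a Map so the pass stays linear.
  const totals = new Map<string, number>();
  for (const item of items) {
    const current = totals.get(item.category) ?? 0;
    totals.set(item.category, current + item.price * item.qty);
  }
  // Materialise the Map and sort by total, descending.
  return [...totals.entries()]
    .map(([category, total]) => ({ category, total }))
    .sort((a, b) => b.total - a.total);
}

const cart: LineItem[] = [
  { name: "keyboard", category: "hardware", price: 80, qty: 1 },
  { name: "mouse", category: "hardware", price: 40, qty: 2 },
  { name: "IDE licence", category: "software", price: 200, qty: 1 },
];

console.log(totalsByCategory(cart));
// software: 200, hardware: 160
```

What I watch for isn't whether it compiles on the first try. It's whether the candidate reaches for a sensible structure (a Map, a single pass) or fights the data shape the whole way through.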

Could an anxious candidate underperform in this setup? Sure. But the alternative — trusting the take-home task they may not have written themselves, trusting the CV they may have inflated — has its own blind spots. No interview format is perfect. Pair coding at least gives you a live signal. I'd rather acknowledge that trade-off honestly than pretend any method is foolproof.

Not so long ago, we were writing code. Structuring apps. Engineering, in other words. I'd only give AI credit for delivering reasonably well-written code within the past 12 months, at best. Only very recently have I observed a boost in its ability to recognise and reuse patterns, plan more efficiently, implement more thoughtfully. Thoughtfully. That word, applied to artificial intelligence.

The Take-Home Task Problem

Async home tasks used to be another way to test skills. Candidates would come up with a solution using their own brain and hands. They owned the solution. They could dig into the code and answer what they did and why they did it.

We're drifting away from that.

Don't get me wrong — I'm in for this ride with AI. An incredible breakthrough, especially in software. Productivity rocket. Multi-agents. Parallelisation. Systems integration. Automation with AI behind the scenes. MCP servers. AIs talking to AIs. N8N workflows where a single human prompt triggers entire chains of API calls and orchestrated sequences. The leverage is real.

But engineers must own their solutions. Understand them well.

Where Is the Line?

The question isn't whether candidates should use AI. The question is whether they can stand behind the code they ship. Can they explain the trade-offs? Walk through the architecture decisions? Can they explain why we're doing it this way, so non-technical colleagues can understand and contribute to the solution? Can they debug it under pressure without reaching for a prompt?

The way I still see it: the title "senior" should mean you've internalised the fundamentals deeply enough that AI amplifies your thinking, not replaces it. AI is just an assistant, an executor of a precisely controlled direction. And I still hold on to this idea even with less experienced engineers.

The Divergence

Here's what I'm seeing in the market. The engineers who are sharp-minded, navigate well within a codebase, and exploit the power of AI coding tools are a rarity. And they're expensive. Companies start bidding wars: counter-offers to keep them, aggressive packages to poach them. That kind of engineer is in genuine demand.

Meanwhile, AI is increasingly helping less experienced developers promote themselves as more than they're capable of. But what happens when the AI can't help you? The model hallucinates. The context window runs out. The API goes down mid-sprint. The codebase is too proprietary for the model to reason about. "What do you do when the AI can't help you?" — that's a real question, and increasingly a revealing one.

I believe we're heading towards a very interesting split. On one side — the true engineers. The ones who were here before AI. Who understand systems, architecture, trade-offs. AI made them terrifyingly productive. On the other side — the vibecoders. The ones who prompt their way to a solution but can't stand behind it when things break.

Vibing might actually be fine for a lot of business needs. If someone can prompt their way to a working app that ships and makes money, many companies will take that deal. But this stops working when you have a grown business with complex technical nature, where every adjustment carries risk and responsibility. That's where vibe coding becomes increasingly dangerous. Ownership becomes crucial. Logic and structural thinking become critical. There's a probability of success with vibe coding, sure. But the probability of failure shouldn't be ignored — and in high-stakes systems, you can't afford to.

What separates the engineer from the vibecoder isn't whether they use AI. I use AI heavily. I write specs, context files, let coding agents generate implementation. But I also dig into the code. I find inefficiencies. I steer corrections. Sometimes small changes are quicker and more efficient to just do myself than to prompt for. The real syntax doesn't get beaten out of you if you ever did real coding — and that instinct, that ability to audit the output and know when it's wrong, is the actual indicator of seniority. Don't get me wrong, there are plenty of devs who deliver with AI, perhaps not in the cleanest or most efficient ways. But when we're talking about seniors — that editorial layer is the difference.
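To make "finding inefficiencies" concrete, here's the kind of thing I mean. The data and names are hypothetical, but the pattern is one AI tools genuinely produce: a join implemented with `Array.find` inside a loop, which is O(n·m), quietly fixed with a one-time index.

```typescript
// A common AI-generated inefficiency, and the audit fix.
// (Hypothetical names and data, for illustration.)

interface User { id: number; name: string }
interface Order { userId: number; amount: number }

// Generated version, kept as a comment for contrast — O(n·m),
// because every order re-scans the whole users array:
//
//   orders.map((o) => ({
//     ...o,
//     userName: users.find((u) => u.id === o.userId)?.name ?? "unknown",
//   }));

function joinOrders(users: User[], orders: Order[]) {
  // Audit fix: build the index once, then every lookup is O(1).
  const byId = new Map<number, string>();
  for (const u of users) byId.set(u.id, u.name);
  return orders.map((o) => ({
    ...o,
    userName: byId.get(o.userId) ?? "unknown",
  }));
}

const users: User[] = [{ id: 1, name: "Ada" }, { id: 2, name: "Linus" }];
const orders: Order[] = [{ userId: 2, amount: 30 }, { userId: 9, amount: 5 }];
console.log(joinOrders(users, orders));
```

Both versions pass the same tests on a ten-row fixture. Only one of them survives contact with a production dataset, and spotting the difference at review time is exactly the editorial layer I'm describing.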

And the middle? They're either hustling towards the strong side, thriving and growing — or slowly sliding towards the vibers. The ones who'll need more and more context, more tokens, more prompting to keep up with a growing codebase they don't fully understand.

This is only my opinion. Time might reshape this view. But right now, the divergence feels real.

2026 Hiring Red Flags & Green Flags

After enough interviews, patterns emerge. Here's what I look for now.

Red Flags

  • Panics or deflects when you say "no AI for this 45-min pair session"
  • Can demo a flashy app but can't explain trade-offs without prompting
  • "I let AI handle the details" — without showing any auditing or debug instinct
  • Gets stuck the moment the happy path breaks

Green Flags

  • Jumps at pair programming — treats it like a collaboration, not a threat
  • When code breaks: immediately reasons about root cause, not just re-prompts
  • Asks sharp questions back about your stack's pain points and architecture
  • Uses AI as a multiplier, but clearly owns the output
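The "reasons about root cause" flag is easiest to probe with a small broken snippet. Here's a hypothetical one I could imagine using (not from a real codebase): the classic async-`map` trap, where re-prompting is slower than simply reading the code.

```typescript
// Hypothetical live-debugging probe: why doesn't this sum work?
// fetchPrice is a stand-in for a real API call.

const fetchPrice = async (id: number): Promise<number> => id * 10;

// Buggy version shown to the candidate:
//
//   const prices = ids.map(async (id) => await fetchPrice(id));
//   const total = prices.reduce((sum, p) => sum + p, 0);
//
// `map` with an async callback yields Promise<number>[], so `reduce`
// concatenates promise objects into garbage instead of summing numbers.

async function totalPrice(ids: number[]): Promise<number> {
  // Fix: resolve all the promises first, then reduce plain numbers.
  const prices = await Promise.all(ids.map((id) => fetchPrice(id)));
  return prices.reduce((sum, p) => sum + p, 0);
}

totalPrice([1, 2, 3]).then((t) => console.log(t)); // 60
```

A candidate who immediately says "those are promises, not numbers" is reasoning about root cause. A candidate who pastes it into a chat window and waits is telling you something too.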

The distinction isn't about AI usage. It's about whether the thinking behind the code is theirs.

The Leap Ahead

Here's what I also believe — we're getting close to a leap. Closer than most realise.

The reality is, we will need low to no coding for the majority of things we're building today. That's not a distant future. It's approaching fast. And when it arrives, we'll have to find new ways to balance how we interview, how we assess competence, how we define what software engineering even is.

Perhaps it evolves into something closer to product engineering — a discipline where the value isn't in writing the code, but in shaping the system. Defining the outcome. Understanding the user, the architecture, the constraints, the trade-offs. Engineering the what and why, not just the how.

I believe we're somewhere in the middle of that journey. The best thing we can do right now is recognise the direction and start preparing, both ourselves and the way we evaluate the people we work with.


What's your experience interviewing engineers lately? I'd like to hear how other leads are navigating this.

TL;DR

  • AI has made it trivially easy to generate plausible-looking code, which makes interviews that test output rather than thinking largely useless
  • The signal worth looking for is whether a candidate understands why something works — not just that it works
  • Vibecoders can ship features in calm conditions; they fall apart when the system misbehaves or the requirements are ambiguous
  • Strong engineers use AI to go faster — they still own the decisions, the architecture, and the debugging when things go sideways
  • The interview bar needs to move from "can you produce code" to "can you reason about code you didn't write"

If AI writes the code but the engineer can't explain it, you haven't hired an engineer — you've hired a prompt.

Geddy

Senior Web Engineer / Lead

Engineering leadership • AI innovation • Product thinking. 20+ years building scalable web solutions.