There's a popular metaphor circulating in AI development circles: AI is like a sous-chef, and you're the executive chef.
The sous-chef handles the prep work. Chops the vegetables. Reduces the sauces. Executes the techniques. But the executive chef designs the menu, tastes for quality, and takes responsibility for what leaves the kitchen.
It's a comforting metaphor. It suggests humans remain in control. AI does the grunt work; we do the thinking. The hierarchy is clear. The accountability is clear. Everything is fine.
I've worked in professional kitchens. So has my wife. Both of us during our undergraduate years: her in a Japanese izakaya in Tokyo, me in a high-volume catering and Italian restaurant (also in Tokyo). We know what kitchens actually look like when they're functioning. We know the hierarchy, the accountability, the pace, the pressure. To this day, we often cater for large groups of friends; it's our passion.
The sous-chef metaphor isn't just wrong. It's an insult to actual kitchens.
The metaphor is a lie.
Not because it's wrong about what should happen. But because it's catastrophically wrong about what is happening.
What's Actually Happening in the Kitchen
Let me describe what I see in most organisations using AI for development:
The "executive chef" prompts the AI: "Build me a customer portal."
The AI generates code. The executive chef glances at it. Maybe runs it to see if it works. Ships it.
That's not an executive chef directing a sous-chef. That's a restaurant owner who wandered into the kitchen, asked "what's for dinner?" and served whatever came out of the oven.
The sous-chef didn't just prep ingredients. The sous-chef:
- Decided what to cook
- Chose the ingredients
- Determined the techniques
- Plated the dish

And the "executive chef" just... approved it.
This isn't collaboration. This is abdication wearing the costume of oversight.
The Manifesto's Warning
The SAFE-AI Manifesto, signed by 49 researchers, uses this exact metaphor, and immediately undermines it:
"An emerging consensus treats AI as a coding hyper-assistant, a sort of sous-chef, following the lead of a human executive chef."
Note the framing: "emerging consensus." Not "how it works." Not "best practice." Just what people have agreed to believe.
The manifesto then spends thirty pages explaining why this consensus is dangerous. Why AI operating without genuine human oversight leads to vulnerabilities, failures, and harm. Why "following the lead" requires a lead that most humans aren't actually providing.
The metaphor assumes the executive chef is doing executive chef things:
- Designing the menu (defining requirements)
- Sourcing ingredients (understanding the domain)
- Tasting everything (reviewing outputs critically)
- Training the sous-chef (refining AI behaviour)
- Taking responsibility (owning outcomes)
How many AI-assisted developers are doing any of these things?
The Tasting Problem
Here's the crux of it: an executive chef tastes everything.
I remember watching the head chef at the catering company I worked for. Every sauce, every protein, every component: he'd taste it before it went out. Not occasionally. Every time. He'd adjust seasoning on the fly, reject dishes that weren't right, catch problems before they reached customers. It was relentless.
They don't just look at the plate and say "yep, that's food." They taste it. They know if the seasoning is off, if the sauce broke, if something is slightly wrong that will ruin the dish for the customer.
They can taste because they've spent years developing their palate. They understand flavour at a deep level. They know what good tastes like.
Most people using AI for development can't taste the code.
They can check if it runs. They can see if it produces the expected output for the expected input. But they can't taste it. They can't tell if the architecture is sound. If the security is adequate. If the edge cases are handled. If the approach is elegant or a disaster waiting to happen.
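To make that concrete, here's a deliberately simple, hypothetical sketch (the table, function, and data are invented for illustration): code that runs and returns the right answer for the obvious input, but carries a flaw you can only see by actually reading it.

```python
# Hypothetical, AI-generated-looking lookup code. Names and data are invented.
# It runs, and it returns the expected result for the expected input.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [("alice", 0), ("bob", 1)])

def find_user(name: str):
    # The flaw: user input is interpolated straight into the SQL string.
    query = f"SELECT name, is_admin FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

print(find_user("alice"))        # [('alice', 0)] -- looks like it works
print(find_user("' OR '1'='1"))  # every row comes back, admins included
# The fix a reviewer should insist on is a parameterised query:
#   conn.execute("SELECT name, is_admin FROM users WHERE name = ?", (name,))
```

It looks like food. It even tastes fine on the happy path. Whether you can spot the broken sauce in that query string is the whole question.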
You can't be an executive chef if you can't taste the food. You're just a customer who wandered into the kitchen.
The Accountability Inversion
In a real kitchen, the hierarchy is clear. The executive chef is responsible for everything that leaves the kitchen. If a dish makes someone sick, the executive chef answers for it. Not the sous-chef.
I saw this accountability play out in real time. When something went wrong (a dish sent back, a timing failure, an ingredient shortage) it rolled uphill, not downhill. The person at the top owned it, fixed it, and made sure it didn't happen again. That's what leadership in a kitchen looks like.
In AI-assisted development, we've inverted this.
When AI-generated code fails (when it has vulnerabilities, when it breaks in production, when it causes harm) who's responsible?
The developer says:
"I didn't write that code. The AI did."
The organisation says:
"We trusted our developer to review it."
The AI vendor says:
"We're not responsible for how our tool is used."
Everyone points at everyone else. The sous-chef takes the blame for the executive chef's failure to actually be an executive chef.
"We're creating an accountability vacuum. Systems that no one understands, deployed by people who can't explain them, operated by organisations that disclaim responsibility."
The sous-chef metaphor obscures this vacuum. It makes it sound like someone's in charge. But the metaphor only works if the executive chef is actually doing their job.
What Executive Cheffing Actually Requires
Let me be specific about what genuine oversight of AI-generated code requires:
Understanding the problem space
Before you prompt AI, you need to understand what you're trying to accomplish. Not "build me a portal" but "here's the user journey, here's the data model, here's the security requirements, here's what success looks like, here's what failure looks like."
An executive chef doesn't say "make me dinner." They say "we're doing a seven-course tasting menu for guests with shellfish allergies, focused on spring vegetables, and we need to accommodate a vegan."
Reviewing outputs critically
Not "does it run" but "is this the right approach? Are there better alternatives? What are the trade-offs? What could go wrong?"
This requires enough technical knowledge to evaluate the code. If you can't read it, you can't review it. If you can't review it, you're not the executive chef.
Understanding failure modes
What happens with unexpected inputs? Malicious inputs? Edge cases? High load? Network failures? An executive chef knows that the soufflé might fall, that the fish might be off, that the timing might slip. They plan for these things.
Taking genuine responsibility
If it fails, it's your failure. Not the AI's failure. Yours. Because you chose to deploy it. You signed off on it. Your name is on the dish.
If you're not willing to take responsibility for AI-generated code, you shouldn't be deploying AI-generated code.
The Skill Gap Problem
Here's the uncomfortable reality: most people using AI for development don't have the skills to be executive chefs.
This isn't an insult. It's a structural problem.
For years, we've hired and trained developers primarily for implementation skills. Can you write code? Can you debug? Can you work in this framework?
Those are sous-chef skills. Important skills. But not executive chef skills.
Sous-chef skills:
- Can you write code?
- Can you debug?
- Can you work in this framework?
Executive chef skills:
- Systems thinking
- Requirements analysis
- Security mindset
- Architecture judgment
- Risk assessment
- Ethical reasoning
These skills were "someone else's job": architects, analysts, security specialists. Now that AI handles implementation, these skills are the job. And most developers don't have them.
"Students should learn not only to create models but also to evaluate, critique, and refine AI-generated artifacts using modeling as a form of reasoning and oversight."
Evaluate. Critique. Refine. Oversight.
These are executive chef skills. We're not teaching them. We're sending sous-chefs into the kitchen and calling them executive chefs because they're the only ones there.
The Dunning-Kruger Kitchen
There's a particular failure mode I see constantly: people who think they're executive chefs because they don't know what they don't know.
They prompt AI. They get code. It runs. It seems to work. They ship it.
They think they've reviewed it. They think they've exercised judgment. They think they've done due diligence.
But they didn't catch:
- The SQL injection vulnerability, because they don't know what SQL injection looks like in code
- The race condition, because they've never debugged one (the sketch below shows how quietly one hides)
- The architecture problem, because they've never built a system that scaled
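Here's a minimal, hypothetical sketch of that second failure, the race condition: an inventory function (names and numbers invented for illustration) that passes a quick single-threaded test and still oversells under concurrent load.

```python
# Hypothetical sketch: an inventory decrement that passes a quick manual test
# but oversells under concurrency, because the check and the update are not
# one atomic step.
import threading
import time

stock = {"sku-123": 1}   # one unit left
sold = 0

def buy(sku: str) -> bool:
    global sold
    if stock[sku] > 0:        # several threads can all pass this check...
        time.sleep(0.01)      # stand-in for a payment or network call
        stock[sku] -= 1       # ...and all reach the decrement
        sold += 1
        return True
    return False

threads = [threading.Thread(target=buy, args=("sku-123",)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sold, stock)  # usually several "sales" of the single remaining unit
```

Run once by hand, it looks correct. Run under real traffic, and the check and the decrement stop being one step. If you've never debugged a race, nothing on the screen tells you it's there.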
They're not bad people. They're not lazy. They're just operating outside their competence without realising it.
This is Dunning-Kruger in the kitchen. People who genuinely believe they're executive chefs because they don't know what executive chefs actually do.
AI makes this worse. It's so good at generating plausible-looking code that it creates a false sense of competence. "I made this work!" No, AI made it work. You approved it without understanding it.
The Honest Alternative
Let me propose a more honest metaphor: AI is a private chef that speaks a foreign language.
You can describe what you want to eat in broad terms. The chef will prepare something. It might be delicious. It might be poisonous. You can't tell by looking because you don't read the language on the ingredient labels.
You can taste the result. You can tell if you like it. But you can't tell if it's safe until after you've eaten it.
This is closer to what's actually happening. AI generates code. You can see if it runs. You can't see if it's secure, reliable, or appropriate unless you read code, and most people prompting AI don't read code well enough to catch subtle problems.
In this metaphor, the options are:
1. Learn the language (develop technical skills to review AI output)
2. Hire a translator (bring in experts who can review)
3. Accept the risk (acknowledge you're shipping code you don't understand)
4. Don't use the chef (build capability another way)
All of these are honest. What's not honest is pretending you're an executive chef when you can't taste the food.
Becoming a Real Executive Chef
If you want to use AI responsibly, if you want to actually be the executive chef, here's what that requires:
- Invest in understanding. Before you prompt, understand the domain. Understand the requirements. Understand what good looks like. If you can't describe what you want in detail, you're not ready to prompt.
- Develop review capability. Either develop it yourself or bring in people who have it. Someone needs to be able to read and evaluate the code AI generates. If no one on your team can do this, you're not ready to ship AI-generated code.
- Create review processes. Don't just glance at AI output and approve it. Have structured reviews. Check for security. Check for edge cases. Test adversarially (a small example follows this list). Document what you checked and why.
- Accept responsibility. If you ship it, you own it. Period. Don't hide behind "the AI generated it." You deployed it. It's yours.
- Know your limits. There's nothing wrong with saying "I'm not qualified to review this." That's honesty. What's wrong is pretending you reviewed something you couldn't actually evaluate.
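To make "test adversarially" concrete, here's a hedged, illustrative sketch: a tiny test aimed at the hypothetical find_user() from the earlier sketch (the module path is invented), feeding it hostile input and asserting it never returns more than the caller asked for.

```python
# Illustrative only: an adversarial test for the hypothetical find_user()
# from the earlier sketch. The module path is invented; point it at wherever
# the function actually lives.
from portal.users import find_user  # hypothetical import

def test_lookup_resists_injection():
    # A classic injection payload; a lookup by one name must never return
    # every row in the table.
    rows = find_user("' OR '1'='1")
    assert len(rows) <= 1
```

Against the vulnerable version sketched earlier, this test fails, which is exactly the point: a structured, adversarial check surfaces what a glance at the happy path never will.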
The Coming Kitchen Disasters
Right now, kitchens everywhere are full of self-appointed executive chefs who can't taste the food.
They're shipping code they don't understand. They're deploying systems they can't evaluate. They're taking on liability they can't manage.
Some of these dishes will poison customers. Some already have: the security breaches, the system failures, the vulnerabilities exploited.
As AI gets more powerful (as it generates more complex code, as it takes on more autonomous decision-making) the gap between what's being deployed and what's being understood will widen.
We'll have more executive chefs who've never cooked, running kitchens they don't understand, serving dishes they can't taste.
The metaphor is comforting. Reality is coming.
The Real Choice
You have two options:
Option 1: Become a real executive chef
Invest in the skills, processes, and capabilities required to genuinely oversee AI. Take the time to understand what you're building and why. Accept responsibility for outcomes.
Option 2: Stop pretending
Acknowledge that you're using AI without genuine oversight. Accept the risks. Be honest about what you don't know.
What you can't do is pretend that prompting AI and approving outputs constitutes meaningful oversight. That's not executive cheffing. That's liability with extra steps.
The sous-chef metaphor is seductive because it lets us feel in control without being in control. It lets us claim the status of oversight without doing the work of oversight.
But the kitchen doesn't care about metaphors. Eventually, the dishes come out. Eventually, someone tastes them.
And if you're the executive chef in name only, you'll discover what you've been serving.
Written by
Jason La Greca
Jason La Greca is the founder of Teachnology. He worked in professional kitchens during his undergraduate years, long enough to know what real accountability looks like when the dinner rush hits. He's since watched too many self-appointed executive chefs burn down software kitchens they never understood. Teachnology helps organisations develop the genuine capability to oversee what they build.
The SAFE-AI Manifesto referenced in this article was authored by Lukyanenko, Samuel, Tegarden, Larsen, and 45 additional researchers.
Ready to become a real executive chef? Take the AI Readiness Assessment to understand your oversight capabilities.
Need help building genuine oversight? Learn how Teachnology Advisory helps organisations develop real accountability frameworks.