AI Safety · 10 min read · December 2024

Vibe Coding Will Get Someone Killed

49 researchers just published a warning. The industry isn't listening.

In February 2025, Andrej Karpathy coined the term "vibe coding" to describe a development style where you prompt an AI to generate code, accept what it produces, and ship it without deeply understanding what you've built.

The term was meant to be playful. The practice is spreading like wildfire.

Three months later, 49 researchers from institutions across four continents published a manifesto warning that this approach (AI-generated code deployed without deliberate human oversight) is a disaster waiting to happen.

They're right.

And the disaster won't be a failed feature or a corrupted database. At the current trajectory, vibe coding will get someone killed.


The Security Research Is Damning

Let's start with what we already know.

The Databricks Red Team demonstrated how vibe coding led to a critical remote code execution vulnerability. The AI used Python's pickle module in an unsafe way, a well-known attack vector that any experienced developer would catch. The vibe coder didn't catch it because they didn't understand what they'd built.
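
To make concrete how easy this class of bug is to ship without noticing, here's a minimal, hypothetical sketch (not the Databricks team's actual finding) of how unpickling untrusted data becomes remote code execution, alongside the safer data-only alternative:

```python
import json
import os
import pickle


class Exploit:
    """Hypothetical malicious payload: pickle calls __reduce__ when loading."""

    def __reduce__(self):
        # On unpickling, pickle will call os.system("echo pwned") on the victim's machine.
        return (os.system, ("echo pwned",))


malicious_bytes = pickle.dumps(Exploit())

# The pattern to avoid: deserialising bytes you don't control.
pickle.loads(malicious_bytes)  # runs the attacker's command

# A safer default for untrusted input: a data-only format such as JSON.
user_profile = json.loads('{"name": "alice", "role": "viewer"}')
```

The lesson is unglamorous: never unpickle data you don't control, and prefer a format that can't encode behaviour when input crosses a trust boundary.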

Separate research has shown that AI-generated code samples frequently contain known vulnerabilities, including SQL injection, one of the oldest and most exploited attack vectors in existence.
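
For the avoidance of doubt about how mundane these flaws are, here's a hedged sketch of the pattern: the string-interpolated query that assistants still frequently emit, next to the parameterised version any reviewer should insist on (the table and inputs are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")


def find_user_unsafe(name: str) -> list:
    # Unsafe: user input is interpolated straight into the SQL text.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()


def find_user_safe(name: str) -> list:
    # Safe: a parameterised query treats the input as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()


payload = "' OR '1'='1"
print(find_user_unsafe(payload))  # [('alice',)] -- the filter is bypassed, every row leaks
print(find_user_safe(payload))    # [] -- no user is literally named "' OR '1'='1"
```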

These aren't edge cases. They're predictable outcomes of a development approach that explicitly bypasses review and understanding.

The SAFE-AI Manifesto, published by researchers from Virginia, MIT, Toronto, Singapore, and dozens of other institutions, puts it bluntly:

"While vibe coding democratizes programming and accelerates prototyping, recent studies have raised concerns about its reliability and security."

That's academic understatement for: this is dangerous and we're worried.


The System 1 Problem

The manifesto frames the issue through Daniel Kahneman's System 1 and System 2 framework.

System 1

Fast, intuitive, pattern-matching. It's how you catch a ball or recognise a face. It works brilliantly when the situation matches learned patterns.

System 2

Slow, deliberate, analytical. It's how you solve a novel problem, consider consequences, and make careful judgments.

AI operates like System 1. It's extraordinarily good at pattern matching: generating code that looks right based on patterns in its training data. It's fast. It's confident. And it has no idea whether what it's producing is actually appropriate for your specific context.

Vibe coding is what happens when System 1 runs unsupervised at scale.

The manifesto's authors warn:

"Accepting and proceeding with whatever code is suggested by AI amounts to letting System 1 take control. As a result, seemingly minor errors could lead to large financial losses and compliance violations. What is worse, if allowed to scale uncontrolled, AI-generated software has the capacity to cause great harm."

The capacity to cause great harm. That's not hyperbole. That's 49 researchers choosing their words carefully.


The Speed Trap

Here's the seductive logic of vibe coding:

"AI can generate code in seconds. Why would I spend hours understanding it? I'll just test it and ship it. Move fast."

This logic is a trap.

First: You can't test for what you don't understand

If you don't know how the code works, you don't know what edge cases to test. You're testing the happy path and hoping the sad paths don't exist.

Second: Security vulnerabilities often aren't visible in testing

SQL injection doesn't show up when you're testing with friendly inputs. Remote code execution doesn't manifest until someone crafts a malicious payload. By the time you discover the vulnerability, it's in production. Or worse, it's been exploited.

Third: Speed now creates slowness later

Every piece of code you don't understand is technical debt you can't pay down. When something breaks (and something will break), you'll be debugging a system that's opaque to you. The hours you saved generating code will be dwarfed by the days you spend fixing it.

The manifesto cites Sun Tzu:

"Strategy without tactics is the slowest route to victory. Tactics without strategy is the noise before defeat."

Vibe coding is all tactics. It's noise before defeat.


Where This Gets Lethal

"Okay, but we're talking about bugs and security issues. That's bad, but 'getting someone killed' is dramatic."

Is it?

Consider where AI-generated code is already being deployed:

Healthcare systems

AI is generating code for patient record systems, diagnostic tools, and drug interaction checkers. A vulnerability in these systems doesn't just expose data; it can lead to wrong treatments, missed diagnoses, and fatal drug interactions.

Autonomous vehicles

The software controlling braking, steering, and collision avoidance is increasingly developed with AI assistance. A subtle bug in sensor interpretation or decision logic doesn't cause a crash report. It causes a crash.

Industrial control systems

Power grids, water treatment, manufacturing equipment. These systems are being modernised with software, often developed under time pressure, increasingly with AI assistance. A vulnerability here doesn't just take down a website. It can cause explosions, contamination, blackouts.

Financial systems

Not directly lethal, but a vulnerability that drains someone's life savings, cancels their insurance, or destroys their credit can have fatal downstream effects. People have died because of financial system failures.

The manifesto authors understand this:

"At the societal level, [aggressive release cycles] can lead to harm through failed systems."

They cite the CrowdStrike outage of 2024 as an example: a single faulty update that brought down systems worldwide. Now imagine that happening with code that was never understood by the humans who deployed it. Imagine it happening in a hospital. An airport. A nuclear facility.


The Deliberation Imperative

The manifesto proposes a framework they call SAFE-AI. The first principle is Strategic Deliberation over speed and scale.

"Strategic deliberation involves practicing slower System 2 thinking before leveraging AI's speed and scale. It safeguards against costly, if not catastrophic failures due to the careless use of the awesome powers of AI."

This isn't anti-AI. These researchers aren't Luddites. They're saying: use AI, but think first. Let AI handle implementation after humans have done the hard work of understanding the problem, defining requirements, considering impacts.

The manifesto proposes that humans focus on "critical requirements": the things that absolutely must be right. Safety. Security. Ethics. Core business rules. Let AI generate the rest, but humans must own what matters.

This is the opposite of vibe coding, where humans own nothing and AI generates everything.


The Accountability Vacuum

Here's another dimension of this problem: when something goes wrong with vibe-coded software, who's responsible?

The developer?

They didn't write the code. They might not even understand it.

The AI?

It's a tool. Tools don't go to prison.

The company?

They'll claim the developer was responsible for review. The developer will claim they were following accepted practice. Everyone will point at everyone else.

We're creating an accountability vacuum. Systems that no one understands, deployed by people who can't explain them, operated by organisations that disclaim responsibility.

When those systems fail (and they will fail), there will be no one to blame and no one to fix them.

The manifesto calls for "transparency and traceability of development processes." Vibe coding is the opposite: opacity and untraceability by design.


The Professional Obligation

If you're a developer, a product manager, or a CTO, you have a professional obligation to understand what you deploy.

This isn't about being anti-AI or anti-progress. It's about basic professional responsibility.

Doctors don't prescribe drugs they don't understand because the pharmaceutical rep said they work. Engineers don't sign off on bridges they can't analyse because the software said it's fine. Lawyers don't file briefs they haven't read because AI generated them.

Why should software developers be different?

"In the age of powerful AI, human developers must understand the underlying problems that they are building software to solve. A simple modeling task may take a few extra minutes, but it forces a developer to think before acting."

Think before acting. It shouldn't be a radical proposition. But in the vibe coding era, it increasingly is.


What Responsible AI-Augmented Development Looks Like

I'm not arguing against using AI for development. I use AI constantly. It's transformative.

But I use it within a framework of understanding and oversight:

  • Understand the problem before generating solutions. What are we actually trying to accomplish? What are the constraints? What could go wrong? AI can help explore these questions, but humans must own the answers.
  • Review what AI generates. Not just "does it work" but "do I understand why it works." If you can't explain the code, you shouldn't deploy it.
  • Focus human attention on what matters. Security. Safety. Ethics. Core logic. Let AI handle boilerplate, but own the critical paths.
  • Test adversarially. Not just "does it handle good inputs" but "what happens with malicious inputs." If you don't understand the attack surface, hire someone who does. (A minimal sketch of this kind of test follows this list.)
  • Document decisions. Why did we build it this way? What alternatives did we consider? What risks did we accept? Future you (and future investigators) will want to know.
  • Accept that slower is sometimes faster. The time you spend understanding code is time you won't spend debugging mysterious failures or explaining security breaches.
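
As a concrete starting point for that adversarial habit, here is a sketch of what such a test might look like with pytest. The `find_user` stub and its contract (reject hostile input or match nothing) are assumptions for illustration; in practice you'd import the real function from your codebase:

```python
import pytest


# Stand-in for whatever lookup function the AI generated; in a real suite you
# would import it from your own codebase rather than defining it here.
def find_user(name: str) -> list:
    if len(name) > 256 or "'" in name or ";" in name:
        raise ValueError("suspicious input rejected")
    return []  # pretend no user matches


MALICIOUS_INPUTS = [
    "' OR '1'='1",                    # classic SQL injection probe
    "admin'; DROP TABLE users; --",   # stacked-query probe
    "<script>alert(1)</script>",      # stored-XSS probe if the value is ever rendered
    "A" * 10_000,                     # oversized input
]


@pytest.mark.parametrize("payload", MALICIOUS_INPUTS)
def test_malicious_input_is_rejected_or_neutralised(payload):
    # The assumed contract: hostile input either raises a validation error or
    # matches nothing. It must never be interpreted as SQL or markup.
    try:
        result = find_user(payload)
    except ValueError:
        return
    assert result == []
```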

This is essentially what the SAFE-AI manifesto proposes with its MADE process: Model, Agree, Develop, Evaluate. Think, align, build, verify.

It's not complicated. It's just disciplined.


The Coming Reckoning

Right now, vibe coding is in its honeymoon phase. People are shipping faster than ever. Everything seems to work. The sceptics seem like old people yelling at clouds.

This won't last.

The vulnerabilities being introduced today will be exploited tomorrow. The systems no one understands will fail in ways no one can fix. The accountability vacuum will claim its first high-profile victim.

And when that happens, when someone dies because of code that was generated without understanding, deployed without review, and failed without explanation, there will be a reckoning.

The question is whether you want to be on the right side of that reckoning.

The researchers who wrote the SAFE-AI manifesto have done their part. They've documented the risks. They've proposed alternatives. They've called for deliberation over speed.

Now it's on practitioners to listen.

Think before you vibe.

Tags: vibe coding, AI safety, software development, SAFE-AI, security, professional responsibility

Written by

Jason La Greca

Jason La Greca is the founder of Teachnology. He uses AI extensively for development, within a framework of understanding and responsibility. He believes AI should augment human capability, not replace human judgment.

The SAFE-AI Manifesto referenced in this article was authored by Lukyanenko, Samuel, Tegarden, Larsen, and 45 additional researchers from institutions worldwide.

