In 1935, a plane crash killed two of America's best test pilots, flying an aircraft the press declared "too much aeroplane for one man to fly." The solution wasn't to make planes simpler. It was to build systems that made complexity manageable. AI is at the same inflection point.
I need to confess something before we start: I'm a massive aviation nerd.
Behind me as I write this is a shelf of fighter jet magazines I've collected since childhood. F-14 Tomcats. F-15 Eagles. Tornado GR1s. I can still tell you the thrust-to-weight ratio of aircraft I read about when I was twelve. Medical reasons meant I never got my pilot's licence, which remains one of my life's genuine disappointments. These days I fly in Microsoft Flight Simulator 2024 in VR, threading a Cessna through mountain passes or attempting (and frequently failing) carrier landings in an F/A-18.
I don't apologise for the nerdiness of this article. If anything, it's been decades in the making.
Because aviation isn't just a hobby for me. It's a masterclass in how humans learn to manage complexity safely. And right now, as organisations rush to adopt AI, they're ignoring almost everything aviation learnt the hard way.
Let me tell you a story.
The Crash That Changed Everything
On October 30, 1935, the most advanced aircraft ever built crashed on takeoff at Wright Field, Ohio.
The Boeing Model 299 was a marvel. Four engines instead of the standard two. A 103-foot wingspan. Retractable landing gear. Controllable-pitch propellers. More switches, gauges, and controls than any aircraft before it.
Major Ployer P. Hill, one of the Army Air Corps' most experienced test pilots, was at the controls. He had tested dozens of aircraft. This should have been routine.
The plane lifted off, climbed to 300 feet, stalled, and crashed in a fireball. Hill and Boeing's chief test pilot Leslie Tower were killed. The investigation found the cause: Hill had forgotten to release the gust lock, a mechanism that secured the control surfaces whilst the plane was parked.
The conclusion seemed obvious. One newspaper declared the Model 299 "too much aeroplane for one man to fly." Critics said it was too complex to be safe. Boeing nearly went bankrupt. The Army awarded the contract to a simpler, inferior aircraft.
But a small group of test pilots saw it differently.
The problem wasn't that the plane was too complex. The problem was that they were relying on human memory to manage that complexity. Even the best pilot couldn't reliably remember every critical step when operating a machine this sophisticated.
Their solution was remarkably simple: a checklist. A single index card listing every action required for takeoff, flight, and landing. Release the brakes. Close all doors and windows. Unlock the elevator controls. Check the fuel mixture. Every step, in order, every time.
With the checklist, pilots flew the Model 299 for 1.8 million miles without incident. The Army eventually ordered almost 13,000 of them. Renamed the B-17 Flying Fortress, it became one of the most important aircraft of World War II.
That checklist didn't just save Boeing. It transformed aviation.
The Transformation Nobody Expected
In the 1950s, commercial aviation was terrifyingly dangerous.
The accident rate was approximately 27 crashes per million departures. If you flew regularly, the odds of eventually dying in a plane crash were uncomfortably real. And when a crash did happen, it was usually catastrophic: in that decade, the average accident killed around 80% of those aboard.
Today, commercial aviation is one of the safest forms of transport ever created.
The fatal accident rate is approximately 0.2 per million flights. That's one fatal accident for every 5 million flights. In 2017, commercial aviation recorded its safest year in history, with only 0.11 hull losses per million flights. In 2023, it was even safer.
The chance of dying on a commercial flight is now estimated at 1 in 29 million. You're more likely to be killed by lightning.
How did aviation achieve this? Not through a single breakthrough. Through the systematic development of structures, processes, and cultures that made safe operations possible even as aircraft became exponentially more complex.
The story of how aviation went from deadly to safe is the playbook every organisation needs as they adopt AI.
The Five Pillars of Aviation Safety
When you look at how aviation transformed itself, five interconnected elements emerge:
1. Checklists and Standard Operating Procedures
The humble checklist, born from the Model 299 crash, became the foundation of aviation safety.
It seems almost absurdly simple. Write down what needs to happen. Do those things in order. Check them off. But this simple innovation addressed a fundamental truth about human cognition: our memories are unreliable, especially under pressure, especially with complex systems.
Checklists don't replace expertise. They augment it. The best pilots in the world still use checklists because they know that expertise doesn't make you immune to forgetting critical steps. It makes you aware of how easy it is to forget.
Today, commercial aircraft have checklists for every phase of flight: preflight, taxi, takeoff, climb, cruise, descent, approach, landing, post-landing. There are checklists for normal operations and checklists for emergencies. There are checklists for when checklists fail.
This systematisation didn't slow aviation down. It sped it up. When everyone follows the same procedures, handoffs are seamless, training is efficient, and errors are caught before they cascade.
2. Crew Resource Management (CRM)
In 1977, two Boeing 747s collided on a foggy runway in Tenerife, killing 583 people. It remains the deadliest accident in aviation history.
The investigation revealed something disturbing: the KLM first officer had concerns about taking off. He expressed them hesitantly. The captain, one of KLM's most senior pilots, dismissed them and proceeded. The first officer didn't push back. Everyone died.
This wasn't a technical failure. It was a cultural failure. The rigid hierarchy of the cockpit meant junior crew members didn't challenge senior ones, even when safety was at stake.
The response was Crew Resource Management, which fundamentally reimagined how cockpits operated. The core principles:
Anyone can speak up. Junior crew members are not just permitted but required to voice safety concerns. The phrase "I'm concerned about..." became standard.
Flat hierarchy for safety. Whilst the captain remains in command, safety concerns create temporary equality. A first officer questioning a captain's decision isn't insubordination. It's protocol.
Closed-loop communication. Instructions are repeated back to confirm understanding. "Descend to 3,000 feet." "Descending to 3,000 feet." No assumptions.
Shared situational awareness. The whole crew maintains awareness of the whole situation, not just their individual tasks. Everyone knows what everyone is doing.
CRM didn't weaken authority. It strengthened safety. Captains still make final decisions. But they make them with better information, because people aren't afraid to share what they see.
3. Confidential Incident Reporting (ASRS)
In 1976, NASA and the FAA created the Aviation Safety Reporting System (ASRS), one of the most important innovations in safety history.
The insight was simple but revolutionary: near-misses contain the same lessons as accidents, but without the body count. If you could get people to report near-misses, you could learn from mistakes before they became fatal.
The problem was fear. Pilots and controllers didn't report incidents because they feared punishment. Admitting you almost caused an accident could end your career. So incidents went unreported, and the lessons they contained were lost.
ASRS solved this with three guarantees:
Confidentiality. Reports are de-identified. Names, airlines, and identifying details are stripped before analysis. You can't be tracked from your report.
Immunity. If you report an inadvertent violation within 10 days, that report can't be used against you for enforcement action. You're protected for coming forward.
Learning focus. The goal isn't to punish. It's to learn. Reports are analysed for patterns, and insights are shared across the entire industry through newsletters and alerts.
The results speak for themselves. Since 1976, ASRS has received over 1.6 million reports. These reports have identified countless hazards before they caused accidents. The lessons learnt are shared globally, so a near-miss in Denver can prevent a crash in Dubai.
This system only works because people trust it. Break that trust once, and reporting stops. So the confidentiality is sacred.
4. Global Knowledge Sharing
Aviation's safety culture extends beyond individual organisations. There's an entire ecosystem for sharing what works and what doesn't.
Accident investigations are public. When a plane crashes, investigators publish detailed reports explaining exactly what happened and why. No hiding, no spin. These reports become teaching materials for the entire industry.
Manufacturers issue bulletins. When Boeing or Airbus discovers a potential issue, they issue service bulletins to every operator worldwide. A problem found in one aircraft is fixed in all of them.
Regulatory bodies coordinate. The FAA, EASA, and other regulators share findings and harmonise standards. Safety improvements don't stop at national borders.
Training incorporates lessons learnt. Simulator scenarios are built from real accidents. Pilots practise handling the exact situations that have killed people before. The dead teach the living.
This radical transparency seems counterintuitive. Why would airlines share information that might embarrass them? Because everyone benefits when the whole industry gets safer. A crash anywhere hurts confidence everywhere. Airlines compete on many things, but they cooperate on safety.
5. Continuous Improvement and Regulation
Aviation didn't achieve safety and stop. It built systems for continuous improvement.
Flight data monitoring. Modern aircraft record hundreds of parameters continuously. These data are analysed to identify trends before they become problems. A pilot who consistently flies approaches too fast gets additional training, not after an incident, but before one.
Regular audits. Airlines undergo constant safety audits from regulators and industry bodies. These aren't gotcha exercises. They're opportunities to identify and fix issues.
Training never ends. Pilots don't just qualify once. They requalify regularly. They practise emergency procedures in simulators. They study accidents and near-misses. Learning is continuous.
Regulations evolve. When new hazards emerge, regulations change. After accidents revealed problems with automation dependence, training requirements changed. After runway incursions increased, procedures changed. The system adapts.
The AI Parallel
Now consider where most organisations are with AI adoption.
No checklists. People use AI tools however they want. There are no standard procedures for validating outputs, no consistent processes for different use cases, no documented steps for when things go wrong.
No CRM equivalent. Junior team members don't feel safe questioning AI-generated outputs that seem wrong. There's no culture of "I'm concerned about this result." Authority flows in one direction, and AI is treated as authoritative.
No incident reporting. When AI produces harmful or incorrect outputs, those incidents aren't systematically captured. There's no confidential system for reporting near-misses. Lessons are lost.
No knowledge sharing. Organisations don't share what they're learning about AI risks and failures. Every company is learning the same lessons from scratch, often the hard way.
No continuous improvement. There's no systematic process for learning from AI incidents and updating procedures. The same mistakes happen repeatedly.
Aviation in 1935 was exactly where AI is now: a powerful new technology operated by skilled people relying on intuition and expertise rather than systematic safety structures.
The Model 299 was too complex for human memory. AI systems are too. The question is whether we'll build the structures to manage that complexity, or wait for the equivalent of 583 people dying on a foggy runway.
What This Means for Your Organisation
This isn't just an analogy. It's a roadmap.
If you're adopting AI at scale, you need the equivalent of aviation's safety infrastructure. Not because regulators require it (though they will eventually). Because it's how you capture the benefits of AI without the catastrophic failures.
You need AI checklists. Standard procedures for different types of AI use cases. What validation steps are required before acting on AI output? What documentation is needed? What review processes apply? Not guidelines. Checklists.
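To make "not guidelines, checklists" concrete, here is a minimal sketch of what a checklist could look like once it's encoded as something you can version, audit, and actually run, rather than a document nobody opens. The class names, step wording, and example use case below are illustrative assumptions, not a prescription for your context.

```python
# Illustrative sketch only: an AI-use checklist encoded as data, so it can be
# versioned, audited, and checked before anyone acts on an AI output.
# Step names and structure are hypothetical examples, not a standard.
from dataclasses import dataclass, field


@dataclass
class ChecklistStep:
    description: str          # what the reviewer must confirm
    done: bool = False        # ticked off by the person, not the AI
    notes: str = ""           # evidence or reasoning, kept for the audit trail


@dataclass
class AIUseChecklist:
    use_case: str
    steps: list = field(default_factory=list)

    def incomplete(self):
        """Return the steps that still need human sign-off."""
        return [s for s in self.steps if not s.done]

    def ready_to_act(self) -> bool:
        """Only act on the AI output once every step is ticked."""
        return not self.incomplete()


# Hypothetical example: a checklist for AI-drafted customer communications.
checklist = AIUseChecklist(
    use_case="AI-drafted customer email",
    steps=[
        ChecklistStep("Facts and figures verified against source systems"),
        ChecklistStep("No confidential or personal data included"),
        ChecklistStep("Tone and claims reviewed by a named human owner"),
        ChecklistStep("Prompt and output logged for later review"),
    ],
)

if not checklist.ready_to_act():
    print(f"{checklist.use_case}: {len(checklist.incomplete())} step(s) outstanding")
```

The point isn't the code. It's that a checklist written down this explicitly can be followed, audited, and improved, exactly like the index card that saved the Model 299.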
You need AI-specific CRM. A culture where anyone can flag concerns about AI outputs without fear. Where "I'm not sure this is right" is encouraged, not punished. Where the person using the AI isn't the only person checking the AI.
You need incident reporting. A confidential, non-punitive system for capturing AI failures and near-misses. What went wrong? Why? What would prevent it next time? These reports become your learning database.
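As a sketch of what "learning database" might mean in practice, here is one possible shape for an incident record: de-identified, pattern-friendly, and focused on prevention rather than blame. The field names and example values are assumptions for illustration, not a standard schema.

```python
# Illustrative only: one possible shape for a de-identified AI incident report.
# Field names are hypothetical; the point is capturing the lesson, not the culprit.
from dataclasses import dataclass
from datetime import date


@dataclass
class AIIncidentReport:
    reported_on: date
    use_case: str                    # e.g. "contract summarisation"
    what_happened: str               # the failure or near-miss, in plain language
    how_it_was_caught: str           # human review, automated check, customer report...
    contributing_factors: list[str]
    suggested_prevention: str
    severity: str                    # "near-miss", "minor", "major"
    # Deliberately no names, teams, or identifying details: reports are
    # analysed for patterns, not used for enforcement.


report = AIIncidentReport(
    reported_on=date.today(),
    use_case="contract summarisation",
    what_happened="Summary omitted a liability clause; draft nearly sent to the client",
    how_it_was_caught="Second reviewer compared the summary against the source document",
    contributing_factors=["time pressure", "no source-comparison step in the process"],
    suggested_prevention="Add a mandatory clause-by-clause comparison step to the checklist",
    severity="near-miss",
)
print(f"[{report.severity}] {report.use_case}: {report.suggested_prevention}")
```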
You need knowledge sharing. Ways to share lessons across teams and, ideally, across organisations. The AI mistake your marketing team makes shouldn't have to be relearnt by your legal team.
You need continuous improvement. Regular reviews of AI incidents. Updates to procedures based on what you learn. Training that incorporates real failures. A system that gets better over time, not one that repeats the same mistakes.
This Is What Teachnology Does
This is exactly where Teachnology helps.
We help organisations build the structures that make safe AI adoption possible at scale. The playbooks. The processes. The cultural frameworks. The incident learning systems. The training programmes.
Not generic "AI governance" that lives in a PDF nobody reads. Practical, operational infrastructure that actually works. The equivalent of aviation's checklists and CRM and ASRS, adapted for AI in your specific context.
Aviation didn't get safe by accident. It got safe by building systematic approaches to managing complexity. It got safe by learning from failures instead of hiding them. It got safe by creating cultures where safety concerns could be raised without career risk.
AI adoption at scale requires the same infrastructure. The question is whether you'll build it proactively, or learn the lessons the way aviation did in the 1970s: through preventable tragedies that force change.
The Model 299 was too complex for human memory. AI is too. The checklist was a simple innovation that saved thousands of lives.
What's your checklist?
Written by
Jason La Greca
Founder of Teachnology. Building AI that empowers humans, not replaces them.
Connect on LinkedIn