December 21, 2024 · 12 min read · Education

The FlightX Playbook: How to Build Your Own Disruption Team

A practical guide for universities and edtech companies that want to out-experiment their way to relevance.

I never miss a Diary of a CEO episode. Steven Bartlett is one of those rare figures who combines genuine business success with intellectual honesty about how he got there. No guru nonsense. No pretending he has all the answers. Just relentless experimentation and a willingness to share what he learns.

So when I discovered how he actually runs his business, it clicked. Bartlett doesn't run an innovation lab. He doesn't have a transformation office. He doesn't commission consultants to write strategy documents about the future of media.

He has a Head of Failure.

Grace Miller's job at Bartlett's Flight Story isn't to prevent failure. It's to increase the rate of failure. To run more measured experiments, faster, across every team. To apply scientific method to business decisions. To learn what works by systematically discovering what doesn't.

The results speak for themselves. A 10-second change to the podcast intro yielded a 300% increase in subscribers. AI-generated podcast episodes now match human-hosted episodes in retention metrics. The Diary of a CEO grew into one of the world's largest podcasts while traditional media declined.

Higher education and edtech are facing the same disruption that hit media. The same playbook applies. But almost nobody in the sector is running it.

Here's how to build your own disruption team, adapted specifically for universities and education technology companies.

The Core Philosophy: Out-Fail the Competition

Before the tactics, understand the philosophy.

What I admire most about Bartlett is that he's not selling certainty. He's honest about not knowing what will work. His edge isn't better predictions. It's faster learning.

His approach can be summarised in one line: "The greatest companies are not great because they've had great ideas. They're great because they out-fail their competition."

This inverts how most educational institutions think about innovation. The traditional approach is to minimise failure. Plan extensively. Seek consensus. Pilot cautiously. Scale only when certain.

The FlightX approach is to maximise learning by maximising experiments. Most experiments fail. That's the point. Each failure teaches something. The organisation that runs 100 experiments and has 90 fail learns more than the organisation that runs 10 experiments and has 2 fail.

For higher education, this is a fundamental mindset shift. Academic culture celebrates being right. Publications are peer-reviewed to ensure accuracy. Courses are refined over years. Failure is career-limiting.

But the environment has changed. AI capabilities are evolving monthly. Student expectations are shifting quarterly. The careful, deliberate approach that served universities for centuries is now a competitive disadvantage.

You can't out-think disruption. You can only out-experiment it.

Role 1: The Head of Failure

Every disruption team needs someone whose explicit job is to increase experiment velocity. At Flight Story, that's Grace Miller, Head of Failure and Experimentation.

What the role does:

The Head of Failure doesn't run experiments themselves. They enable experiments across the entire organisation. When someone has a question, the Head of Failure transforms it into a testable hypothesis. They design controlled experiments with measurable variables. They ensure experiments actually get run, measured, and learned from.

Bartlett describes it: "Her job is to make all of our teams fail more often by making sure experiments are measurable. Not just changing things and calling it an experiment."

What this looks like in higher education:

A university Head of Failure would work across faculties, not within one. They'd take questions like "Would students engage more with AI-augmented tutorials?" and turn them into actual experiments. Control group, treatment group, defined metrics, time-bound test.

They'd maintain an experiment backlog. They'd run weekly experiment reviews. They'd celebrate failures publicly, extracting and sharing the learning. They'd build the institutional muscle memory of experimentation.

What this looks like in edtech:

An edtech Head of Failure would work across product, content, and growth teams. They'd take questions like "Would gamification improve completion rates?" and turn them into rapid tests. Not months of development followed by a launch. A quick prototype, a small cohort, a measured result.

They'd ensure the company is always running multiple experiments simultaneously. They'd prevent the "big bet" mentality where everything rides on one product launch.

How to hire for this:

Look for someone with:

  • Scientific method training (doesn't have to be a scientist, but needs the mindset)
  • Cross-functional credibility (can work with academics, technologists, administrators)
  • Comfort with ambiguity (most experiments won't have clear outcomes)
  • Communication skills (needs to make failure visible and celebrated)
  • Political savvy (will face resistance from risk-averse culture)

The title matters. "Head of Failure" is provocative by design. It signals that failure is not just tolerated but expected. If your organisation can't stomach the title, you might not be ready for the role.

Role 2: The Data Scientist

You can't learn from experiments you don't measure. Flight Story employs a full-time Data Scientist, Charles Kakou, working alongside the Head of Failure.

What the role does:

The Data Scientist ensures experiments are properly instrumented. They define metrics before experiments run. They analyse results with statistical rigour. They distinguish signal from noise. They build dashboards that make experiment results visible to everyone.

What this looks like in higher education:

Most universities have institutional research teams, but they're typically focused on compliance reporting and historical analysis. A disruption team needs forward-looking measurement capability.

What's the baseline completion rate for this course? What's the statistical significance threshold for declaring an experiment successful? How do we control for cohort differences? These questions need rigorous answers.

The Data Scientist would also build the infrastructure for rapid measurement. If it takes three months to get data on an experiment, you won't run many experiments. The goal is measurement in days, not semesters.
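To make this concrete, here's a minimal sketch of the kind of check that sits at the heart of the role: a two-proportion z-test comparing completion rates between a control and a treatment cohort. The function name and the cohort numbers are hypothetical, and a real Data Scientist would layer cohort controls and pre-registered thresholds on top of something this simple.

```python
from math import sqrt
from statistics import NormalDist


def completion_rate_test(control_completed, control_total,
                         treatment_completed, treatment_total):
    """Return (observed lift, two-sided p-value) for the difference in completion rates."""
    p_control = control_completed / control_total
    p_treatment = treatment_completed / treatment_total
    # Pooled proportion under the null hypothesis of no difference.
    pooled = (control_completed + treatment_completed) / (control_total + treatment_total)
    se = sqrt(pooled * (1 - pooled) * (1 / control_total + 1 / treatment_total))
    z = (p_treatment - p_control) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_treatment - p_control, p_value


# Hypothetical cohorts: 180 of 400 students completed in control, 215 of 400 in treatment.
lift, p = completion_rate_test(180, 400, 215, 400)
print(f"Observed lift: {lift:.1%}, p-value: {p:.3f}")
```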

What this looks like in edtech:

Edtech companies often have analytics, but it's focused on product metrics: DAU, retention, conversion. A disruption team needs experimental analytics: hypothesis validation, A/B test rigour, learning extraction.

The Data Scientist ensures you're not fooling yourself. It's easy to see patterns in noise. It's easy to declare victory prematurely. Rigorous measurement keeps the team honest.
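One concrete guard against fooling yourself is checking, before an experiment starts, whether the planned cohort is even large enough to detect the lift you care about. Here's a minimal sketch, assuming a simple two-proportion design; the baseline and target rates are hypothetical examples.

```python
from math import ceil
from statistics import NormalDist


def required_sample_size(baseline, target, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-proportion test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = baseline * (1 - baseline) + target * (1 - target)
    return ceil((z_alpha + z_beta) ** 2 * variance / (target - baseline) ** 2)


# Detecting a lift from 45% to 50% completion needs on the order of 1,500
# learners per group -- far more than many pilots enrol.
print(required_sample_size(0.45, 0.50))
```

If the answer is bigger than your pilot, the honest conclusion is that the experiment can't answer the question as designed, not that the result "looked promising."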

How to hire for this:

Look for someone with:

  • Statistical training (can design experiments properly, understands significance)
  • Data engineering capability (can build measurement infrastructure, not just analyse)
  • Communication skills (can explain findings to non-technical stakeholders)
  • Speed orientation (comfortable with "good enough" measurement rather than perfect)
  • Educational domain knowledge (understands what metrics actually matter for learning)

Role 3: The Builder

Experiments require things to test. Someone needs to build prototypes, mockups, MVPs. Fast.

What the role does:

The Builder creates testable artefacts quickly. Not production-quality products. Functional prototypes that let you learn whether an idea has merit. They optimise for speed over polish. They're comfortable throwing away work when experiments fail.

In the AI era, this role has transformed. A skilled Builder with AI assistance can create in days what used to take weeks. Prototypes that would have required a team can be produced by one person.

What this looks like in higher education:

A university Builder might create:

  • A prototype AI tutor for a specific course module
  • A mockup of an alternative credentialing system
  • A functional pilot of a new learning experience
  • A test version of an industry partnership model

The key is speed. The Builder isn't creating the final product. They're creating something good enough to test the hypothesis.

What this looks like in edtech:

An edtech Builder might create:

  • A rapid prototype of a new feature
  • An alternative onboarding flow
  • A test version of a pricing model
  • A mockup of a partnership integration

Again, speed over polish. The goal is learning, not launching.

How to hire for this:

Look for someone with:

  • Full-stack capability (can build end-to-end without dependencies)
  • AI-augmented workflow (uses AI to accelerate, not just assist)
  • Comfort with imperfection (happy to ship rough work that enables learning)
  • Domain knowledge (understands education well enough to build meaningful experiments)
  • Low ego (comfortable having most of their work thrown away)

The Operating Model

Having the right people isn't enough. You need the right operating model.

Weekly experiment cadence:

At Flight Story, even the social media team reports weekly on experiments run in the last seven days. This cadence forces action. If you're not running experiments, you have nothing to report.

For a university disruption team, this might mean:

  • Monday: Review last week's experiment results
  • Tuesday-Thursday: Run current experiments, prepare new ones
  • Friday: Document learnings, update experiment backlog

For an edtech disruption team, the cadence might be even faster. Daily standups focused on what's being tested, what's being learned.

Hypothesis-driven experiments:

Not all change is experimentation. Bartlett emphasises: "Not just changing things and calling it an experiment."

Real experiments have:

  • A clear hypothesis ("We believe X will cause Y")
  • Defined metrics ("We'll measure Y by looking at Z")
  • Control conditions ("We'll compare against baseline/control group")
  • Success criteria ("The experiment succeeds if Z improves by N%")
  • Time bounds ("We'll run for two weeks then evaluate")

This rigour is essential. Without it, you're just making changes and hoping for the best.
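If it helps to see those five elements written down as a single artefact, here is a minimal sketch of an experiment record as a Python dataclass. The field values are hypothetical illustrations, not Flight Story's actual template.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class Experiment:
    hypothesis: str         # "We believe X will cause Y"
    metric: str             # How Y is measured
    control: str            # What the treatment is compared against
    success_criterion: str  # What counts as a win, fixed before the test starts
    start: date
    duration: timedelta     # Hard time bound: the experiment ends here

    @property
    def end(self) -> date:
        return self.start + self.duration


exp = Experiment(
    hypothesis="AI-augmented tutorials will increase weekly engagement",
    metric="Tutorial attendance and task completion per student",
    control="Parallel tutorial groups taught without AI augmentation",
    success_criterion="Engagement improves by at least 15% over control",
    start=date(2025, 2, 3),
    duration=timedelta(weeks=2),
)
print(f"Evaluate on {exp.end}: {exp.hypothesis}")
```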

Kill criteria:

Some experiments succeed and should be scaled. Some fail and should be killed. Some are inconclusive and need refinement.

The disruption team needs clear kill criteria. If an experiment doesn't show promise within the defined timeframe, it dies. No extending timelines. No moving goalposts. No zombie experiments consuming resources.
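Here's a minimal sketch of what applying kill criteria mechanically might look like, so the verdict is decided by the rules set up front rather than by whoever is most attached to the experiment. The thresholds are illustrative assumptions, not a recommended standard.

```python
def decide(observed_lift, p_value, days_elapsed, time_bound_days,
           min_lift=0.15, alpha=0.05):
    """Return a verdict once the pre-registered time bound is reached."""
    if days_elapsed < time_bound_days:
        return "keep running"   # no early verdicts, and no extensions later
    if observed_lift >= min_lift and p_value < alpha:
        return "scale"          # significant win that clears the pre-set bar
    if observed_lift >= min_lift:
        return "refine"         # promising but unproven: redesign it, don't extend it
    return "kill"               # below the bar at the deadline: stop, extract the learning


print(decide(observed_lift=0.04, p_value=0.31, days_elapsed=14, time_bound_days=14))
# -> kill
```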

This is culturally hard in higher education, where initiatives tend to persist indefinitely. The disruption team models a different approach: try fast, learn fast, kill fast.

Learning extraction:

Failed experiments are only valuable if you extract the learning. Every completed experiment should produce:

  • A clear result (succeeded/failed/inconclusive)
  • An explanation of why
  • Implications for future experiments
  • Recommendations for the broader organisation

Flight Story shares these learnings publicly within the company. The disruption team should do the same. Failure becomes a contribution, not a shame.

Protecting the Team

Disruption teams face organisational antibodies. The existing culture will try to slow them down, constrain them, absorb them into normal processes.

Executive sponsorship:

The team needs protection from the top. At Flight Story, Bartlett himself sponsors the experimentation culture. He's said he couldn't have a failure and experimentation team in a traditional corporate structure because "I'd have to explain myself to people."

In a university, this means Vice-Chancellor or Provost sponsorship. Someone who can shield the team from governance requirements that would kill experiment velocity.

In an edtech company, this means CEO or founder sponsorship. Someone who can protect the team from pressure to ship features instead of run experiments.

Ring-fenced resources:

The team needs dedicated budget that doesn't compete with operational priorities. If every experiment requires budget approval, you won't run experiments.

This is hard in higher education, where budgets are historically allocated and new initiatives compete for marginal resources. The disruption team needs protected funding.

Governance exemptions:

Normal governance processes are designed for normal operations. They're not designed for rapid experimentation. The disruption team needs exemptions.

This might mean:

  • Ethics approval fast-track for bounded educational experiments
  • Curriculum committee bypass for pilot modules
  • Procurement exemption for small-scale tool testing
  • HR flexibility for hiring experimental roles

Without these exemptions, the team will be governed into irrelevance.

Physical and psychological separation:

Some disruption teams benefit from physical separation. Not isolation, but enough distance from the mothership to develop their own culture.

More important is psychological separation. The team needs permission to think differently, act differently, fail differently. They can't be held to the same success metrics as operational teams.

What to Experiment On

The playbook provides a structure. But what should higher education and edtech actually experiment on?

For universities:

  • AI-augmented instruction models (what's the right human/AI balance?)
  • Alternative credentialing (do micro-credentials, portfolios, or competency demonstrations work?)
  • Flexible curriculum structures (can students design their own pathways?)
  • Industry integration (embedded programs, real-world projects, employer partnerships)
  • Research dissemination (can AI make research accessible and actionable?)
  • Business model variations (subscription, outcome-based, employer-funded)

For edtech:

  • AI tutoring approaches (what level of AI autonomy works best?)
  • Engagement mechanisms (gamification, social learning, accountability)
  • Content formats (video, interactive, AI-generated, user-generated)
  • Pricing models (subscription, freemium, outcome-based, B2B)
  • Distribution channels (direct, partnership, platform)
  • Retention interventions (what actually prevents dropout?)

The specific experiments matter less than the velocity. Run many. Learn fast. Scale what works. Kill what doesn't.

Starting Tomorrow

You don't need to build a full disruption team to start. You can begin with the mindset.

This week:

  • Identify one question you've been debating but not testing
  • Turn it into a hypothesis with measurable outcomes
  • Design a bounded experiment you could run in two weeks
  • Run it

This month:

  • Designate someone as experiment coordinator (even part-time)
  • Create an experiment backlog of questions worth testing
  • Run three experiments
  • Review results and extract learnings

This quarter:

  • Make the case for a dedicated disruption role
  • Ring-fence budget for experimentation
  • Establish weekly experiment cadence
  • Celebrate your first public failure

This year:

  • Build the full team (Head of Failure, Data Scientist, Builder)
  • Run 50+ experiments
  • Document and share learnings systematically
  • Measure how experiment velocity correlates with outcomes

The Cost of Not Doing This

Bartlett's prediction for media applies equally to education: disruption is coming regardless. The only question is whether you're disrupting yourself or being disrupted by others.

AI-generated educational content is coming. Personalised learning at scale is coming. Alternative credentials are coming. The institutions that figure out what works will thrive. The institutions that wait to see what others learn will fall behind.

I've listened to hundreds of hours of Bartlett interviewing the world's most successful people. The pattern is consistent: they didn't wait for certainty. They experimented relentlessly. They failed more than their competitors. They learned faster.

Flight Story's approach gave them a decisive advantage in media. The same approach can give universities and edtech companies a decisive advantage in education.

But only if they start experimenting. Now.

The playbook is here. The only question is whether you'll run it.

Jason La Greca

Jason La Greca is the founder of Teachnology and works in educational technology at a major Australian university. He's spent twenty years watching educational institutions debate changes that could have been tested in weeks. Teachnology helps education organisations build disruption capability.

Ready to Build Your Own Disruption Team?

Take the AI Readiness Assessment or explore Teachnology Advisory to start building the experimentation capability that drives relevance.