Steven Bartlett, the entrepreneur behind The Diary of a CEO, made one of his best hiring decisions when he appointed a "Head of Failure and Experimentation." Her job isn't to prevent failure. It's to increase the rate of failure. To run more experiments, faster, and learn from what doesn't work before competitors do.
His philosophy is simple: "The greatest companies are not great because they've had great ideas. They're great because they out-fail their competition."
Australian universities are about to learn what happens when you're out-failed. Not by each other. By technology that doesn't care about your governance structures, your academic calendars, or your strategic planning cycles.
The question isn't whether universities should experiment more. It's whether they'll exist in recognisable form if they don't.
The Existential Threat Nobody Wants to Name
Let me be direct about what's coming.
Agentic AI systems can already perceive, reason, and act with minimal human oversight. They query databases, synthesise information, and deliver customised learning experiences without human mediation. They perform at "above-average college graduate" levels across multiple domains. And they're improving on timescales measured in months, not years.
The traditional university value proposition goes something like this: we have the experts, the credentials, the structured curriculum, and the validation mechanisms. You pay us, attend for three or four years, and we certify that you know things.
Every element of that proposition is under attack.
The experts? AI can access and synthesise the knowledge of every expert, instantly, for free. A student can now interrogate the sum total of human knowledge on any topic without booking office hours.
The credentials? Employers are already questioning whether degrees predict job performance. When AI can assess actual capability directly, the signalling value of credentials diminishes.
The structured curriculum? AI can create personalised learning paths that adapt in real-time to individual needs. The industrial-era model of everyone learning the same things in the same sequence looks increasingly absurd.
The validation? When AI can assess actual competence more reliably than exam performance signals it, what exactly are universities validating?
I'm not predicting the death of universities. I'm observing that their current model faces existential pressure, and most institutions are responding with the urgency of a committee that meets quarterly.
The Australian Response: Principles Without Action
How have Australian universities responded to this threat?
The Group of Eight produced high-level principles. Universities Australia issued statements. Individual institutions formed working groups, appointed AI leads, and published guidelines about academic integrity.
All of this is reasonable. None of it is sufficient.
Here's the gap: 71% of Australian university staff report using AI tools. But the usage is superficial. Administrative tasks. Email drafting. Basic research assistance. This isn't transformation. It's tinkering.
Meanwhile:
- Chinese universities are integrating AI across entire curricula
- US institutions are forging multi-billion dollar AI partnerships
- European universities are developing sovereign AI capabilities
- DeepSeek, built largely by Chinese university graduates, matches Western AI performance using 95% fewer resources
The window for Australian universities to establish themselves as essential nodes in the AI ecosystem is narrowing. Every semester spent in deliberation is a semester competitors spend building capability.
Why Universities Can't Experiment
Here's the structural problem: universities are institutionally incapable of the experimentation velocity required to navigate this transition.
Governance structures reward caution. Academic governance is designed for stability, not speed. Curriculum changes require committee approval. New programs require years of development. Risk is socialised across so many stakeholders that bold moves become impossible.
Incentives punish failure. An academic who runs ten experiments and has nine fail is not celebrated for learning velocity. They're questioned about their judgement. A dean who tries something bold and it doesn't work faces career consequences. The rational response is to do nothing innovative.
Planning cycles are glacial. Universities operate on academic calendars, strategic planning cycles, and accreditation timelines measured in years. AI capabilities are changing on timescales measured in months. By the time a university has approved an AI initiative, the technology has moved on.
Resource allocation is backward-looking. Budgets are based on historical patterns. New initiatives compete for marginal resources. There's no structural capacity for the kind of rapid reallocation that experimentation requires.
Culture celebrates certainty. Academic culture values being right. Publications are peer-reviewed to ensure accuracy before release. Courses are refined over years. This is the opposite of "fail fast, learn fast."
The result? Universities are structurally optimised for a world that no longer exists. They're bringing quarterly planning to a monthly fight.
What a Head of Failure Would Actually Do
Imagine an Australian university appointed a Head of Failure and Experimentation. Not as a symbolic gesture, but as a genuine mandate to increase experimentation velocity. What would that look like?
Run rapid curriculum experiments. Instead of multi-year program development, test new modules with small cohorts. Does an AI-augmented research methods course improve outcomes? Don't theorise. Run it with 50 students. Measure. Learn. Iterate or kill.
Test alternative credentialing. What if you offered micro-credentials that employers actually valued? What if you let students demonstrate competence through portfolios rather than exams? Don't commission a two-year study. Run a pilot. See what happens.
Experiment with AI integration. How should AI tutoring complement human instruction? What's the right balance? Nobody knows. The only way to find out is to try different models, measure outcomes, and learn. A Head of Failure would be running these experiments continuously.
Prototype new business models. What if continuing education was subscription-based? What if research was funded by outcome rather than grant? What if you partnered with industry to create embedded programs? These are testable propositions. Test them.
Fail publicly and learn loudly. The most valuable thing a Head of Failure could do is make failure visible and celebrated. Share what didn't work. Explain why. Create an institutional culture where experimentation is expected rather than exceptional.
Bartlett's Head of Failure increased his organisation's experiment velocity substantially within months. She works across all teams, applying the scientific method: hypothesis, controlled test, measurement, learning. The experiments range from micro-optimisations to fundamental format changes.
A university equivalent could do the same. But it requires structural permission that most institutions haven't granted.
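To make that loop concrete, here's a minimal sketch of what hypothesis, controlled test, measurement, and learning might look like for a curriculum pilot like the 50-student module above. Everything here is illustrative: the cohort scores are placeholders, `evaluate_pilot` is a hypothetical helper, and a two-sample t-test is just one simple way to compare outcomes.

```python
# A minimal sketch of the hypothesis -> test -> measure -> learn loop,
# applied to a hypothetical curriculum pilot. All data here is illustrative.
from statistics import mean
from scipy import stats

def evaluate_pilot(control_scores, pilot_scores, alpha=0.05):
    """Compare a pilot cohort against a control on one outcome measure.

    Returns 'iterate' if the pilot shows a significant improvement,
    'kill' otherwise. A real pilot would track several outcomes.
    """
    # Hypothesis: the AI-augmented module improves assessment scores.
    _, p_value = stats.ttest_ind(pilot_scores, control_scores)
    improved = mean(pilot_scores) > mean(control_scores)

    if improved and p_value < alpha:
        return "iterate", p_value   # evidence of improvement: refine and re-run
    return "kill", p_value          # no clear signal: record the learning, move on

# Placeholder scores; a real pilot would use the 50-student cohort's results.
control = [62, 58, 71, 65, 60, 68, 55, 63]
pilot = [70, 66, 74, 69, 72, 65, 71, 68]

decision, p = evaluate_pilot(control, pilot)
print(f"Decision: {decision} (p = {p:.3f})")
```

The statistics aren't the point. The point is that every experiment ends in an explicit, recorded decision: iterate or kill.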
The Intellectual Substrate Opportunity
Here's what universities could become if they moved fast enough: the intellectual substrate of the AI ecosystem.
Not a content provider that AI replaces. Not a credentialing body that employers bypass. But the foundational layer that AI systems depend on.
This means:
Knowledge validation. In an era of AI-generated content and synthetic information, someone needs to validate what's true. Universities have the methodology, the expertise, and the institutional legitimacy to be that validator. But they need to build the systems to do it at AI speed.
Ethical anchoring. AI systems need ethical frameworks, cultural context, and human values embedded in their operation. Universities can provide this. But they need to move from writing ethics papers to building ethics into systems.
Human capability development. AI will handle routine cognitive tasks. Universities should focus on what AI can't do: creative problem-solving, ethical reasoning, cultural intelligence, emotional sophistication. But this requires curriculum transformation, not curriculum tweaks.
Research infrastructure. AI development requires the kind of long-term, curiosity-driven research that commercial labs won't fund. Universities can be this infrastructure. But they need to make their research accessible to AI systems, not locked behind paywalls and incompatible formats.
The intellectual substrate opportunity is real. But it requires universities to experiment their way into it, not plan their way into it. Nobody knows exactly what the substrate looks like. The only way to find out is to build, test, learn, and iterate.
The Burning Platform
I want to be clear about the stakes.
If Australian universities don't adapt, they won't disappear overnight. They'll become increasingly irrelevant. International students will choose institutions that offer AI-augmented learning. Domestic students will question why they're paying for experiences AI provides for free. Employers will develop their own capability assessment. Research funding will flow to institutions that can actually deliver.
The decline will be gradual, then sudden. Like Blockbuster. Like Kodak. Like every institution that saw disruption coming and responded with committees instead of action.
Chinese universities are producing graduates who build AI systems that match Western capabilities at a fraction of the cost. If Australian universities can't compete on AI capability, what exactly is the value proposition?
The burning platform is real. The response requires experimentation at a pace universities have never attempted.
What Would It Take?
To actually appoint a Head of Failure with a real mandate, an Australian university would need:
Vice-Chancellor sponsorship. This can't be a middle-management initiative. It needs top-level protection from the organisational antibodies that will try to kill it.
Ring-fenced resources. Not competing for marginal budget. Dedicated funding for experiments that might fail. Enough to run dozens of experiments, knowing most won't work.
Governance exemptions. Permission to bypass normal approval processes for bounded experiments. If every experiment needs committee approval, you won't run experiments.
Cross-functional authority. The Head of Failure needs to work across faculties, not within one silo. Experimentation that's confined to a single department won't transform the institution.
Celebrated failure. Public acknowledgement when experiments don't work. Case studies of what was learned. Promotion criteria that reward experimentation velocity, not just successful outcomes.
Measurement infrastructure. You can't learn from experiments you don't measure. Investment in data capability to actually know what's working and what isn't.
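As a sketch of where that measurement infrastructure might start, here's a minimal experiment registry. The `Experiment` fields and the sample pilot are assumptions for illustration, not a standard schema; the design choice that matters is that failed experiments are recorded with their learnings rather than quietly dropped.

```python
# A minimal sketch of an experiment registry: every experiment gets a
# hypothesis, a metric, and a recorded result, whether or not it worked.
# Field names and the sample pilot are hypothetical, not a standard schema.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Experiment:
    name: str
    hypothesis: str
    metric: str                      # what is measured, e.g. completion rate
    baseline: float                  # the metric's value before the experiment
    result: Optional[float] = None   # filled in when the experiment ends
    learning: str = ""               # recorded for failures as well as successes
    started: date = field(default_factory=date.today)

    @property
    def succeeded(self) -> bool:
        return self.result is not None and self.result > self.baseline

registry: list[Experiment] = []

exp = Experiment(
    name="AI tutor pilot",
    hypothesis="AI tutoring support lifts module completion rates",
    metric="module completion rate",
    baseline=0.72,
)
registry.append(exp)

# After the pilot: record the measurement and the learning, even on failure.
exp.result = 0.69
exp.learning = "Completion fell; students bypassed the tutor near deadlines."
print(f"{exp.name}: {'kept' if exp.succeeded else 'killed'} -- {exp.learning}")
```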
Is any Australian university ready to do this? I don't see it. But the first one that does will have a substantial advantage over those still forming committees.
The Alternative
The alternative to experimentation is hope.
Hope that AI development slows down. Hope that students keep paying for experiences they can get elsewhere. Hope that employers keep valuing credentials. Hope that the traditional model survives intact.
Hope is not a strategy.
Bartlett built a media empire by out-failing competitors. His team runs more experiments, faster, learning what works while others are still planning. That's why The Diary of a CEO grew while traditional media declined.
Australian universities can learn from this. Or they can watch from the sidelines while more agile institutions figure out what the future of education looks like.
The choice is experimentation or irrelevance.
And right now, the sector is choosing irrelevance by default.
Jason La Greca
Jason La Greca is the founder of Teachnology and works in educational technology at a major Australian university. He's watched the gap between AI capability and institutional response widen for years. Teachnology helps organisations build the experimentation capability to navigate disruption.
Ready to Build Experimentation Capability?
Take the AI Readiness Assessment or explore Teachnology Advisory to start navigating disruption through systematic experimentation.