Danny Liu doesn't think Australian universities have an AI cheating problem.
He thinks they have an assessment problem. And he's right. But he's not going far enough.
Liu, Professor of Educational Technologies at the University of Sydney and co-chair of its AI in Education working group, recently responded to a series of pieces in The Australian claiming that 80 to 90 per cent of students are using AI to cheat on take-home assessments. His LinkedIn article, "Are Australian universities doing enough to ensure the value of their programs?", pushed back on the predictable chorus of voices calling for a return to pen-and-paper exams.
His argument is sharp. Pen-and-paper exams are a 1,300-year-old solution. They ignore teamwork, communication, and every real-world skill that actually matters in a professional context. They test memory recall in a world where memory recall is the one thing AI does better than any human ever will.
USyd's response has been what Liu calls a "two-lane approach": one lane focused on integrity (can we trust this result?) and another on relevance (does this assessment actually measure what matters?). Other institutions are experimenting too. Deakin has supervised AI-use assessments. ICMS runs oral defences. Portfolio-based approaches and authentic tasks are cropping up across the sector.
All of this is good. Genuinely. But I think the sector is still circling the edges of a much bigger question.
It's not "how do we stop AI cheating?"
It's "what are we actually certifying?"
This Is a Certification Crisis
New York Magazine's viral piece "Everyone Is Cheating Their Way Through College" painted a grim picture: students using ChatGPT to draft entire assignments, coast through coding projects, and summarise textbooks they never opened. The framing was predictable. Students are lazy. Standards are slipping. The academy is under siege.
But the piece accidentally revealed something far more interesting than student misconduct. It revealed that the gap between "completing an assessment" and "being capable of doing the work" has been growing for decades. AI didn't create that gap. It just made it impossible to ignore.
Think about what a university degree is supposed to certify. Not that someone can write an essay under timed conditions. Not that they can memorise a textbook. It's supposed to certify that this person is capable of performing at a professional level in their chosen field.
Now ask yourself: how much of current university assessment actually tests that?
If a student can use ChatGPT to pass your assessment, the problem isn't the student. The problem is that your assessment was testing something a machine can do. And if a machine can do it, it was never a meaningful measure of human capability in the first place.
Danny Liu knows this. His two-lane approach is an honest attempt to wrestle with it. But the conversation needs to go further than assessment redesign within existing structures. It needs to challenge what certification means at a systemic level.
The Same Crisis Exists in Schools
I spent years in the classroom before I moved into product. I know what it looks like when an assessment system is designed to measure compliance rather than capability, because I lived inside one.
In K-12, we've been running the same playbook for generations. Standardised tests. Written exams. Content recall. Teachers spending their weekends marking assignments that measure a student's ability to follow instructions rather than their ability to think, create, or solve real problems.
When I recently wrote about the AARE data showing that 39 per cent of Australian teachers plan to leave the profession before retirement, the response was enormous. And one of the recurring themes in those conversations was this: teachers know the assessment system is broken. They've known for years. They just don't have the power to fix it.
AI didn't break school assessments any more than it broke university assessments. It exposed the same fundamental flaw at every level of education: we've been certifying compliance, not capability. And now that a machine can comply better than any student, the whole edifice is cracking.
The difference is that universities have the institutional power, the research capacity, and the governance structures to actually do something about it. The question is whether they will.
What Capability-Based Certification Would Actually Look Like
If we're serious about university assessment reform, we need to stop tinkering with formats and start redesigning what we're measuring. Capability-based assessment doesn't mean bolting oral defences onto existing assignments. It means fundamentally rethinking what a qualification certifies.
Here's a practical framework. Four things universities could start doing right now.
1. Define capability outcomes, not content outcomes.
Every program should be able to answer this question: "What can a graduate of this program actually do?" Not "what do they know?" but "what can they do, in a real-world context, that they couldn't do before?" If the answer is "write an essay about topic X", that's a content outcome. If the answer is "analyse a complex problem in field Y, evaluate competing approaches, and recommend a course of action with supporting evidence", that's a capability outcome. The second one is AI-resistant by design, because it requires judgement, not just output.
2. Build assessment around demonstration, not production.
The essay is not the capability. The essay is a proxy for the capability. And it's a proxy that AI has made worthless. Instead of asking students to produce artefacts that machines can now generate, ask them to demonstrate capability in contexts where the process matters as much as the product. Oral defences, live problem-solving, collaborative projects with peer evaluation, supervised design challenges. These aren't new ideas. They're just ideas that most institutions have been too structurally rigid to implement at scale.
3. Assess longitudinally, not episodically.
USyd's program-level assessment design is heading in the right direction here. A single exam at the end of semester tells you almost nothing about capability. A portfolio that tracks development across an entire program, with regular validation checkpoints, tells you everything. This is harder to administer. It requires coordination across units and faculties. But it's the only model that actually mirrors how capability develops in the real world: progressively, iteratively, and in context.
4. Involve industry in the certification conversation.
Universities are not the only stakeholders in what a degree certifies. Employers are. Professional bodies are. The communities that graduates serve are. If higher education governance is serious about maintaining the value of Australian qualifications, it needs to open the certification conversation beyond the academy. What do employers actually need graduates to be able to do? How would they assess that? The answers might be uncomfortable, but they'd be honest.
The Challenge
Danny Liu and his colleagues at USyd are doing important work. So are the teams at Deakin, ICMS, and the growing number of institutions that are taking this seriously. The two-lane approach is a genuine contribution to the field.
But assessment redesign within existing structures will only take us so far. The deeper question, the one that most universities are still avoiding, is whether the current model of certification is fit for purpose at all.
AI didn't create this crisis. It revealed it. And the institutions that respond by trying to make their existing assessments AI-proof are going to find themselves in an arms race they cannot win.
The institutions that respond by asking "what are we actually certifying, and does it matter?" are the ones that will still be relevant in ten years.
The question isn't whether students are cheating. The question is whether we've been certifying the wrong things all along.
And if the answer is yes, then the urgency isn't about catching cheaters. It's about rebuilding the entire system of trust that makes a qualification worth having.
Written by
Jason La Greca
Founder of Teachnology. Building AI that empowers humans, not replaces them.
Connect on LinkedIn

Is your organisation building capability or just buying it?
Take the free 12-minute Capability Assessment and find out where you stand. Get a personalised report with actionable recommendations.