Learning Science · 10 min read · December 2024

AI Should Make Learning Harder, Not Easier

The entire EdTech industry has it backwards. Friction isn't the enemy of learning; it's the point.

Here's the sales pitch you've heard a thousand times:

"Our AI makes learning easy! Students get instant answers. No more struggling with difficult concepts. Personalised support available 24/7. Frictionless education for the modern learner!"

It sounds great. It's also completely wrong.

If your goal is actually helping people learn (not just helping them complete assignments), then making things easier is often the worst thing you can do.

The struggle is where learning happens. Remove the struggle, and you remove the learning.


The Uncomfortable Research

Educational research has known this for decades, but it's deeply unfashionable to say it out loud:

Desirable difficulties improve long-term retention.

When learning feels easy, it often isn't sticking. When it feels hard, that's frequently when the real encoding is happening. The discomfort is a feature, not a bug.

Retrieval practice beats re-reading.

Testing yourself (struggling to recall information) is far more effective than passively reviewing material. The effort of retrieval strengthens memory in ways that effortless review doesn't.

Spaced repetition works because it's harder.

Spreading practice over time means you're always working at the edge of forgetting. That's uncomfortable. It's also dramatically more effective than massed practice.
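
To make the mechanism concrete, here's a minimal sketch of the expanding-interval scheduling that underlies most spaced-repetition systems. The specific intervals and the reset-on-failure rule are illustrative assumptions, not a particular published algorithm:

```python
from datetime import date, timedelta

# Illustrative review intervals in days. Real systems (SM-2 variants,
# for example) tune these per learner, but the expanding shape is the point.
INTERVALS = [1, 3, 7, 14, 30, 90]

def next_review(stage: int, recalled: bool, today: date) -> tuple[int, date]:
    """Advance to a longer interval on successful recall; drop back to
    the start on failure, so practice stays near the edge of forgetting."""
    stage = min(stage + 1, len(INTERVALS) - 1) if recalled else 0
    return stage, today + timedelta(days=INTERVALS[stage])

stage, due = next_review(stage=0, recalled=True, today=date.today())
print(f"Next review in {INTERVALS[stage]} days, on {due}")
```

The exact numbers don't matter. What matters is that every successful recall pushes the next review further out, so the learner is always retrieving just before they'd forget.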

Generation effects are real.

When you generate an answer yourself (even if you get it wrong) you learn more than when you're simply given the answer. The act of production, of struggling to create, changes how information is encoded.

Productive failure outperforms direct instruction.

Students who struggle with a problem before being taught the solution often outperform students who receive instruction first. The initial failure creates a mental framework that makes subsequent learning more meaningful.

This isn't controversial in learning science. It's well-established. But it runs directly counter to what most EdTech companies are building, and what most AI education tools promise.


What AI Is Actually Doing

Watch how most students use AI for learning:

1. They encounter a difficult concept.
2. They ask the AI to explain it.
3. They read the explanation.
4. They feel like they understand.
5. They move on.

What's missing? The struggle. The attempt before the answer. The productive failure that creates the mental hooks for understanding to attach to.

The AI short-circuits the learning process. It delivers the answer before the question has fully formed. It provides clarity before confusion has done its work.

Students feel like they're learning because understanding feels good. But feeling like you understand and actually understanding are different things.

The research calls this the "fluency illusion": when information goes in easily, we assume it's sticking. Often it isn't.

AI-assisted learning, done badly, is a fluency illusion machine. It makes everything feel easy while ensuring nothing is retained.


The Completion Trap

Here's what's actually happening in education right now:

Students are using AI to complete assignments faster. Educators see completion rates going up. Everyone assumes this means learning is improving.

It doesn't.

Completion is not learning. An assignment exists to create a learning experience; the struggle of doing it is the point. When AI does the struggling, the student gets the completion without the learning.

It's like hiring someone to go to the gym for you. You can say you "worked out" because someone did exercises in your name. But your muscles didn't get stronger. The work didn't happen to you.

Universities are measuring the wrong thing. They're tracking outputs (assignments submitted, courses completed, credentials awarded) when they should be tracking capability (can the student actually do the thing they supposedly learned?).

AI makes the outputs easier to produce while potentially making the capability development worse. And nobody's noticing because nobody's looking at the right metrics.


The Productive Struggle Framework

Let me introduce a concept I've been developing: productive struggle.

Not all struggle is good. Banging your head against a wall with no support isn't productive; it's just frustrating. But the right amount of struggle, with the right scaffolding, is essential for learning.

Productive struggle has three characteristics:

1. It's calibrated to the learner.

The difficulty is at the edge of the student's current capability: hard enough to require effort, not so hard that it's demoralising. This is Vygotsky's zone of proximal development, and it's different for every learner.

2. It's supported, not eliminated.

The learner has access to help, but the help doesn't remove the need for effort. Think of it like a climbing wall with a safety harness: you can fall, but you won't die. The harness doesn't climb for you.

3. It's meaningful.

The struggle connects to something the learner cares about. It's not arbitrary difficulty for its own sake. There's a reason to push through.

AI could be brilliant at enabling productive struggle. It could calibrate difficulty in real-time. It could provide scaffolding that supports without replacing effort. It could make struggle meaningful by connecting it to learner goals.
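
As a sketch of what real-time calibration could look like, here's a toy difficulty controller. The 60-80% target band and the step size are my illustrative assumptions, not established constants:

```python
from collections import deque

class DifficultyCalibrator:
    """Keeps a rolling window of recent outcomes and nudges difficulty so
    success stays in a target band: hard enough to require effort, not so
    hard that it's demoralising. Band and step size are illustrative."""

    def __init__(self, window: int = 10, low: float = 0.6, high: float = 0.8):
        self.outcomes = deque(maxlen=window)
        self.low, self.high = low, high
        self.difficulty = 1.0  # arbitrary starting level

    def record(self, succeeded: bool) -> float:
        self.outcomes.append(succeeded)
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate > self.high:       # too easy: likely a fluency illusion
            self.difficulty += 0.1
        elif rate < self.low:      # too hard: risk of unproductive struggle
            self.difficulty = max(0.1, self.difficulty - 0.1)
        return self.difficulty
```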

Instead, most AI tools do the opposite. They eliminate struggle entirely because that's what feels good in the moment, and what's easy to sell.


What Good AI-Assisted Learning Looks Like

Here's how AI should work in education:

It should ask questions before giving answers.

When a student asks "What's the answer to this?" the AI's first response should be "What have you tried so far?" or "What do you think the answer might be?" Force the generation attempt before providing information.
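
A minimal sketch of that gating logic follows. In practice the policy would live in an LLM system prompt rather than hard-coded replies; the wording here is purely illustrative:

```python
def tutor_reply(question: str, attempts: list[str]) -> str:
    """Refuse to reveal an answer until the student has generated at
    least one attempt. Replies are illustrative placeholders."""
    if not attempts:
        return "What have you tried so far? Give me your best guess first."
    if len(attempts) == 1:
        return ("Good start. What happens if you test that against a "
                "simple example before I weigh in?")
    return f"Here's feedback on your latest attempt: {attempts[-1]!r} ..."

print(tutor_reply("What's the derivative of x^2?", attempts=[]))
```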

It should provide hints, not solutions.

Point the learner in the right direction without doing the work for them. "Have you considered looking at this from the perspective of X?" is better than "Here's the complete answer."

It should make the student do the synthesis.

Instead of providing a perfect summary, provide raw materials and ask the student to synthesise. The act of synthesis is where understanding develops.

It should track struggle, not just completion.

How long did the student work on this before asking for help? How many attempts did they make? Did they engage with the difficulty or immediately outsource it? These are the metrics that matter.
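
Here's a sketch of what such a record might look like. Every field name and threshold is hypothetical, chosen to illustrate measuring process rather than output:

```python
from dataclasses import dataclass

@dataclass
class StruggleRecord:
    """Process metrics for one task: effort before help, not just completion."""
    task_id: str
    seconds_before_first_help: float = 0.0
    attempts: int = 0
    hints_requested: int = 0
    completed: bool = False

    def engaged_with_difficulty(self) -> bool:
        # Crude proxy: at least two attempts, and the student worked for
        # a couple of minutes before outsourcing the problem.
        return self.attempts >= 2 and self.seconds_before_first_help >= 120
```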

It should deliberately introduce difficulty.

Space out practice. Interleave topics. Require retrieval instead of recognition. Make the easy path unavailable. The AI should be a struggle-choreographer, not a struggle-eliminator.
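
As an example of the interleaving piece, here's a toy scheduler that mixes topics instead of drilling one to completion before the next. Real systems would also weight by spacing and error rate; this is just the shape of the idea:

```python
import random

def interleave(topic_items: dict[str, list[str]], seed: int = 0) -> list[str]:
    """Build a practice sequence that avoids repeating a topic twice in a
    row wherever possible, in contrast to blocked (massed) practice."""
    rng = random.Random(seed)
    pools = {topic: list(items) for topic, items in topic_items.items()}
    sequence, last_topic = [], None
    while any(pools.values()):
        # Prefer any topic other than the one just practised.
        candidates = [t for t, items in pools.items() if items and t != last_topic]
        topic = rng.choice(candidates) if candidates else last_topic
        sequence.append(pools[topic].pop())
        last_topic = topic
    return sequence

print(interleave({"algebra": ["a1", "a2"], "geometry": ["g1", "g2"]}))
```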

It should distinguish between productive and unproductive struggle.

Some struggle is just spinning wheels. Good AI identifies when a student is stuck in an unproductive loop and provides targeted support, without eliminating the productive difficulty.
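
One crude heuristic for that distinction, with thresholds that are assumptions rather than empirically derived (a real system would look at richer signals than repeated identical attempts):

```python
def is_stuck(recent_attempts: list[str], minutes_elapsed: float) -> bool:
    """Flag likely unproductive struggle: near-identical attempts repeated
    over a long stretch suggest spinning wheels, not progress."""
    repeated = len(recent_attempts) - len(set(recent_attempts))
    return minutes_elapsed > 15 and repeated >= 2
```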

This is harder to build than "here's the answer to your question." It's also harder to sell. "Our AI makes you struggle more!" isn't a great marketing tagline.

But it's what actually works.


Why This Matters for Ink Wise

This is exactly why I'm building Ink Wise.

Ink Wise isn't an AI that writes for students. It isn't a detector trying to catch cheaters. It's a tool that helps educators understand how students write and develop their capabilities over time.

The philosophy behind it is productive struggle. Writing is a skill that develops through practice: through the effort of putting thoughts into words, struggling with structure, wrestling with clarity. You can't outsource that struggle and still develop the skill.

Most AI writing tools either do the writing for students (killing the learning) or try to detect AI use (an arms race that's already lost). Neither approach addresses the fundamental question: how do we help students become better writers in a world where AI can write for them?

The answer is to focus on the process, not just the output. To make the struggle visible. To help educators understand where students are genuinely developing capability and where they're just producing text.

That's a harder problem than "write my essay for me." It's also the right problem.


The EdTech Industry's Perverse Incentives

Here's why most EdTech gets this wrong:

Easier sells better.

Products that promise to reduce difficulty are easier to market than products that promise appropriate difficulty. "Make learning easy!" beats "Make learning appropriately hard!" in every A/B test.

Engagement metrics reward ease.

EdTech companies track time on platform, completion rates, user satisfaction scores. All of these improve when you make things easier. None of them measure actual learning.

Students want easy.

In the moment, learners prefer less difficulty. They'll choose the AI that gives answers over the AI that asks questions. They'll give better reviews to tools that reduce their workload.

Procurement decisions aren't made by learners.

The people buying EdTech aren't the ones using it. They see demos that look impressive and metrics that look good. They don't see whether students actually learned anything.

The entire incentive structure pushes toward ease, even though ease undermines the actual goal.

This is why Ink Wise isn't designed to make students happy in the moment. It's designed to develop their capability over time. Those are different objectives, and I've chosen the one that actually matters.


What Universities Should Demand

If you're an educator or education leader evaluating AI tools, here's what you should ask:

"How does this tool handle struggle?"

Does it eliminate difficulty or calibrate it? Does it give answers or prompt thinking? Is there any friction in the experience, or is it all frictionless?

"What learning science is this based on?"

Can the vendor cite research on desirable difficulties, retrieval practice, productive failure? Or is the product based on assumptions about what learners want rather than what helps them learn?

"What does this tool measure?"

Completion rates and satisfaction scores are easy to track but largely meaningless. What evidence exists that the tool develops capability?

"What happens when students struggle?"

Does the AI immediately rescue them, or does it let them sit in the discomfort long enough to learn? Is there a deliberate pedagogical approach to difficulty?

"Would this tool work without AI?"

Sometimes the best AI use in education is minimal: providing scaffolding and assessment, not replacing the core work. Be suspicious of tools where the AI does everything.

If vendors can't answer these questions, they're selling snake oil. Pleasant-tasting snake oil, perhaps, but snake oil nonetheless.


The Capability Crisis Coming

Here's what happens if we get this wrong:

A generation of students graduates having used AI to complete every difficult assignment. They have credentials. They lack capabilities. They've never really struggled with hard problems because they always had an AI to struggle for them.

They enter the workforce and discover that jobs require the capabilities they never developed. They can't write clearly because AI always wrote for them. They can't think through complex problems because AI always provided the answers. They can't persist through difficulty because they never had to.

Employers notice. They stop trusting credentials. They develop their own assessments. The value of a university degree (already under pressure) collapses further.

Universities, having optimised for completion over capability, find themselves producing graduates nobody wants to hire.

This isn't inevitable. But it's where current trends lead if we don't change course.


A Different Vision

Here's what I want to see:

AI that acts as a struggle-choreographer. That calibrates difficulty to the learner. That provides scaffolding without eliminating effort. That makes productive struggle possible for everyone, not just those with access to great teachers.

AI that helps educators understand their students: where they're struggling productively, where they're stuck, where they're taking shortcuts that undermine their own development.

AI that makes learning harder in exactly the right ways, while making it more accessible, more personalised, and more effective.

This is harder to build than the current crop of "AI makes learning easy" tools. It's also the only approach that actually works.

We can use AI to create a generation of people who can't do anything because they've never had to struggle. Or we can use it to create better learning experiences than have ever been possible: experiences that develop genuine capability through supported, meaningful difficulty.

The technology is neutral. The choice is ours.

AI in Education · Learning Science · Productive Struggle · EdTech · Capability Development · Ink Wise

Written by

Jason La Greca

Founder of Teachnology. Ink Wise, his AI-powered writing assessment tool, is built on the principle that struggle is essential to learning, and that AI should support that struggle, not eliminate it.

Connect on LinkedIn

Educator who believes capability matters more than completion?

Learn more about Ink Wise and the productive struggle approach.

Visit Ink Wise

Ready to think differently about AI in education?

Take the AI Readiness Assessment to see where you stand.

Start Assessment