Education · 25 min read · 9 February 2026

UNESCO's AI in Education Framework: A Practitioner's Response

UNESCO's AI competency frameworks for students and teachers have good intentions. But in a world where 108,000 jobs were cut in January alone and five-year plans compress to five months, 'fostering readiness' sounds like rearranging deck chairs. A practitioner's critique of what UNESCO got right, where it's hot air, and where it's dangerously behind.

"AI has the potential to address some of the biggest challenges in education today."— UNESCO, AI in Education page, 2024

It sure does. The question is whether UNESCO's framework is up to the challenge, or whether it's a committee-designed roadmap for a world that no longer exists.

I spent a night buried in data. 108,000 US jobs cut in January alone. Hiring plans down 58%. Anthropic's own safety lead resigning because "our wisdom must grow in equal measure to our capacity." Fifty career specialisations collapsing into one meta-skill. And the skills being defunded in Australian universities (the arts, the humanities, the creative disciplines) are precisely the skills AI cannot replace.

Then I read UNESCO's AI competency frameworks for students and teachers. Published during Digital Learning Week 2024.

Some of it is genuinely good. Some of it is diplomatic noise. And some of it is dangerously behind.

Let's be honest about which is which.

1. Where UNESCO Gets It Right

Credit where it's due. UNESCO isn't completely lost.

The Human-Centred Framing

UNESCO's insistence on a "human-centred approach to AI" is correct and important. In a world where AI companies are racing to ship product and capture market share, someone needs to keep saying: this is supposed to serve people, not the other way around.

Their student framework puts "human-centred mindset" as competency number one. Not AI techniques. Not system design. Human agency first. That's the right ordering, and it's worth acknowledging that a major international body chose to lead with values rather than technical skills.

This aligns directly with what we're seeing in practice. Nvidia's 30,000 engineers tripled their code output with AI tools, but the productivity gains freed senior developers for "challenges requiring human ingenuity." The World Economic Forum's 2025 Future of Jobs Report ranks creative thinking, resilience, curiosity, and leadership among the most important skills for 2025-2030. Human-centred isn't just ethical positioning. It's economic reality.

Ethics as a Core Competency

Both the student and teacher frameworks place ethics prominently. Ethics of AI is the second competency in both frameworks. Responsible use, ethics-by-design, safe practices. This isn't decoration. This is necessary.

We know this is necessary because Mrinank Sharma, who led safeguards research at Anthropic, just resigned saying "the world is in peril" and that "throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions." When the safety lead at the company that built Claude says the pressures to "set aside what matters most" are constant, ethics isn't optional. It's survival.

UNESCO is right to make ethics foundational rather than an add-on elective.

"Complement, Not Replace"

The teacher framework's emphasis that "AI tools should complement, not replace, the vital roles and responsibilities of teachers" is exactly right. Not because AI couldn't technically replace some teaching functions. It could. But because the human elements of teaching (relationship, mentorship, emotional attunement, modelling how an adult engages with uncertainty) are precisely what students need more of in an AI-saturated world.

This is one of the few UNESCO positions that's both ethically sound and practically grounded.

Warning Against Over-Reliance

UNESCO "warns against over-reliance on AI in addressing systemic issues in education, such as teacher shortages and infrastructure inadequacies, which require sustained policy attention and investment."

This is quietly excellent advice that will be ignored by every government looking for a cheap fix. AI is not a substitute for actually funding schools, training teachers, and building infrastructure. Saying this publicly, from a position of international authority, has value. Someone needs to keep pointing out that technology doesn't solve political failures.

The Progression Levels

The student framework's three-level progression (Understand, Apply, Create) is pedagogically sound. It mirrors established learning taxonomies and gives curriculum designers a sensible scaffold. Not every student needs to build AI systems. Every student does need to understand what AI is and how to use it responsibly. The tiered approach is smart and practical.

2. Where UNESCO Is Hot Air

Here's where it gets uncomfortable.

"A Roadmap for Countries"

UNESCO says the frameworks "provide a much-needed roadmap for countries to develop AI education strategies that are ethically informed, inclusive, adaptable and forward-looking."

A roadmap. Let's unpack that.

A roadmap tells you how to get from A to B. It has specific routes, distances, landmarks, and turn-by-turn directions. What UNESCO has published is more like a poster of a mountain with the word "SUMMIT" at the top and "YOU ARE HERE" at the bottom. Inspirational? Sure. A roadmap? Not even close.

Where are the implementation timelines? Where are the resource requirements? Where are the model curricula? Where are the case studies of what worked and what didn't? Where are the specific, measurable benchmarks that tell a country whether its teachers are actually developing these competencies or just attending workshops about them?

The frameworks provide categories. Categories are not guidance. Guidance tells you what to do on Monday morning. UNESCO tells you what to think about doing, eventually, when resources permit.

"Foster the Readiness of Education Policy-Makers"

UNESCO says it developed guidance "to foster the readiness of education policy-makers in artificial intelligence." This phrase does an extraordinary amount of nothing.

"Foster readiness" is bureaucratic code for "we published a PDF." Readiness for what, specifically? By when? Measured how? With what resources?

If I told a teacher "I'm going to foster your readiness in AI," they'd rightly ask: "What does that mean? What will I be able to do that I can't do now? How long will it take? What support will I get?" UNESCO's answer to all four questions is essentially: "Please refer to the framework."

"Generate a Shared Understanding"

Another gem, this time the guidance document's stated aim: to "generate a shared understanding of the opportunities and challenges that AI offers for education."

A shared understanding. We're in the middle of the most rapid technological transformation in human history. 108,000 jobs were cut in the US in January. Hiring plans are down 58% from projections. Five-year career plans are compressing to five-month sprints. And the international community's contribution is a shared understanding.

Understanding is step one. We needed step one in 2019 when the Beijing Consensus was published. In February 2026, we need steps four through twelve. The conversation has moved on. UNESCO hasn't.

"Ensuring Universal Access to the Internet"

UNESCO recommends "ensuring universal access to the internet" as part of its AI in education strategy.

I agree with this completely. I also note that UNESCO has been recommending universal internet access for over two decades. The recommendation is correct and also utterly disconnected from any mechanism that would make it happen. It's like recommending peace. Hard to argue with. Impossible to implement by publishing a framework.

"Promoting Environmentally-Friendly AI Practices"

This one is fascinating in its vagueness. What are environmentally-friendly AI practices in education, specifically? Which AI tools are environmentally friendly? What's the carbon footprint benchmark? Should schools prefer smaller models over larger ones? Should they limit student API calls? Should they only use AI during off-peak energy hours?

UNESCO doesn't say. It just "promotes" the concept. This is the equivalent of a corporate sustainability report that says "we are committed to reducing our environmental impact" without a single number, target, or deadline.

The "Comprehensive Strategy" Recommendation

UNESCO "recommends that AI competency frameworks for students and teachers be integrated into a comprehensive strategy for AI capacity building across all educational levels."

Let me translate: "We recommend that you do the hard work of figuring out how to implement this." The recommendation to have a strategy is not itself a strategy. It's a meta-recommendation. It's a recommendation about recommendations. It's frameworks all the way down.

3. Where UNESCO Is Behind

This is the section that matters most. Because being wrong is fixable. Being behind might not be.

The Speed Gap

The Beijing Consensus was published in 2019. In AI years, 2019 is the Mesozoic era.

In 2019, GPT-2 was the state of the art. It could write a few coherent paragraphs that were obviously machine-generated. There were no AI coding assistants. There was no DALL-E. Claude didn't exist. ChatGPT wouldn't launch for another three years. The idea that AI would write 4% of all code committed on major platforms, or that a company's entire engineering force would triple its output using AI tools, would have sounded like science fiction.

UNESCO's 2024 frameworks were an update, but they were built on the conceptual foundations of the Beijing Consensus. The architecture of the thinking is 2019 vintage. The paint is 2024. The structure beneath it is ancient.

Here's the fundamental problem: UNESCO frameworks are designed to be universal, applicable to 193 member states from Singapore to South Sudan. That universality requires abstraction. The abstraction makes them useless for countries that are actually ready to act. And the publication cycle (years of consultation, drafting, review, and approval) means every UNESCO document is obsolete by the time it's published.

According to Epoch AI data, the pace of AI capability improvement nearly doubled in April 2024 alone. The temporal collapse Nate Jones describes, where five-year plans compress to five months, makes any framework with a multi-year development cycle structurally incapable of keeping up.

UNESCO brings a sedan to a Formula 1 race and wonders why it can't keep up.

What They're Missing Entirely

Adaptability Quotient (AQ)

Forbes published Dr Jason Walker's piece on AQ in February 2026, drawing on research from AQai, IBM, Deloitte, and Goldman Sachs. The framework is clear: IQ gets you hired, EQ helps you succeed, AQ determines if you survive.

IBM found that "willingness to be flexible, agile and adaptable to change" is the number one most critical skill for the workforce, up from number four in 2016. Deloitte changed its entire recruitment process to assess AQ through immersive simulations. Goldman Sachs now screens for it in hiring.

UNESCO's frameworks mention "adaptability" in passing. They don't treat it as the defining competency of the AI age. They should. AQ maps directly to every capability that matters: grit, mental flexibility, growth mindset, resilience, and critically, the ability to unlearn.

Unlearning is the one UNESCO completely misses. Their frameworks are all about learning new things. But the hardest and most important capability in a period of rapid change is letting go of what you already know that's no longer true. Letting go of assumptions about career paths, about what constitutes expertise, about how education works. Teachers who can't unlearn "I'm just a teacher" will never transition. Organisations that can't unlearn "we've always done it this way" will never transform.

The Jobs Crisis

108,000 US jobs cut in January 2026. 118% increase from January 2025. Hiring plans down 58%. 7,624 cuts explicitly cited AI as the reason, but as the Challenger report notes, the real AI displacement is hidden in the other 93% of cuts labelled "restructuring" or "efficiency."

UNESCO's frameworks exist in a world where AI is an educational tool to be understood and used responsibly. They don't exist in a world where AI is actively restructuring labour markets, eliminating career paths, and creating urgency that makes "fostering readiness" sound like rearranging deck chairs.

The frameworks should be screaming about this. They should be saying: the students you're educating today will enter a labour market that looks nothing like the one you're preparing them for. The competencies we're recommending aren't nice-to-haves. They're economic survival skills. The timeline isn't "integrate over the coming years." It's now.

The Arts Defunding Paradox

Australia's Job-Ready Graduate scheme doubled arts fees ($17,399 for humanities vs $4,738 for maths). 48 creative arts degrees have been axed since 2018. Year 12 arts enrolments have collapsed 21%. The government invested $75.6 million in STEM education initiatives and zero in arts.

This is happening while Nvidia's engineers, freed by AI from routine coding, are being redirected to tasks "requiring human ingenuity." While the WEF ranks creative thinking as the top skill for 2025-2030. While Harvard identifies creativity, meaning-making, and judgement as uniquely human capabilities.

The skills being defunded are the skills AI can't replace. UNESCO says nothing about this. Nothing about the perverse incentive structures that are directing students away from the very capabilities that will define their value in an AI economy. Nothing about the fact that an AI safety researcher at Anthropic just resigned to pursue poetry because he believes "poetic truth" is "equally valid" to scientific truth as a way of knowing.

The humanities crisis isn't a sidebar to the AI education story. It IS the AI education story. And UNESCO is silent.

The Assessment Revolution

UNESCO's frameworks describe competencies to develop. They say almost nothing about how to assess them.

This is the fatal gap. Because assessment drives behaviour. If you test recall, students memorise. If you test analysis, students analyse. If you test nothing meaningful, students learn nothing meaningful.

Traditional assessment, the kind UNESCO implicitly assumes will continue, measures the exact capabilities AI already has. Every multiple-choice test, every recall-based exam, every "write a 2,000-word essay" assignment is now testing students against a machine that does it faster, cheaper, and often better.

We've identified seven capabilities that actually matter in an AI-first world: judgement, taste, accountability, emotional intelligence, practical application, creative direction, and ethical reasoning. Each is assessable. Each is what employers desperately want. And almost none of them appear on a standard university transcript.

UNESCO mentions "ethics" as a competency but doesn't explain how you assess ethical reasoning in practice. They mention "problem-solving" and "creativity" but don't address the fact that current assessment systems actively penalise both (by rewarding conformity to rubrics rather than genuine creative risk-taking).

The assessment question is the implementation question. And UNESCO doesn't answer it.

The Temporal Collapse

Nate Jones describes two simultaneous collapses happening in careers right now. The horizontal collapse: fifty specialisations converging into one meta-skill (orchestrating AI agents to get work done). The temporal collapse: the leverage you thought you'd build over five years is compressing into five months.

UNESCO's frameworks are designed for a world with stable career categories and predictable skill development timelines. That world is gone. Engineer, product manager, marketer, analyst, designer, ops lead... these are converging into a single meta-competency. The preparation timeline isn't "across all educational levels" over multiple years. It's now, this semester, this month.

"I'll get to AI eventually" is now the most expensive career decision a person can make. UNESCO's framework reads like it was written for people who have the luxury of eventually.

The Subscription Model Opportunity

Here's something UNESCO hasn't even imagined: higher education's biggest revenue opportunity is becoming the intellectual substrate that grounds AI in verified knowledge.

Universities have centuries of peer-reviewed research, domain expertise, ethical frameworks, and institutional memory. AI companies desperately need this to reduce hallucination and improve reasoning. This isn't charity. It's a business worth $32-125 million per university per year in new revenue streams.

The subscription model ($200/month for ongoing access to a global Socratic community with live seminars, research updates, expert Q&A, stackable micro-credentials, and cross-cultural cohort debates) is sitting there. Coursera does $700 million. MasterClass does $200 million. No university has captured this market yet.

UNESCO's framework thinks about AI as a tool for education. It doesn't think about education as the grounding layer for AI. This is a massive conceptual failure. The relationship between AI and education isn't one-directional. Universities don't just consume AI. They can, and should, anchor it.

4. What UNESCO Should Be Saying Instead

Based on the research stack from tonight, here's what an honest, urgent, actually useful AI education framework would look like.

Principle 1: Wisdom Must Match Capability

Borrow Mrinank Sharma's words directly: "We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world."

This should be the guiding principle of every AI education framework. Not "human-centred approach," which is vague. Wisdom must match capability. That's specific, measurable, and urgent. It means: for every AI capability you develop in students, you must develop corresponding wisdom about when to use it, when not to, and what the consequences are.

It also means: the humanities, the arts, philosophy, ethics, literature, history... these aren't nice-to-haves alongside AI literacy. They ARE the wisdom. Defunding them while building AI capability is like building a car with a bigger engine and removing the steering wheel.

Principle 2: Assess What Actually Matters

Stop recommending competencies without assessment frameworks. Here are the seven capabilities that should form the backbone of AI-age education, with specific, practical assessment models:

Judgement: Case-based decision simulations where there's no right answer. Real stakeholder presentations. Decision portfolios with outcome reflections. Grade the quality of reasoning, not the conclusion.

Taste: Curation portfolios. "Improve this AI output" exercises. Comparative critique assignments. The ability to distinguish between adequate and excellent is the most economically valuable skill in a world drowning in AI-generated content.

Accountability: Team projects with individual accountability frameworks. Public failure analyses. Ethical dilemma portfolios. Owning outcomes, not just completing tasks.

Emotional Intelligence: Structured peer evaluations with behavioural indicators. Conflict resolution simulations. Cross-cultural collaboration projects. Mentoring portfolios.

Practical Application: Industry placements with real deliverables. Community impact projects. Client-facing work. Testing in production, not in the lab.

Creative Direction: Creative briefs plus execution. Directing AI tools to produce specific outcomes. Process-documented portfolios. The skill is envisioning what doesn't exist and guiding its creation.

Ethical Reasoning: Ethics committees. Technology impact assessments. Debates with genuine moral ambiguity where both sides have merit. Policy proposals for genuinely hard problems.

Every one of these is assessable at scale. Every one of these is what employers want. UNESCO could publish these assessment frameworks instead of abstract competency categories and transform education overnight.
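To make "assessable at scale" concrete, here is a minimal sketch of how one of these capabilities (judgement) might be encoded as a spectrum-based rubric so that assessors grade the quality of reasoning rather than the conclusion. The capability and the grading principle come from the list above; the dimension names, level descriptors, and weights are hypothetical, invented purely for illustration.

from dataclasses import dataclass

# Hypothetical spectrum-based rubric for the "judgement" capability.
# Dimensions, descriptors, and weights are illustrative only.
@dataclass
class Dimension:
    name: str
    levels: list[str]   # descriptors from weakest (index 0) to strongest
    weight: float       # relative importance; weights sum to 1.0

JUDGEMENT_RUBRIC = [
    Dimension("framing",
              ["restates the case",
               "identifies the real decision",
               "surfaces hidden stakeholders and constraints"],
              0.3),
    Dimension("use of evidence",
              ["cherry-picks",
               "weighs competing evidence",
               "states what evidence would change the decision"],
              0.4),
    Dimension("reflection on outcome",
              ["absent",
               "describes what happened",
               "separates decision quality from outcome luck"],
              0.3),
]

def score(ratings: dict[str, int]) -> float:
    """Weighted score in [0, 1]. Only the reasoning is rated;
    the conclusion the student reached is never scored."""
    return sum(d.weight * ratings[d.name] / (len(d.levels) - 1)
               for d in JUDGEMENT_RUBRIC)

# Example: one decision-portfolio entry as an assessor might rate it.
print(score({"framing": 2, "use of evidence": 1, "reflection on outcome": 2}))  # 0.8

The point is not this particular rubric. The point is that each capability can be expressed as a handful of dimensions with observable descriptors, which is what makes moderation across assessors, and therefore scale, possible.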

Principle 3: AQ is the Core Competency

Adaptability Quotient should be at the centre of any AI education framework. Not mentioned in passing. At the centre.

The AQ model from AQai identifies specific, measurable, trainable components: grit, mental flexibility, mindset, resilience, and unlearning. IBM says it's the number one workforce skill. Deloitte redesigned hiring to assess it. Goldman Sachs screens for it.

A framework that doesn't treat adaptability as the defining capability of the AI age is a framework for a world that no longer exists.

Teachers already have high AQ potential. They adapt daily: 30 kids, no resources, constant policy changes, new curricula every few years. But they don't recognise it. An AI education framework should help them see it and deploy it, both in their teaching and in their own career development.

Principle 4: Timelines Measured in Months, Not Years

The temporal collapse is real. AI capability improvement is accelerating. Five-year plans compress to five months. "Eventually" is the most expensive word in education right now.

Any useful framework must include:

  • What schools should do this term (not this decade)
  • What teachers should learn this month (not this professional development cycle)
  • What students should be assessed on this semester (not when the curriculum review is complete)

UNESCO's multi-year consultation and publication cycle produces guidance that's stale on arrival. If the framework can't match the speed of the technology it's trying to address, it's not guidance. It's history.

Principle 5: Education as AI Infrastructure, Not Just AI Consumer

The biggest conceptual shift UNESCO is missing: education isn't just a sector that uses AI. It can be the intellectual infrastructure that makes AI trustworthy.

Universities produce peer-reviewed research. They employ domain experts. They maintain ethical review processes. They have centuries of verified knowledge. AI companies need all of this to reduce hallucination, improve reasoning, and build systems people can actually trust.

A real framework would position education systems as knowledge partners with AI developers, not passive recipients of AI tools. It would identify specific mechanisms: research licensing, expert evaluation services, knowledge graph construction, ethical review services. It would model the revenue: $32-125 million per university in new revenue streams. It would propose the institutional infrastructure: the Group of Eight collective subscription platform that could generate $480 million per year.

This isn't a fantasy. OpenAI already invested $50 million in NextGenAI, a university research consortium. Google's university partnerships were "instrumental to some of this year's most exciting frontier research." The money is flowing. UNESCO just isn't pointing it out.

Principle 6: Fund What AI Can't Replace

Any framework that doesn't address the arts defunding crisis is complicit in it.

The world is drowning in AI-generated content. The scarce resource is no longer production. It's taste, judgement, and curation. These are arts and humanities skills. Harvard says so. The WEF says so. Nvidia's engineers are being redirected toward tasks requiring "human ingenuity." An AI safety researcher resigned from Anthropic to pursue poetry.

A real framework would say, explicitly: defunding humanities and arts education while building AI capability is economically irrational and strategically dangerous. It would recommend fee parity across disciplines. It would recommend matching STEM investment with arts investment. It would name the paradox: the skills being priced out of reach are the skills the AI economy values most.

5. The Teachnology Counter-Thesis

Here's where we stand in relation to UNESCO. Not against them. But ahead of them.

Where We Agree

We agree on the human-centred principle. We agree that ethics must be foundational, not bolted on. We agree that AI should complement teachers, not replace them. We agree that over-reliance on AI for systemic problems is dangerous.

These aren't trivial agreements. UNESCO has used its institutional weight to establish baseline principles that many governments would otherwise ignore. That has value.

Where We Diverge

Specificity. UNESCO publishes frameworks. We publish assessment rubrics, implementation timelines, revenue models, and specific case studies. Our higher education report doesn't say "integrate AI competency into a comprehensive strategy." It says: deploy an enterprise AI platform in Year 1, launch cross-disciplinary programs in Year 2, achieve 30%+ revenue from non-traditional sources by Year 5. With dollar amounts. With named examples. With step-by-step transitions.

Speed. UNESCO operates on consultation cycles measured in years. We operate on a research cycle measured in hours. Tonight's research stack (108K jobs data, AQ framework, Anthropic resignation, temporal collapse, assessment revolution, intellectual substrate model) was synthesised into actionable intelligence in a single evening. UNESCO's framework for the same territory took years and still doesn't address half of it.

Accountability. UNESCO recommends. We build. Our assessment revolution chapter doesn't just say "assess ethical reasoning." It provides four specific assessment models, explains how to grade them, addresses the calibration challenge, and identifies universities already doing it. The gap between "recommending that countries develop strategies" and "here's the strategy, here's the rubric, here's the timeline" is the gap between UNESCO and Teachnology.

Where We Go Further

The seven capabilities. UNESCO identifies broad competency areas. We identify the specific human capabilities AI cannot replace, with practical assessment models for each. Judgement. Taste. Accountability. Emotional intelligence. Practical application. Creative direction. Ethical reasoning. Each with detailed how-to-assess guidance, spectrum-based rubrics, and real-world examples.

The intellectual substrate. UNESCO thinks about AI in education. We think about education in AI. The intellectual substrate model (universities as the grounding layer for AI reasoning) is a $32-125 million per university opportunity that UNESCO hasn't even imagined. Research licensing. Subscription communities. Corporate partnerships. Consulting. IP commercialisation. Six revenue streams per institution that transform universities from degree factories into knowledge infrastructure.

The subscription model. UNESCO talks about lifelong learning. We've modelled it: $200/month, global Socratic community, live seminars with researchers, stackable micro-credentials, cross-cultural cohort debates. The Group of Eight could generate $480 million per year in recurring revenue from a 5% alumni conversion rate. Coursera proved the market. MasterClass proved the demand. Universities just need to build the product.
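The $480 million figure is straightforward to sanity-check. Here is a minimal back-of-envelope sketch using only the numbers quoted above ($200 per month, a 5% alumni conversion rate, a $480 million annual target); the roughly four-million-strong Go8 alumni pool it implies is derived here, not stated in the piece.

# Back-of-envelope check on the Group of Eight subscription model.
# Inputs from the text: $200/month, 5% alumni conversion, $480M/year.
monthly_fee = 200
annual_revenue_target = 480_000_000
conversion_rate = 0.05

revenue_per_subscriber = monthly_fee * 12                    # $2,400 per year
subscribers_needed = annual_revenue_target / revenue_per_subscriber
implied_alumni_pool = subscribers_needed / conversion_rate   # derived, not from the text

print(f"Subscribers needed: {subscribers_needed:,.0f}")        # 200,000
print(f"Implied Go8 alumni pool: {implied_alumni_pool:,.0f}")  # 4,000,000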

AQ as a measurable framework. UNESCO mentions adaptability in passing. We've mapped it as the central competency for the AI age, with specific sub-dimensions (grit, mental flexibility, mindset, resilience, unlearning), assessment approaches, and connections to every other capability that matters.

The arts paradox. UNESCO is silent on the systematic defunding of the skills AI can't replace. We've named it, quantified it ($75.6 million for STEM, zero for arts), and proposed specific policy responses (fee parity, matched investment, integration into professional programs).

What Teachnology Can Offer That UNESCO Can't

Speed. We can publish analysis of today's developments tomorrow. UNESCO publishes analysis of last year's developments next year. In an environment where AI capability doubles annually, speed is substance.

Specificity. We build for practitioners. Teachers who need to know what to do on Monday. Universities that need revenue models with dollar amounts. L&D directors who need assessment frameworks they can implement this semester. UNESCO builds for policy-makers who need to understand the landscape. Both have value. Only one changes practice.

Products. UNESCO publishes PDFs. We build tools. Assessment frameworks. AQ quizzes. Course modules. Skool communities. Newsletter series. Consulting packages. The distance between understanding and action is a product.

Practitioner credibility. We work with teachers, institutions, and organisations every week. We see what's working and what isn't. UNESCO convenes experts who study what might work. The difference between observing education and doing education is the difference between a framework and a solution.

The Bottom Line

UNESCO is trying. Genuinely. In a world of AI hype and panic, their insistence on human-centred, ethical, inclusive approaches is valuable. Their institutional authority gives their positions weight. Their global reach means their frameworks influence policy in 193 countries.

But trying isn't enough when 108,000 jobs are cut in a single month. When hiring plans drop 58%. When the safety lead at a major AI company resigns because wisdom isn't keeping pace with capability. When fifty career specialisations collapse into one meta-skill and five-year plans compress to five months.

The world needed UNESCO's 2024 frameworks in 2019. By 2026, we need something that moves at the speed of the problem.

Here's the challenge, for UNESCO and for everyone working in this space: stop publishing frameworks about what to think. Start publishing playbooks for what to do. Stop recommending that countries develop strategies. Start providing the strategies. Stop fostering readiness. Start building capability.

The assessment revolution isn't coming. It's here. The intellectual substrate opportunity isn't theoretical. OpenAI is already investing $50 million in university partnerships. The arts defunding crisis isn't a policy debate. It's 48 axed creative arts degrees and a 21% collapse in enrolments. The jobs crisis isn't a future risk. It's 108,000 cuts in January with hiring plans down 58%.

Readiness was yesterday's goal. Implementation is today's.

The institutions and practitioners who build real things (real assessment models, real revenue streams, real capabilities in real students) will define the next era of education. The ones who publish frameworks about building things will watch from the sidelines.

We know which side Teachnology is on.


This critique draws on research conducted 10 February 2026, including Teachnology's "How Higher Education Can Thrive in an AI-First World" report package.

Tags: Education, AI Strategy, Policy, Assessment

Written by

Jason La Greca

Founder of Teachnology. Building AI that empowers humans, not replaces them.

Connect on LinkedIn

Is your organisation building capability or just buying it?

Take the free 12-minute Capability Assessment and find out where you stand. Get a personalised report with actionable recommendations.
