Two teenagers spent a weekend learning to build apps with AI. When they described what they learned, neither mentioned technology. One talked about structure and organisation. The other talked about depth and avoiding shortcuts.
They think they learned to code. They actually learned something far more valuable.
The Hidden Curriculum
Listen to how the 18-year-old, Shanzey, describes her takeaway: "The biggest [lesson] was the importance of structure. Giving the AI clear, organised instructions felt a lot like managing the various demands of school: exams, essays, activities, and volunteering. Approaching things systemically is often what leads to success."
She's not talking about prompts. She's talking about systems thinking, the ability to break complex situations into manageable components, understand their relationships, and address them in a coherent order.
Now listen to her 13-year-old brother, Usman: "There are no shortcuts to success. You just have to do it the hard way and learn the hard way. I could also implement this into my daily life in school."
He's not talking about debugging. He's talking about rigour, the recognition that quality results require quality inputs, and that attempting to skip steps usually creates more work, not less.
These are precisely the meta-skills that educators have been trying to teach for decades, usually through abstract exercises that students tolerate but rarely internalise.
AI tools are teaching these skills as a byproduct of doing something students actually want to do.
Why Traditional Education Struggles
Schools try to teach critical thinking through essay assignments. They try to teach systems thinking through group projects. They try to teach precision through maths problems.
The problem is that these are simulations. Students know they're exercises. The feedback is delayed (grades come later), abstract (a letter or a number), and often disconnected from the quality of their thinking (you can get good grades through compliance without genuine understanding).
When you're building an app with AI, the feedback is immediate, concrete, and honest. If your instructions are vague, you get vague output. If you skip a step, the thing doesn't work. If you don't think through the system, the pieces don't fit together.
You can't bullshit your way through a prompt. The AI has no ego to flatter, no authority to defer to, no social pressure to pretend your unclear thinking is actually fine. It just does exactly what you said, which immediately reveals whether what you said made sense.
The Prompt as Diagnostic Tool
Usman described the learning process: "It kind of drove me crazy because I did not know what to do." Whenever he asked the AI to fix a bug, it would generate another one.
This is frustrating in the moment but revelatory in retrospect. Each failed prompt is a diagnostic: where was my thinking unclear? What did I assume that I shouldn't have? What context did I forget to provide?
Over time, he developed a mental model: with practice, he learned what different bugs meant and how to get the AI to resolve them.
This is exactly how expertise develops in any domain, through repeated cycles of action, feedback, and refinement. The difference is that AI compresses these cycles from months to minutes.
A traditional coding education might take years to develop this kind of pattern recognition. These teenagers developed it in a weekend, not because they're exceptional (though they may be), but because the feedback loops are so tight that learning is almost unavoidable.
Communication as Core Competency
The most revealing quote comes from Usman's description of good prompts: "Prompts are supposed to have good details and good information. You have to instruct the AI like a teacher to a student."
Flip that frame for a moment. What he's describing is the ability to explain something so clearly that someone with no context can execute it correctly.
This is the core skill of effective management, clear writing, good teaching, and professional communication. It's the ability to get outside your own head, anticipate what others need to know, and provide it in a structured way.
Most professionals are bad at this. They assume shared context that doesn't exist. They skip steps that seem obvious to them. They communicate in ways that make sense inside their heads but not to anyone else.
AI is unforgiving of these habits. It has no ability to "figure out what you meant." It does what you said. This makes it, accidentally, one of the best communication trainers ever developed.
The Assessment Revolution
Schools are panicking about AI and academic integrity: students using ChatGPT to write essays, assessment becoming meaningless.
But there's another possibility: AI tools could become the assessment. Not the AI's output, but the student's ability to direct it.
Imagine an exam where every student gets the same AI access, and what separates them is the quality of their prompting. Where the evaluation is of their ability to decompose a problem, communicate a solution, iterate based on feedback, and produce a working result.
This would test exactly the skills that matter in professional contexts: not memorised knowledge, but the ability to apply understanding to produce outcomes.
It would also be cheat-proof in a fundamental way. You can't plagiarise the ability to think clearly. You can't copy someone else's capacity for systematic problem-solving. The skill is inseparable from its demonstration.
The Workplace Implications
If AI tools can teach systems thinking, rigorous execution, and precise communication to teenagers in a weekend, what are the implications for professional development?
Most corporate training consists of lectures about concepts that people immediately forget. It's abstract, disconnected from real work, and designed more for compliance documentation than actual skill development.
What if instead, we gave people real problems and AI tools and let them struggle through the tight feedback loops until understanding emerged? What if professional development looked more like a hackathon and less like a webinar?
The teenagers in this story weren't taught meta-skills. They developed them as a byproduct of trying to build something they cared about. The learning was incidental to the doing.
That might be the model: stop trying to teach skills directly and start creating conditions where skills develop as a natural consequence of pursuing meaningful goals.
What We Should Actually Worry About
The concern about AI in education is backwards. We're worried about students using AI to avoid thinking. We should be excited about AI forcing students to think more clearly than they've ever had to.
A student who can decompose a complex problem, communicate requirements precisely, iterate based on feedback, and produce a working solution hasn't cheated the educational process. They've demonstrated exactly what education is supposed to produce.
The question isn't whether AI belongs in education. It's whether we're ready to update our assessments, our pedagogies, and our definitions of competence to match what these tools reveal.
Two teenagers learned more about clear thinking in a weekend than most students learn in a semester of formal instruction. They just thought they were learning to code.
Jason La Greca
Jason La Greca is the founder of Teachnology and works in educational technology at a major Australian university. He's spent twenty years watching schools try to teach meta-skills through abstract exercises, and is fascinated by AI's accidental success at teaching them through concrete experience. Teachnology helps educational institutions redesign learning for the AI era.
Referenced Article
This article references the Business Insider story about Usman and Shanzey Asif, who learned vibe coding and competed in Cursor's 24-hour hackathon in Singapore.
Read the full story on Business Insider.
Ready to Redesign Learning?
Take the AI Readiness Assessment or explore Teachnology Advisory to understand how to teach meta-skills through meaningful making rather than abstract exercises.