Pull up your last developer job posting. I'll wait.
Now count how many requirements are about implementation: languages, frameworks, years of experience with specific tools, ability to write clean code, debugging skills.
Now count how many are about thinking: understanding complex problems, making sound judgments under uncertainty, anticipating failure modes, reasoning about trade-offs, ethical consideration of impacts.
I'm guessing the ratio is about 10:1. Maybe 20:1.
That ratio made sense five years ago.
It's catastrophically wrong today.
The Skills Inversion
For fifty years, the bottleneck in software development was implementation. You had ideas; turning them into working code required specialised skills that took years to develop.
So we optimised hiring for implementation. Can you write code? In which languages? How efficiently? Can you debug? Can you work with this framework, that database, this deployment pipeline?
These were the right questions when implementation was scarce.
Implementation is no longer scarce.
AI can now generate functional code from natural language descriptions. Not perfect code, but working code. The implementation bottleneck has collapsed. What used to take a senior developer a week can often be produced in minutes.
But here's what AI can't do:
- Understand problems deeply
- Reason about consequences
- Make sound judgments about trade-offs
- Anticipate failure modes
- Consider ethical implications
- Decide what should be built in the first place
The bottleneck has moved. The scarce resource is no longer implementation; it's thinking.
And we're still hiring for implementation.
The Manifesto's Warning
The SAFE-AI Manifesto, signed by 49 researchers from institutions worldwide, puts this bluntly:
"What AI cannot do well right now is deep reasoning, domain understanding, and designing safe, sound solutions for everyone. Thus, in the age of AI, software development becomes a mix of domain understanding, some development and greater emphasis on solution verification."
Domain understanding. Solution verification. These are thinking skills, not coding skills.
The manifesto goes further:
"The skillset of software engineers is becoming versatile and generalistic. Coding remains an essential foundation, but it is no longer sufficient on its own."
No longer sufficient. That's academic language for: your current hiring criteria are broken.
What Thinking Skills Actually Look Like
Let me be specific about what I mean by "thinking skills":
Problem decomposition
The ability to take a vague, complex problem and break it into tractable pieces. To identify what's essential versus incidental. To find the structure underneath the chaos.
AI can't do this. It can generate solutions to well-specified problems. It cannot take an ill-defined mess and figure out what the actual problem is.
Requirements reasoning
The ability to identify requirements that stakeholders can't articulate. To ask questions that reveal hidden assumptions. To distinguish between what people say they want and what they actually need.
The manifesto calls this understanding "critical requirements": the things that absolutely must be right. Finding these requires human judgment, not pattern matching.
Systems thinking
The ability to understand how components interact. To anticipate emergent behaviour. To see second-order effects. To recognise when optimising one part will break another.
AI generates code that works in isolation. Humans must understand how it fits into larger systems.
Failure imagination
The ability to ask "what could go wrong?" and generate plausible answers. To think adversarially. To imagine edge cases, malicious inputs, unexpected interactions.
The manifesto emphasises "critical impact": anticipating how failures could cause irreversible harm. This requires imagination and paranoia that AI doesn't possess.
Trade-off judgment
The ability to weigh competing concerns (speed vs. security, flexibility vs. simplicity, features vs. maintainability) and make sound decisions given specific context and constraints.
AI can generate options. It cannot weigh them against your specific values and circumstances.
Ethical reasoning
The ability to consider who's affected by a system, how they're affected, and whether that's acceptable. To identify stakeholders who aren't in the room. To ask "should we build this?" not just "can we build this?"
The manifesto calls for attention to "broader impacts": effects on people, organisations, and environment that might be indirect or delayed.
The Coding Test Trap
Most technical interviews still centre on coding tests. Whiteboard algorithms. Take-home projects. Live coding exercises.
These tests measure implementation skill. They tell you whether someone can write a sorting algorithm, manipulate data structures, or build a CRUD app under time pressure.
They tell you almost nothing about whether someone can:
- Understand a complex domain
- Identify the right problem to solve
- Anticipate what could go wrong
- Make sound trade-off decisions
- Communicate with non-technical stakeholders
- Take responsibility for outcomes
We're filtering for the skills that are being commoditised while ignoring the skills that are becoming critical.
It's like hiring drivers by testing their ability to shoe horses. Once relevant, now absurd.
What Good Hiring Looks Like Now
If you're serious about hiring for the AI age, here's what changes:
Test for problem understanding, not just solution generation
Give candidates an ambiguous problem. See how they clarify it. What questions do they ask? What assumptions do they surface? How do they structure their thinking? A candidate who jumps straight to coding without understanding the problem is showing you exactly how they'll use AI: prompt and ship without thinking.
Test for critical evaluation
Show candidates AI-generated code. Ask them to review it. Can they identify issues? Do they ask about edge cases? Do they consider security implications? Can they explain why the AI might have made certain choices? The manifesto emphasises that students should learn to "evaluate, critique, and refine AI-generated artifacts." Test for this explicitly.
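To make the exercise concrete, here's a minimal sketch of the kind of artifact you might put in front of a candidate. It's a hypothetical example I've written for illustration, not something from the manifesto: the function, database file, and table names are invented, and the flaws are planted deliberately. It's the sort of plausible-looking login helper an AI assistant might produce from a one-line prompt. A strong candidate should spot the string-interpolated SQL (injection risk), the plaintext password comparison, and the crash waiting to happen when no user is found.

```python
import sqlite3

def authenticate(username, password):
    # Open a connection to the application database (hypothetical "app.db")
    conn = sqlite3.connect("app.db")
    cursor = conn.cursor()

    # Look up the stored password by interpolating user input straight into SQL
    query = f"SELECT password FROM users WHERE username = '{username}'"
    cursor.execute(query)
    row = cursor.fetchone()

    # Compare the stored password with the one supplied
    # (crashes if no row was found; assumes passwords are stored in plaintext)
    if row[0] == password:
        return True
    return False
```

What you're listening for isn't a perfect fix. It's whether the candidate asks about inputs, edge cases, and consequences before declaring the code fine.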
Test for failure imagination
Describe a system. Ask: "What could go wrong?" See how many failure modes they generate. Do they think about adversarial cases? Scale issues? Integration problems? Human factors? People who can imagine failures prevent them. People who can't are surprised when they happen.
Test for communication and alignment
Can they explain technical concepts to non-technical people? Can they translate business requirements into technical approaches? Can they negotiate trade-offs with stakeholders? These skills matter more when AI handles implementation. The human job is increasingly about alignment and communication.
Test for judgment under uncertainty
Present a scenario with incomplete information. Ask them to make a decision and explain their reasoning. How do they handle ambiguity? Do they acknowledge what they don't know? Can they make progress without perfect information? Real problems are always ambiguous. People who freeze without complete specs are less valuable when speed of iteration increases.
The Team Composition Shift
This isn't just about individual hires. It's about team composition.
The old model:
Mostly implementers, with a few thinkers (architects, analysts, product managers) providing direction.
The new model:
Mostly thinkers, with AI providing implementation capacity.
When one developer with AI can implement what previously took five, you don't need five implementers. You might need two, plus:
- A domain expert who deeply understands the problem space
- A systems thinker who sees how pieces interact
- Someone focused on security and failure modes
- Someone focused on user experience and stakeholder needs
The ratio of thinkers to implementers inverts.
This is uncomfortable for organisations built around implementation capacity. It means different headcount, different skills, different career paths, different comp structures.
But the discomfort of changing is less than the cost of building teams optimised for a world that no longer exists.
The Education Pipeline Problem
Here's the systemic issue: we're not producing enough thinkers.
Computer science curricula still emphasise implementation. Algorithms, data structures, languages, frameworks. These are the skills that got people jobs for fifty years.
"Modeling should remain central in computing curricula as a means of cultivating reflective, ethical, and systematic thinking... Students should learn not only to create models but also to evaluate, critique, and refine AI-generated artifacts."
Reflective thinking. Ethical thinking. Systematic thinking. Evaluation and critique.
These are not what most CS programs emphasise. They're producing graduates optimised for implementation, graduates who will compete directly with AI rather than complement it.
If you're hiring, this means:
- Don't over-index on CS degrees
- Look for diverse educational backgrounds (philosophy, systems engineering, domain expertise)
- Value experience that developed judgment, not just technical skill
- Consider career changers who bring thinking skills from other fields
The best thinkers might not look like traditional developers. That's a feature, not a bug.
The Uncomfortable Truth for Current Developers
If you're a developer reading this, I'm not saying your skills are worthless. Implementation skills still matter, especially for reviewing and refining AI output.
But I am saying: implementation skills alone are no longer enough.
The developers who will thrive are the ones who can do what AI can't. Who can understand problems deeply, reason about consequences, make sound judgments, and take responsibility for outcomes.
If your entire value proposition is "I can write code," you're competing with AI. And AI is getting better every month while working for free.
If your value proposition is "I can understand complex problems, make sound decisions about solutions, and ensure what gets built is actually right," you're complementing AI. You're the human in the loop that makes AI-assisted development actually work.
The manifesto describes this as the shift from System 1 (fast, pattern-matching, what AI does) to System 2 (slow, deliberate reasoning, what humans must do).
You want to be System 2. That's where the value is going.
The Hiring Manager's Dilemma
I know this is hard. Hiring for thinking is harder than hiring for implementation.
Implementation skills are easy to test. Can they write a function? Does it work? How efficient is it? There are right answers.
Thinking skills are fuzzy. How do you test for judgment? How do you measure problem understanding? How do you evaluate ethical reasoning?
Here's my honest answer: it takes more effort. It requires interviewers who can evaluate thinking, not just check code output. It requires scenarios and discussions, not just coding tests. It requires judgment about judgment.
But consider the alternative: hiring people optimised for the old world, then wondering why your AI-augmented team isn't producing better outcomes than before.
You get what you select for. If you select for implementation, you get implementers. If you need thinkers, select for thinking.
The Job Description Rewrite
Here's a practical exercise: rewrite your job descriptions.
Old emphasis:
- 5+ years experience in Python/JavaScript/whatever
- Proficiency with React/Angular/whatever framework
- Experience with AWS/GCP/whatever cloud
- Strong debugging and problem-solving skills
- Ability to write clean, maintainable code
New emphasis:
- Ability to decompose complex, ambiguous problems
- Track record of identifying requirements others missed
- Experience anticipating and preventing system failures
- Demonstrated judgment in making trade-off decisions
- Ability to evaluate and improve AI-generated outputs
- Strong communication with technical and non-technical stakeholders
- Understanding of security, ethics, and broader system impacts
Yes, you still need people who can work with code. But that's table stakes, not a differentiator.
The differentiator is thinking. Hire for it.
The Future Belongs to Thinkers
In ten years, the organisations that win will be the ones that figured this out early.
They'll have teams of people who understand problems deeply, make sound judgments, and use AI as a force multiplier for their thinking.
The organisations that lose will be the ones still hiring implementers, wondering why their AI tools aren't producing the results they expected.
AI doesn't think. It generates. Generation without thinking is how you get vibe-coded disasters, security vulnerabilities, and systems that work technically but fail practically.
Thinking is the human job now.
It's the scarce resource. It's what creates value.
Stop hiring coders. Start hiring thinkers.
Written by
Jason La Greca
Jason La Greca is the founder of Teachnology. He's hired a lot of developers over twenty years and has learned (sometimes painfully) that implementation skill and thinking skill are not the same thing. Teachnology helps organisations build teams that can actually leverage AI.
The SAFE-AI Manifesto referenced in this article was authored by Lukyanenko, Samuel, Tegarden, Larsen, and 45 additional researchers.
Ready to build thinking capability? Take the AI Readiness Assessment to understand your team's capabilities.
Need help building teams for the AI age? Learn how Teachnology Advisory helps organisations hire and develop thinkers.