AI can now produce almost anything you ask for.
Drafts, analysis, code, research, strategies, plans, campaigns, reports. The raw production of cognitive work has become cheap, fast, and increasingly good. A first draft that took a junior employee two hours now takes ten minutes. Research that required a specialist can be done in an hour with good prompts.
This is remarkable. And it changes nothing about what makes an organisation actually capable.
Because here's what AI cannot do: it cannot decide which of ten drafts is the right one. It cannot take accountability when a recommendation turns out to be wrong. It cannot develop the taste that distinguishes excellent from merely acceptable.
These three things—judgment, accountability, and taste—are what separate capable organisations from those that merely have access to capable tools.
And most organisations never explicitly built them.
The Three-Layer Reality
Nate B Jones does an amazing job of defining the three-layer reality. If you're interested in AI and don't subscribe to his channel… you probably should. To summarise, though: think of the work that happens in any knowledge-intensive organisation as existing in three layers.
Layer 1: Cognitive Production. This is the drafting, analysing, coding, researching, planning, and generating that makes up most visible work. It's the stuff that fills calendars and produces deliverables. AI has made the marginal cost of this work collapse towards zero.
Layer 2: Judgment and Accountability. This is deciding which outputs are good. Signing off on recommendations. Owning the outcome when things go wrong. Choosing between options when the data doesn't give a clear answer. This layer requires humans who are both capable of good judgment and authorised to be accountable.
Layer 3: Physical Execution. This is showing up, fixing, building, caring. Work constrained by atoms rather than bits. No matter how good AI gets at generating text, it cannot show up at your house and fix your furnace.
Here's what's happening: AI is flooding Layer 1 with unprecedented abundance. The volume of cognitive production is exploding, not contracting. Organisations aren't producing less analysis because AI makes it cheap. They're producing far more.
When one layer becomes abundant, the other layers become the binding constraints. And for most knowledge-intensive organisations, that means Layer 2 is now the bottleneck.
Judgment. Accountability. Taste.
These were always important. But they were often implicit, embedded in experienced people who "just knew" what good looked like. Now, with AI generating ten options for every decision, the absence of these capabilities becomes painfully visible.
Judgment: The Capability Nobody Trained For
Judgment is the ability to make good decisions with incomplete information.
Not decisions where the data tells you the answer. Those are easy. Judgment is required precisely when the data doesn't tell you the answer. When there are trade-offs. When reasonable people could disagree. When you have to commit to a direction before you can know if it's right.
AI is very good at generating options. It's very good at presenting information. It's getting better at analysis. But it cannot exercise judgment in the way humans mean when they use the word.
Why not? Because judgment requires something AI doesn't have: skin in the game.
When you exercise judgment, you're making a bet. You're saying "I believe this is the right path, and I'm willing to be proven wrong." That willingness to be wrong, to own the consequences of the decision, is what makes judgment meaningful.
AI can tell you the probability of various outcomes. It can simulate scenarios. But when you ask it "what should we do?", you're asking it to take a position it cannot actually hold. It has no stake. It will not be affected by the outcome. Its "judgment" is really just sophisticated pattern matching dressed up in confident language.
The problem for organisations is that judgment was rarely developed deliberately. It was assumed to emerge naturally as people gained experience. Senior people had good judgment because they'd been around long enough to develop it.
But many organisations hollowed out their senior ranks over the past two decades. They cut costs. They flattened hierarchies. They outsourced expertise. They created environments where experienced people left and weren't replaced.
Now they have AI that can produce unlimited cognitive work, and nobody with the judgment to evaluate it.
What developing judgment actually requires:
- Exposure to decisions and their consequences over time
- Permission to make mistakes and learn from them
- Access to people with good judgment who can explain their reasoning
- Practice making calls when the answer isn't obvious
- Feedback loops that connect decisions to outcomes
None of this happens in a training programme. It happens through structured experience, mentorship, and an organisational culture that values learning over blame.
Accountability: The Capability Organisations Deliberately Destroyed
Accountability is simpler than judgment but somehow rarer.
Accountability means someone owns the outcome. When things go wrong, there's a person who says "that was my call, I got it wrong, here's what I learnt." When things go right, there's a person who can explain why the decision was made and what made it work.
AI cannot be accountable. This isn't a limitation that will be solved with better models. It's fundamental. Accountability requires a being that can be affected by consequences, that has a reputation at stake, that can be held responsible in a meaningful way.
When AI generates a recommendation and that recommendation fails, who is accountable? Not the AI. It doesn't care. It has no reputation to protect, no career to consider, no relationships to maintain.
The human who approved the recommendation is accountable. But here's the problem: many organisations have spent years making accountability fuzzy.
They've created committee structures where no individual owns decisions. They've implemented approval processes that diffuse responsibility across so many people that nobody feels personally accountable. They've built cultures where the safest thing to do is never to make a call that could be traced back to you.
This was already a problem before AI. Now it's critical.
Because when AI is generating the options, the only human value-add is the judgment to choose well and the accountability to own the choice. If your organisation has structured itself to avoid individual accountability, you've eliminated the one thing that humans still exclusively provide.
What accountability actually requires:
- Clear ownership of decisions (not committees, not consensus, individuals)
- Authority matched to responsibility (you can't be accountable for outcomes you couldn't influence)
- Consequences that matter (both positive and negative)
- A culture that distinguishes good decisions from good outcomes
- Leaders who model accountability rather than blame-shifting
The last point matters most. When senior leaders say "that was my call" when things go wrong, they create permission for others to do the same. When they blame circumstances, systems, or other people, they teach everyone that accountability is for suckers.
Taste: The Capability That Can't Be Specified
Taste is the hardest of the three to define, and maybe the most important.
Taste is knowing what good looks like before you can articulate why. It's the ability to recognise quality in work, to distinguish excellent from acceptable, to feel when something isn't quite right even if you can't immediately explain what's wrong.
AI can produce work that meets specifications. It can follow instructions. It can optimise for measurable criteria. But specifications and criteria never fully capture what makes something excellent.
Think about any domain you know well. There's a difference between work that checks all the boxes and work that's actually good. The difference is taste. And it's real even though it resists formal definition.
The challenge is that taste is developed through immersion, not instruction.
You develop taste in design by looking at thousands of designs and noticing what works. You develop taste in writing by reading widely and paying attention to what resonates. You develop taste in strategy by seeing many strategies play out and observing which approaches succeed.
This takes time. It takes exposure. It takes caring about quality enough to pay attention.
And it's getting rarer.
The same forces that eroded judgment and accountability have eroded taste. When you offshore work to the cheapest provider, you're not developing taste. When you measure everything by speed and cost, you're training people that quality doesn't matter. When you promote based on managing up rather than craft excellence, you're selecting against taste.
Now AI produces work at unprecedented volume, and there aren't enough people with developed taste to evaluate it.
What developing taste actually requires:
- Immersion in high-quality examples of the relevant domain
- Mentorship from people with developed taste who can articulate their reactions
- Time to reflect on what works and what doesn't
- An environment that values quality, not just output
- Permission to reject work that isn't good enough, even if it's "fine"
Taste cannot be rushed. You cannot develop it in a bootcamp. You can only develop it through sustained attention to quality over time.
The Capability Crisis
Put these together and you see the real problem organisations face.
AI has made cognitive production abundant. Organisations can now produce more drafts, more analyses, more code, more content than ever before. The volume of Layer 1 work is exploding.
But the capabilities to do anything useful with that abundance—judgment, accountability, and taste—were never built deliberately. They were emergent properties of having experienced people around. And many organisations no longer have enough experienced people.
The result is a capability crisis.
Organisations generate more options than they can evaluate. They produce more work than anyone can take accountability for. They create more output than anyone has the taste to curate.
The bottleneck has shifted. It's no longer "can we produce enough?" It's "can we make good decisions about what we produce?"
And the answer, for many organisations, is no.
What Capable Organisations Do Differently
The organisations that will thrive in this environment are those that explicitly develop judgment, accountability, and taste as organisational capabilities. Not as nice-to-haves. As strategic priorities.
For judgment:
They create structured opportunities for people to make decisions and see consequences. They pair less experienced people with mentors who explain their reasoning. They build feedback loops that connect choices to outcomes. They tolerate mistakes made in good faith and treat them as learning opportunities.
They don't outsource decisions to consultants. They don't hide behind data that doesn't actually resolve the uncertainty. They build the muscle of making calls under ambiguity.
For accountability:
They assign clear owners to decisions. They match authority to responsibility. They create cultures where saying "that was my call" is respected rather than punished. They distinguish between good decisions and good outcomes, understanding that you can make the right call and still get unlucky.
They don't use committees to diffuse responsibility. They don't create approval chains that ensure nobody feels accountable. They don't punish reasonable bets that don't work out.
For taste:
They invest in developing craft excellence. They hire and retain people with high standards. They create environments where "good enough" isn't good enough. They expose people to excellent work in their domain. They give people time to develop quality rather than optimising only for speed.
They don't treat everything as interchangeable. They don't measure success purely in volume or efficiency. They don't assume AI-generated output is good just because it was generated.
The Uncomfortable Truth
Here's what makes this hard: you can't buy your way out of a capability crisis.
You can buy AI tools. You can hire consultants. You can implement new systems and processes. But you cannot purchase judgment. You cannot outsource accountability. You cannot acquire taste through procurement.
These capabilities have to be built, internally, over time, through deliberate effort.
Organisations that spent years hollowing out expertise to cut costs are now discovering that they hollowed out exactly what they need most. They have the tools to produce abundant cognitive work. They don't have the people to make it valuable.
The rebuild is possible. But it requires recognising that the bottleneck has shifted. The constraint isn't production anymore. The constraint is the human capacity to evaluate, decide, and own outcomes.
Investing in AI to produce more cognitive work, when you already can't evaluate what you produce, is not a strategy. It's an acceleration of chaos.
The real investment is in developing the judgment to choose well, the accountability to own decisions, and the taste to distinguish excellent from acceptable.
That's what makes an organisation capable.
And in a world of abundant AI-generated output, it's the only thing that will matter.
Where to Start
If you're a leader looking at your organisation and recognising this pattern, here's where to begin.
Audit your accountability. Pick any significant decision made in the last quarter. Can you identify a single person who owned that decision? Not a committee. Not "leadership." One person. If you can't, you have an accountability problem.
Map your judgment depth. For each critical function, how many people have the experience and capability to make good calls under uncertainty? If the answer is one or two, you have a judgment concentration risk. What happens when they leave?
Test your taste. Take recent work product and ask: is this excellent, or is it acceptable? Is anyone in the organisation empowered to send it back and demand better? Does anyone have the developed taste to know the difference?
These questions will tell you where you stand. And they'll point to where the real work needs to happen.
Because AI doesn't make these capabilities less important. It makes them the only things that matter.
This is the first in a series exploring how AI is reshaping organisational capability. Next: "The Danger of 'We Produce Cognitive Work'", which explores why mid-tier professional services firms are facing an existential threat, and what they can do about it.
If you're a leader who wants to develop real AI capability, not just tool adoption, the AI Capability Intensive is designed for exactly this challenge. Four weeks of building judgment about AI, learning to evaluate claims, and developing the taste to distinguish real value from hype.
Written by
Jason La Greca
Founder of Teachnology. Building AI that empowers humans, not replaces them.
Connect on LinkedIn