Let me describe a scene you'll recognise.
There's a meeting. It's about AI strategy. Around the table: a senior executive who read an article about ChatGPT on a flight, a consultant from a firm whose AI credentials are a slide deck they bought from another firm, a vendor rep whose entire job is to sell you something, an IT director who's been blocking innovation for fifteen years, and someone from legal who's there to say no to everything.
These people are about to decide how your organisation responds to the most significant technological shift in decades.
None of them have ever built anything with AI. None of them use AI tools daily. None of them could explain how a large language model actually works. None of them have shipped a product, automated a workflow, or solved a real problem with these technologies.
They know two-fifths of f*%k all. And they're determining your future.
Why are you letting this happen?
The Blind Leading the Blind
Here's what's actually going on in most organisations' AI decision-making:
The executives are scared.
They know AI is important because everyone says it is. They don't understand it, so they do what executives always do with things they don't understand: commission a strategy, hire consultants, form a committee. Activity that feels like progress while avoiding actual decisions.
The consultants are opportunists.
Six months ago they were selling digital transformation. Now they're selling AI transformation. Same slides, different buzzwords. They've never built an AI application, never integrated a model into a workflow, never dealt with hallucinations or prompt engineering. But they'll happily charge you $500K for a roadmap.
The vendors are salespeople.
Their job is to sell you their platform, regardless of whether it fits your needs. They'll demo impressive capabilities that have nothing to do with your actual problems. They'll promise features that are "on the roadmap." They'll lock you into contracts before you understand what you're buying.
IT leadership is defensive.
Many IT directors built their careers on controlling technology decisions, managing vendors, and saying no to shadow IT. AI threatens that model. So they slow-roll everything, demand impossible security guarantees, and treat every new tool as a threat rather than an opportunity.
Legal and compliance are professional pessimists.
Their job is to identify risks, so that's what they do. They'll find seventeen reasons why you can't use AI, without ever weighing those risks against the risk of doing nothing while competitors move ahead.
Put these people in a room and what do you get? Lowest common denominator decisions. Expensive pilots that go nowhere. Policies that ban useful tools while accomplishing nothing. Strategies that are obsolete before they're implemented.
The people with the least understanding have the most influence. The people actually using AI (often individual contributors who've figured it out despite their organisation, not because of it) are nowhere near the decision-making table.
The Expertise Vacuum
Let me be specific about what "knowing two-fifths of f*%k all" looks like:
They've never prompted a model.
Not seriously. Maybe they've typed "write me an email" into ChatGPT once. They don't understand that prompting is a skill, that context matters, that you can get wildly different outputs depending on how you ask.
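If that sounds abstract, here's what the difference actually looks like. A minimal sketch using the Anthropic Python SDK; the model name, the prompts, and the setup are illustrative assumptions, not a prescription:

```python
import anthropic

# Assumes ANTHROPIC_API_KEY is set in your environment.
client = anthropic.Anthropic()

# The executive's prompt: no context, no constraints. Generic output guaranteed.
lazy = "Write me an email."

# The practitioner's prompt: role, audience, goal, tone, and length all specified.
skilled = """Write an email on behalf of a project manager.
Audience: a client whose deliverable will be two weeks late.
Goal: explain the delay, take ownership, propose a revised date.
Tone: direct and apologetic, no corporate filler.
Length: under 150 words."""

for prompt in (lazy, skilled):
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative; use whatever model you have access to
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.content[0].text)
    print("---")
```

Same model, same API call, wildly different output. That's the skill they don't know exists.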
They don't understand capabilities or limitations.
They either think AI can do everything (it can't) or nothing useful (it can). They don't know what models are good at, what they struggle with, or how those boundaries are shifting monthly.
They've never built anything.
Never integrated an API. Never automated a workflow. Never created a tool that solves a specific problem. Their entire understanding is theoretical, based on demos, articles, and vendor presentations.
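And to be clear about the bar here: "integrating an API" and "automating a workflow" can mean twenty lines of code. A hedged sketch, again assuming the Anthropic Python SDK; the folder name and file layout are hypothetical:

```python
import anthropic
from pathlib import Path

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# Hypothetical folder of raw meeting notes; swap in whatever tedious text you drown in.
notes_dir = Path("meeting_notes")

for note in sorted(notes_dir.glob("*.txt")):
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model name
        max_tokens=400,
        messages=[{
            "role": "user",
            "content": "Extract the action items from these meeting notes as a "
                       "bulleted list with owners and deadlines:\n\n" + note.read_text(),
        }],
    )
    # Write the action list alongside the original note.
    note.with_suffix(".actions.md").write_text(response.content[0].text)
    print(f"Processed {note.name}")
```

That's the whole thing. The people setting your AI strategy have never done even this.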
They don't know what's possible now versus later.
They confuse what AI might do in five years with what it can do today. They either wait for capabilities that already exist or chase science fiction while ignoring practical applications.
They can't distinguish hype from reality.
Every vendor claims AI capabilities now. Most are bullshit: traditional software with a chatbot bolted on, or "AI-powered" features that are barely functional. Without hands-on experience, you can't tell the difference.
These are the people setting your AI policy. Writing your AI strategy. Deciding which tools you can and can't use. Evaluating vendors. Allocating budget.
They're making decisions that will shape your organisation for years, based on an understanding of AI that's superficial at best and completely wrong at worst.
Meanwhile, Your Competitors...
While your committee debates AI governance frameworks, somewhere a smaller competitor has:
- Given their team access to Claude or GPT-4 and said "figure out how to use this"
- Built internal tools that automate tedious work
- Shipped AI-enhanced features to customers
- Learned from failures and iterated
- Developed actual expertise through doing
They didn't wait for a strategy. They didn't hire consultants. They didn't form a committee. They just started using the tools, building things, and learning.
Now they have something your organisation doesn't: practical knowledge. They know what works and what doesn't. They've built capability. They can move faster because they've already made the mistakes and learned from them.
Your organisation is still debating whether to allow ChatGPT on corporate devices.
The gap is widening every month.
And it's not because your competitors are smarter or better resourced. It's because they didn't let people who know nothing about AI make all the decisions about AI.
The Consultant Problem
I need to talk specifically about consultants, because they're a huge part of this problem.
The big consulting firms have pivoted hard to AI. It's the hot thing, so they're selling it. But here's what they're actually offering:
Recycled frameworks.
The same change management and transformation frameworks they've been selling for twenty years, with "AI" inserted wherever it used to say "digital" or "cloud." The methodology is identical because they don't actually understand what's different about AI.
Junior staff doing the work.
The partner who sold the engagement has impressive credentials. The people actually doing the work are two years out of university and learning on your dime. They've never built AI applications either; they're just better at making slides about them.
Vendor relationships disguised as advice.
Many consulting recommendations mysteriously align with vendors who have partnership agreements with the consulting firm. You're not getting independent advice; you're getting a sales channel with a strategy wrapper.
Endless discovery phases.
Consultants get paid by the hour or the week. They have every incentive to extend engagements, add scope, and defer decisions. An eight-week strategy becomes sixteen weeks becomes a permanent advisory relationship. Meanwhile, nothing actually gets built.
No accountability for outcomes.
When the AI strategy fails (and most do), the consultants are long gone. They delivered their roadmap. Implementation was someone else's problem. They'll be happy to come back and sell you a "course correction" engagement.
I've watched this happen. Multiple times. Organisations spend hundreds of thousands on AI strategy, end up with a document that says obvious things in complicated ways, and are no closer to actually using AI effectively than when they started.
The consultants knew two-fifths of f*%k all. They were just better at hiding it.
The People Who Actually Know
Here's who does understand AI in your organisation:
The individual contributor who's been using it daily.
There's someone (probably several someones) who figured out that AI could help them do their job better and just started using it. They didn't ask permission. They didn't wait for a strategy. They experimented, learned, and now they're dramatically more productive than their peers.
The curious technical person.
Someone in your IT department or product team who's been building things on weekends. Who's integrated APIs, built prototypes, played with different models. They have a practical understanding of what works and what doesn't.
The frustrated operator.
Someone who sees inefficiencies in their workflow every day and has been thinking about how AI could help. They don't have technical skills, but they have deep domain knowledge and specific problems to solve.
The recent hire.
Someone who joined in the last year or two, probably younger, who's grown up with these tools and uses them instinctively. To them, not using AI is like not using Google: incomprehensible.
These people exist in your organisation. They have practical knowledge that your strategy committee lacks entirely. And they're almost certainly not in the room when AI decisions are made.
You're ignoring your internal experts while paying external non-experts to tell you what to do.
What This Costs You
The cost of having AI decisions made by people who don't understand AI:
Bad tool choices
You buy platforms that don't fit your needs because the decision-makers couldn't evaluate them properly.
Missed opportunities
Practical applications that could save time and money are never identified because the people who'd see them aren't consulted.
Talent loss
Your best people (the ones who understand this stuff) get frustrated and leave for somewhere that lets them actually use their skills.
Capability atrophy
Every month spent in strategy paralysis is a month you're not building practical skills. The gap between what you can do and what you should be able to do gets wider.
Wasted money
Consultants who deliver nothing. Vendors who oversell and underdeliver. Pilots that go nowhere. Strategies that gather dust.
Competitive disadvantage
While you're deliberating, others are doing. When you finally get your act together, they're years ahead. Some of that gap may never close.
The people making your AI decisions are expensive in every possible way.
What Should Happen Instead
Here's what competent AI decision-making looks like:
Put practitioners in the room.
The people actually using AI should be part of strategic discussions. Not as observers, as decision-makers. Their practical knowledge is more valuable than any consultant's framework.
Value experimentation over strategy.
Stop trying to figure everything out in advance. Give people tools, time, and permission to experiment. Learn by doing. Strategy should emerge from practical experience, not precede it.
Start small and iterate.
Don't launch a massive AI transformation program. Pick one problem, build one solution, learn from it. Then do it again. Capability compounds through repetition, not through planning.
Be sceptical of consultants.
If someone's selling you AI strategy, ask them what they've built. Ask for specific examples of AI applications they've personally created. If they can't answer, they're selling theory, not expertise.
Evaluate vendors with informed buyers.
Don't let people who've never used AI tools evaluate AI vendors. Include practitioners who can ask hard questions, spot bullshit, and distinguish real capabilities from marketing.
Accept imperfection.
AI tools aren't perfect. They hallucinate, they make mistakes, they require human oversight. Waiting for perfect solutions means waiting forever. Start using imperfect tools now and develop the skills to use them well.
Fire the blockers.
Some people will never get on board. They'll find reasons to say no to everything. At some point, they become an obstacle that has to be removed. Harsh but true.
The Question You Need to Ask
Next time you're in a meeting about AI (a strategy session, a vendor evaluation, a policy discussion) look around the room and ask yourself:
Who here has actually built something with AI?
Not read about it. Not attended a conference. Not watched a demo. Built something. Shipped something. Solved a real problem with these tools.
If the answer is nobody, you have a problem. You're letting people who know two-fifths of f*%k all make decisions that will shape your organisation for years.
Find the people who actually understand. Get them in the room. Listen to them. Give them authority.
Or keep doing what you're doing, and watch more capable organisations leave you behind.
Your choice.
Written by
Jason La Greca
He's spent 20 years watching organisations make bad technology decisions, and he's tired of it. Teachnology helps organisations build actual AI capability: not strategies, not roadmaps, capability.
Connect on LinkedIn

Ready to stop letting the uninformed determine your future?
Take the AI Readiness Assessment to see where you actually stand.
Start Assessment

Want help building real capability?
Teachnology Advisory helps organisations build, not just strategise.
Explore Advisory