Most organisations aren't ready for AI. The reason has nothing to do with AI.
There's a conversation happening in IT leadership right now about AI readiness. It usually focuses on which tools to adopt, how to train staff, and what policies to write. All reasonable questions. All missing the point.
The reason most organisations aren't ready for AI is that their identity and access management is a mess. And it has been for years. AI just made the consequences impossible to ignore.
The Foundation That Doesn't Exist
Identity and access management is supposed to be the cornerstone of enterprise IT. Every security framework says so. Every audit says so. Every vendor pitch says so.
In practice, most organisations have something closer to this:
- Active Directory groups created in 2014, half of which nobody can explain any more
- Users with access to systems they haven't touched in three years because nobody runs access reviews
- Privileged access management that's either nonexistent or limited to a shared admin password in a spreadsheet
- User attributes (department, role, location, classification level) that are either empty, wrong, or so out of date they're useless for any kind of policy enforcement
- No sensitivity labelling on documents, SharePoint sites, Teams channels, or chat history
That last one is the killer. Because when you deploy an AI assistant into an environment where nothing is classified, the AI treats everything as equally accessible. It doesn't distinguish between a draft HR investigation and a team lunch menu. If the user has technical access to both, the AI will happily surface both.
What "AI Readiness" Actually Means
The conversation about AI readiness in most organisations goes something like: "Should we deploy Copilot?" followed by a pilot, some training, and a rollout plan.
The conversation that should happen first: "Do we actually know what data we have, who can access it, and how sensitive it is?"
Most organisations can't answer that question. And deploying AI on top of that uncertainty is like giving every employee a search engine that indexes your entire organisation's knowledge, including the things that were only accessible because permissions were sloppy, not because anyone intended them to be shared.
This is the real AI governance problem. It predates AI by a decade. Organisations just got away with it because humans are slow. A person with overly broad SharePoint access might never stumble across the finance team's restructuring plans. An AI assistant will find them in seconds if someone asks the right question.
Identity Is Three Problems, Not One
When IT leaders talk about identity, they usually mean authentication. Can we verify who this person is? That's solved. MFA, SSO, passwordless. It works.
The harder problems are authorisation and attribution, and almost nobody does them well.
Authorisation is the ongoing question of what this person should be able to access right now, given their current role, project assignments, and need-to-know. Most organisations set permissions when someone joins and then never meaningfully update them. People accumulate access over years. They move teams, change roles, pick up project memberships, and keep every permission they ever had. The principle of least privilege exists in policy documents. In practice, most users have far more access than they need.
Attribution is the layer that makes everything else work: accurate, current, granular attributes on every identity. Department. Role. Location. Cost centre. Security clearance. Project membership. Manager. Employment type. When these attributes are populated correctly, you can build dynamic access policies that adjust automatically. When they're empty or wrong (which they are in most organisations), every access decision becomes manual, inconsistent, and eventually forgotten.
Without solid attribution, you can't do role-based access control properly. You can't automate access reviews. You can't enforce data loss prevention policies. And you definitely can't govern what AI does on behalf of your users.
The Data Classification Gap
Even if identity and access management were perfect (it isn't), there's a parallel problem that's arguably worse: most organisations have never classified their data.
Documents sit in SharePoint with default permissions. Teams channels are created with broad membership because it's easier than being precise. Emails contain sensitive information with no labelling. Chat histories contain strategic discussions that are technically accessible to anyone in the tenant.
Microsoft gives you the tools. Sensitivity labels. Information barriers. Data loss prevention policies. Conditional access based on document classification. The tooling exists and has existed for years.
Almost nobody uses it meaningfully. Because classification is boring, labour-intensive, and requires someone to make thousands of decisions about what's sensitive and what isn't. It's the kind of work that never gets prioritised because it doesn't have a visible output. Until an AI assistant surfaces a confidential document in response to a casual query, and then suddenly it's everyone's priority.
What Modern IAM Should Look Like
If I were building an identity and access management programme from scratch today, knowing that AI assistants are going to be embedded in every workplace tool within the next 12 months, this is what I'd prioritise.
1. Get your attributes right first
Before you touch anything else, audit your identity store. Are department, role, location, and manager fields accurate for every user? Are they being maintained when people move roles? Is there a process for keeping them current, or does someone update them manually when they remember?
This is unglamorous work. It's also the foundation everything else depends on. Dynamic group membership, conditional access policies, automated access reviews, AI governance rules: they all rely on attributes being correct. If your attributes are wrong, every policy built on top of them is wrong.
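To make the dependency concrete, here's a minimal sketch of what "dynamic group membership" means in practice: a membership rule evaluated against user attributes. The attribute names and rule are invented for illustration (real platforms like Entra ID have their own rule syntax); the point is that a blank or wrong attribute silently breaks the policy built on it.

```python
# Sketch: dynamic group membership as a rule over identity attributes.
# Attribute names and the rule itself are illustrative, not any vendor's syntax.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    department: str
    role: str
    location: str

def finance_analysts(user: User) -> bool:
    """Hypothetical membership rule: finance department AND analyst role."""
    return user.department == "Finance" and user.role == "Analyst"

users = [
    User("alice", "Finance", "Analyst", "Sydney"),
    User("bob", "", "Analyst", "Sydney"),  # empty department: rule fails silently
]

members = [u.name for u in users if finance_analysts(u)]
print(members)  # bob should probably be in this group; his blank attribute excludes him
```

Notice that bob is excluded not because anyone decided he should be, but because his department field was never populated. That's the failure mode of every attribute-driven policy: it doesn't error, it just quietly does the wrong thing.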
2. Classify your data before you deploy AI
Every document, every site, every channel needs a sensitivity classification. At minimum: public, internal, confidential, highly confidential. This should be enforced at creation time, not retrofitted later.
Yes, this is a massive undertaking for an existing tenant with years of unclassified content. Start with the high-risk areas: HR, finance, legal, executive communications. Use AI to help with the classification (Microsoft Purview can auto-label based on content inspection). But don't deploy a general-purpose AI assistant until you have at least your sensitive data labelled and protected.
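The mechanics of auto-labelling are simpler than they sound: pattern rules ordered from most to least restrictive, applied to content at creation or on a crawl. The sketch below is illustrative only; the patterns and label names are made up, and a real deployment would use Purview's built-in sensitive information types rather than hand-rolled regexes.

```python
# Illustrative sketch of rule-based auto-labelling, in the spirit of content
# inspection tools like Purview. Patterns and label names are invented.
import re

# Ordered most restrictive first: the first match wins.
RULES = [
    (re.compile(r"\btfn[:\s]*\d{3}\s?\d{3}\s?\d{3}\b", re.I), "highly-confidential"),
    (re.compile(r"\b(restructur\w+|redundanc\w+)\b", re.I), "confidential"),
    (re.compile(r"\binternal use only\b", re.I), "internal"),
]

def suggest_label(text: str, default: str = "internal") -> str:
    """Return the most restrictive label whose pattern matches, else a safe default."""
    for pattern, label in RULES:
        if pattern.search(text):
            return label
    return default

print(suggest_label("Draft restructuring plan for the finance team"))  # confidential
print(suggest_label("Team lunch menu for Friday"))                     # internal
```

Even a crude rule set like this, pointed at HR and finance sites first, gets the highest-risk content labelled long before a manual classification project would finish.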
3. Implement privileged access management properly
PAM is the thing everyone knows they need and almost nobody implements well. Every admin account, every service account, every elevated permission should be managed through a PAM solution with just-in-time access, session recording, and automatic expiry.
This was important before AI. It's critical now. An AI agent running with admin credentials has the same access as the admin. If those credentials are always-on rather than just-in-time, the blast radius of any compromise (or misconfigured AI workflow) is enormous.
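The core of just-in-time access is small enough to sketch: a grant carries its own expiry, and anything past that expiry denies by default. This is a toy model (real PAM products like Entra PIM handle approval workflows, session recording, and credential vaulting on top), but it shows why always-on credentials are the problem: they have no `expires_at`.

```python
# Toy model of a just-in-time elevation grant with automatic expiry.
# Class and field names are invented for illustration.
import time

class JITGrant:
    def __init__(self, user: str, role: str, ttl_seconds: float):
        self.user = user
        self.role = role
        # The grant expires on a wall-clock-independent monotonic deadline.
        self.expires_at = time.monotonic() + ttl_seconds

    def is_active(self) -> bool:
        """An elapsed grant denies by default; there is nothing to clean up."""
        return time.monotonic() < self.expires_at

grant = JITGrant("alice", "GlobalAdmin", ttl_seconds=3600)
print(grant.is_active())  # True while the one-hour window is open

lapsed = JITGrant("alice", "GlobalAdmin", ttl_seconds=0)
print(lapsed.is_active())  # False: expiry needs no revocation step
```

The design point is that expiry is a property of the grant, not a task on someone's to-do list. An AI workflow holding a `JITGrant` loses access when the window closes, whether or not anyone remembers it exists.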
4. Build for AI agents as first-class identities
This is the part that's genuinely new. AI agents are accessing data on behalf of users, and in many cases they're accessing more data than the user would ever manually browse. The agent needs its own identity framework.
Questions to answer:
- When a user asks Copilot a question and it retrieves information from across the tenant, whose permissions apply? The user's? The agent's? A combination?
- If an AI agent is given access to a system to perform a task, how do you audit what it accessed and why?
- When someone builds an automated workflow that uses AI to process documents, what access does that workflow inherit? Does it persist after the person who created it leaves the organisation?
- How do you revoke an AI agent's access? Is there even a mechanism for that in your current setup?
Most organisations haven't thought about any of this. They're deploying AI agents with inherited user permissions and hoping for the best. That's the same approach that gave us the SharePoint permissions mess in the first place, just faster and at larger scale.
5. Continuous access verification, not annual reviews
Annual access reviews are a compliance exercise. Everyone clicks "approve" on everything because they don't have time to evaluate each permission individually. The review gets documented. Nothing changes. The cycle repeats.
Modern IAM needs continuous verification. If someone hasn't accessed a system in 90 days, their access should be automatically flagged for review or revoked. If someone's role changes, their permissions should be automatically adjusted based on their new attributes. If an AI agent hasn't been used in 30 days, its credentials should expire.
This only works if your attributes are right (see point 1) and your data is classified (see point 2). Everything connects.
Why This Matters Now
There's a window right now where organisations can get this right before AI makes their existing problems catastrophically visible. That window is closing.
Every major workplace platform is embedding AI. Microsoft Copilot. Google Gemini for Workspace. Slack AI. Salesforce Einstein. These tools are designed to surface information from across your entire environment. They're only as safe as the permissions and classifications underneath them.
An organisation that deploys AI on top of clean identity management and well-classified data gets a productivity tool. An organisation that deploys AI on top of broken permissions and unclassified data gets a data breach that runs 24/7 and calls itself a feature.
Identity has always been the cornerstone of IT. We've been saying it for 20 years. AI is the thing that finally forces organisations to mean it.
Identity, access, and data governance aren't AI problems. They're IT fundamentals that most organisations have been ignoring for a decade. AI just made the consequences immediate.
Build the Capability to Get This Right
- The Capable Organisation — The playbook for building internal capability instead of outsourcing critical decisions.
- Join the Community — IT leaders, builders, and practitioners working through exactly these problems.
- The HUMAN Protocol — A framework for deploying AI without losing your soul (or your data).
- From Gatekeeper to Enabler — Transforming enterprise governance for the AI era.
Written by
Jason La Greca
Founder of Teachnology. Building AI that empowers humans, not replaces them.
Connect on LinkedIn

Is your organisation building capability or just buying it?
Take the free 12-minute Capability Assessment and find out where you stand. Get a personalised report with actionable recommendations.