Most enterprise governance frameworks were designed for a world that no longer exists.
They were built when technology projects took years, when change was slow, and when the biggest risk was doing something wrong. The frameworks worked. They protected organisations from costly mistakes. They ensured consistency. They created accountability.
But somewhere along the way, protection became prevention. Governance became synonymous with "slow." Architecture Review Boards became bottlenecks. Security reviews became backlogs. Compliance became theatre.
I recently reviewed a typical enterprise governance framework. It was comprehensive, thoughtful, and well-designed. It covered everything: architecture principles, security standards, data governance, AI oversight, project management requirements.
It also described a process that takes 14-30 weeks before development even begins.
Fourteen to thirty weeks. Just to get permission to start building.
This isn't a criticism of the people who created these frameworks. They were solving real problems. They were preventing real failures. The frameworks represent accumulated wisdom about what can go wrong and how to prevent it.
But here's what's changed: the cost of being slow now exceeds the cost of being wrong.
The Hidden Tax
Every organisation pays a governance tax. It's the time between "we need this" and "we can start building this."
In the framework I reviewed, a medium-complexity project involves:
- Requirements analysis (2-6 weeks)
- Solution evaluation and RFP (4-7 weeks)
- Supplier selection (4-8 weeks)
- Vendor risk assessment (1-3 weeks)
- Architecture Review Board (1-2 weeks)
- AI Board review if applicable (2-4 weeks)
- Contract negotiation (2-4 weeks)
Then, if you're lucky, you can start development.
The framework notes helpfully that "many activities can be conducted in parallel." This is true. It's also an admission that organisations have built sequential processes for activities that don't need to be sequential.
Here's what this tax actually costs:
Speed to value. Whilst you're in governance, competitors are shipping. Customers are waiting. Problems are festering. The business case that justified the project is eroding.
Employee morale. Talented people don't want to spend months navigating approval processes. They want to build things. Every week in governance purgatory is a week they're questioning whether this is the right organisation for them.
Shadow IT. When official channels are too slow, people find unofficial channels. They use unsanctioned tools. They build workarounds. The governance you created to manage risk creates different risks.
Opportunity cost. Every project stuck in governance is capacity not available for the next project. The queue grows. The backlog expands. Eventually, governance teams spend more time managing the queue than actually governing.
None of this means governance is unnecessary. It means governance needs to evolve.
What Good Looks Like
The best governance frameworks I've seen share common characteristics. They're not less rigorous. They're differently rigorous.
Risk-proportionate. Not every project needs the same level of oversight. A customer-facing AI system needs more scrutiny than an internal reporting dashboard. Good governance triages ruthlessly, applying heavy process where risks are high and light process where risks are low.
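To make "triages ruthlessly" concrete: risk-proportionate governance can start as something as simple as a scoring rubric that routes each project to a review tier. A minimal sketch, where the factors, weights, and thresholds are entirely hypothetical and would need calibrating to your own risk appetite:

```python
# Hypothetical risk-triage rubric: score a project on a few factors
# and route it to a proportionate review tier.

RISK_FACTORS = {
    "customer_facing": 3,   # externally visible systems carry more risk
    "handles_pii": 3,       # personal data triggers privacy obligations
    "uses_ai": 2,           # AI systems warrant additional oversight
    "new_vendor": 2,        # unassessed suppliers add third-party risk
    "internal_only": 0,     # internal tools are the lowest-risk baseline
}

def triage(project: dict) -> str:
    """Return a review tier proportionate to the project's risk score."""
    score = sum(weight for factor, weight in RISK_FACTORS.items()
                if project.get(factor, False))
    if score >= 6:
        return "full board review"       # heavy process for high risk
    if score >= 3:
        return "targeted expert review"  # one reviewer, not a board
    return "self-service checklist"      # light touch for routine work

internal_dashboard = {"internal_only": True}
customer_ai_product = {"customer_facing": True, "handles_pii": True,
                       "uses_ai": True}
```

Under this rubric, the internal dashboard goes straight to a self-service checklist, while the customer-facing AI product earns a full board review. The point isn't the specific weights; it's that the routing decision is explicit, fast, and auditable.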
Parallel by default. Architecture review, security assessment, and compliance checks can happen simultaneously. They don't need to wait for each other. Good governance designs for parallelism, not sequence.
Self-service where possible. Most projects don't need a board to review them. They need clear standards and the tools to check compliance themselves. Good governance enables teams to self-assess, reserving human review for exceptions and edge cases.
Outcome-focused. The goal isn't to follow a process. It's to ship good technology safely. Good governance measures outcomes (quality, security, value delivered), not just process compliance.
Embedded, not external. Governance that happens to projects is slower than governance that happens within projects. Good governance embeds reviewers and standards into delivery teams, not outside them.
The question isn't whether to have governance. It's whether your governance is designed for 2015 or 2025.
The AI Opportunity
Here's where it gets interesting.
The same AI capabilities that are disrupting your business can transform your governance functions. Not by replacing human judgment, but by handling the volume work that buries your governance teams.
Consider what an Architecture Review Board actually does:
- Reviews solution designs against principles and standards
- Identifies patterns and anti-patterns
- Assesses risks
- Documents decisions
- Tracks exceptions and technical debt
Most of this is pattern recognition. It's comparing a proposed solution against known good patterns and known risks. It's checking whether standards are met. It's identifying where a design deviates from norms.
Pattern recognition is exactly what AI does well.
Pre-review analysis. Before a solution reaches the ARB, AI can analyse it against architecture principles, flag potential issues, and identify questions the board should ask. The board spends less time on obvious items and more time on judgment calls.
Standards compliance checking. AI can verify whether a solution meets documented standards automatically. Does it use approved authentication patterns? Does it follow integration standards? Is the data residency compliant? These are checkable facts, not judgment calls.
Risk identification. AI can scan a solution design and identify risk patterns based on previous projects. "This integration approach caused problems in three previous implementations" is valuable context for a board discussion.
Documentation generation. Architecture Decision Records, risk assessments, and compliance checklists can be drafted by AI from design inputs, leaving humans to review and refine rather than create from scratch.
Exception tracking. AI can maintain the catalogue of exceptions and exemptions, flag when they're expiring, and identify patterns suggesting a standard needs updating.
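Of these, standards compliance checking is the most mechanically checkable: if standards are expressed as machine-readable rules over a design manifest, compliance becomes a function call rather than a meeting. A minimal sketch, with hypothetical rule names and manifest fields (real standards catalogues would be far richer):

```python
# Hypothetical sketch: documented standards expressed as machine-checkable
# rules over a solution-design manifest. Rule names and fields are
# illustrative, not drawn from any real standards catalogue.

STANDARDS = {
    "approved authentication": lambda d: d.get("auth") in {"oidc", "saml"},
    "approved integration style": lambda d: d.get("integration") in {"rest", "events"},
    "data residency": lambda d: d.get("data_region") in {"au", "eu"},
}

def check_compliance(design: dict) -> list[str]:
    """Return the standards the design fails; an empty list means compliant."""
    return [name for name, rule in STANDARDS.items() if not rule(design)]

design = {"auth": "oidc", "integration": "ftp", "data_region": "au"}
# check_compliance(design) flags only the non-standard integration style
```

A board reviewing this design no longer spends time confirming the authentication pattern or data residency; it spends its time on the one flagged deviation and whether an exception is warranted.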
The same logic applies to security review:
- Threat models can be drafted automatically from architecture inputs
- Vulnerability assessments can be continuous, not point-in-time
- Policy compliance can be checked automatically against code and configuration
- Risk ratings can be suggested based on patterns from previous assessments
And to governance more broadly:
- Policy questions can be answered instantly through AI assistants
- Compliance status can be monitored continuously
- Risk assessments can be pre-populated with relevant factors
- Decision support briefs can be generated from available data
None of this removes humans from governance. It removes the mechanical work that prevents humans from doing what only humans can do: exercise judgment on novel situations.
The Transformation Path
Moving from governance-as-bottleneck to governance-as-enabler isn't a single project. It's a capability shift. Here's how organisations are approaching it:
Start with the queue. What's waiting for review right now? What's been waiting longest? The backlog tells you where governance is failing. Focus AI augmentation on the highest-volume, longest-wait reviews first.
Identify the repeatable. For each governance review, ask: "What percentage of this is checking known patterns versus exercising new judgment?" The checkable portions are candidates for AI assistance.
Design self-service paths. Create guided assessment tools that let teams check their own compliance. If a project passes automated checks and self-assessment, does it really need a board review? Maybe it just needs a board notification.
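That routing decision can be made explicit rather than left to custom. A minimal sketch of such a gate, with illustrative rules only:

```python
# Hypothetical self-service gate: if automated checks pass and the team's
# self-assessment is clean, the board is notified rather than convened.

def governance_path(automated_checks_passed: bool,
                    self_assessment_issues: int) -> str:
    """Route a project based on self-service results (illustrative rules)."""
    if automated_checks_passed and self_assessment_issues == 0:
        return "board notification"  # no meeting needed; record and proceed
    if automated_checks_passed:
        return "targeted review"     # a reviewer looks only at flagged items
    return "board review"            # failed checks escalate to the board
```

The design choice worth noting: the default path is the lightest one, and human review is triggered by exceptions rather than scheduled for everything.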
Embed expertise. Instead of projects coming to governance, governance goes to projects. AI assistants embedded in development workflows can flag issues in real-time, before they become expensive to fix.
Measure differently. Stop measuring governance by process compliance (did we follow the steps?) and start measuring by outcomes (did we ship good technology safely and quickly?). Time-to-approval is a metric that matters.
Maintain the human core. AI augmentation should free your best people to focus on what matters: novel situations, strategic decisions, genuine judgment calls. If AI handles 70% of the mechanical work, your experts can go deep on the 30% that requires expertise.
What Changes, What Doesn't
Some things should absolutely change:
Sequential processes become parallel. Architecture, security, and compliance reviews happen simultaneously, not in sequence.
Point-in-time reviews become continuous. Instead of a security review before launch, continuous monitoring throughout development.
Board meetings become exception handling. Routine approvals don't need twelve people in a room. Boards focus on exceptions, escalations, and strategic decisions.
Documentation becomes automated. Decision records, compliance checklists, and risk assessments are generated and maintained automatically.
Weeks become days. The 14-30 week pre-development governance timeline compresses to 2-4 weeks for most projects.
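The sequential-to-parallel shift is largely a scheduling change: when the reviews are independent, total elapsed time is the slowest review, not the sum of all three. A minimal sketch using Python's asyncio, with stubbed review bodies standing in for the real work:

```python
import asyncio

# Hypothetical sketch: the three pre-development reviews run concurrently
# instead of in sequence. The review bodies are stand-ins for real checks.

async def architecture_review(design: dict) -> str:
    await asyncio.sleep(0)  # stand-in for the actual review work
    return "architecture: ok"

async def security_review(design: dict) -> str:
    await asyncio.sleep(0)
    return "security: ok"

async def compliance_review(design: dict) -> str:
    await asyncio.sleep(0)
    return "compliance: ok"

async def govern(design: dict) -> list[str]:
    """Run all reviews in parallel; elapsed time is bounded by the
    slowest review, not the sum of all three."""
    return list(await asyncio.gather(
        architecture_review(design),
        security_review(design),
        compliance_review(design),
    ))

results = asyncio.run(govern({"name": "example"}))
```

The organisational equivalent is the same: start every independent review the moment its inputs exist, and let the critical path be set by the longest single review rather than by the queue.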
Some things should not change:
Human judgment on novel situations. AI can flag risks. Humans decide how to respond.
Accountability for outcomes. Someone is still responsible when things go wrong.
Principles over rules. Good governance is principle-based, not just rule-based. AI can check rules, but humans interpret principles.
Learning from failures. When things go wrong, the organisation learns and adapts. This requires human reflection, not just automated adjustment.
Ethical oversight. Especially for AI projects, human judgment on ethics, fairness, and societal impact remains essential.
The goal isn't to remove governance. It's to make governance fast enough to enable rather than prevent.
The Capable Governance Function
The organisations that get this right will have governance functions that are:
Fast. Days, not months. Parallel, not sequential. Continuous, not point-in-time.
Proportionate. Heavy scrutiny for high-risk decisions. Light touch for routine ones. No one-size-fits-all.
Embedded. Governance is part of how work happens, not something that happens to work.
AI-augmented. AI handles volume and pattern recognition. Humans handle judgment and exceptions.
Outcome-focused. Measured by what ships and how well, not by process compliance.
Trusted. Teams see governance as helpful, not hostile. They engage early because it makes things better, not because they have to.
This isn't a fantasy. It's what capable organisations are building right now. The tools exist. The patterns are emerging. The question is whether your governance functions will transform or be left behind.
Starting the Conversation
If you're a CIO, CTO, or enterprise architect reading this, here are questions worth asking:
What's our average time from project initiation to development start? If you don't know, that's the first problem.
What percentage of governance review is pattern-checking versus judgment? The pattern-checking portion is your AI augmentation opportunity.
When did we last retire a governance process? If the answer is "never" or "I don't know," governance is accumulating without pruning.
Do project teams see governance as helpful or hostile? Ask them. Their answers will tell you whether governance is enabling or preventing.
What would we do differently if governance took days instead of weeks? The answer reveals what's being suppressed by current timelines.
The enterprises that thrive in the AI era won't be those with the least governance. They'll be those whose governance moves at the speed of opportunity.
That's the transformation worth pursuing.
If you're a leader looking to transform how your organisation governs technology, including AI-augmented architecture review, security, and compliance, that's exactly what the AI Capability Intensive is designed for.
Written by
Jason La Greca
Founder of Teachnology. Building AI that empowers humans, not replaces them.
Connect on LinkedIn