AI Strategy · 5 min read · 29 January 2026

Why the Fastest-Growing AI Company Stopped Writing Rules

Anthropic tried governing AI with rules. It didn't work. Their new constitution teaches Claude why things matter, not just what to do. The result? 32% of enterprise LLM usage. Your organisation's AI governance might be learning the wrong lesson.


The most interesting thing happening in AI right now isn't a new model.

It's a philosophy.

Last month, Anthropic published an 80-page document they call "Claude's Constitution." While every other AI company is shipping features and running Super Bowl ads, Anthropic's lead philosopher (yes, they have one) was writing what amounts to an ethical framework for how their AI should think.

Not what it should do. How it should think.

The difference matters more than most people realise.

Rules vs Judgement

Most organisations govern AI the same way they govern everything else: with rules.

"Don't do X. Always do Y. If Z happens, escalate to committee."

Anthropic tried that. It didn't work. Their first constitution was a collection of rules borrowed from existing documents: the Universal Declaration of Human Rights, Apple's terms of service, various anti-harm guidelines. Follow these rules, Claude.

The 2026 version is completely different. Instead of rules, it teaches Claude why certain things matter. Instead of "don't help with harmful requests," it explains the reasoning behind harm prevention and asks Claude to exercise independent judgement.
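
To see the difference in miniature, here's a hypothetical sketch in Python. This is my illustration, not Anthropic's implementation: the first gate pattern-matches a blocklist; the second hands the model the reasoning and asks for a judgement.

```python
# Illustrative contrast only -- hypothetical code, not Anthropic's system.

# Style 1: rules. Brittle -- fails on anything the rule-writer didn't foresee.
BLOCKED_PHRASES = ["synthesise a toxin", "bypass the alarm"]

def rules_based_allow(request: str) -> bool:
    """Allow unless the request matches a known-bad pattern."""
    return not any(phrase in request.lower() for phrase in BLOCKED_PHRASES)

# Style 2: principles. The model gets the *why* and exercises judgement.
PRINCIPLE = (
    "Helping is harmful when it materially increases someone's ability "
    "to hurt others. Weigh that risk against the request's legitimate "
    "uses, decide, and explain your reasoning."
)

def principle_based_decision(model, request: str) -> str:
    """Ask the model to reason from the principle rather than match patterns.
    `model.complete` is a stand-in for whatever LLM client you use."""
    return model.complete(f"{PRINCIPLE}\n\nRequest: {request}\n\nDecision:")
```

Paraphrase the request slightly and the first gate waves it through. The second still has the principle to reason from.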

The result? Claude now holds 32% of enterprise LLM usage. Enterprises are choosing judgement over rules.

This Isn't Just About AI

I've spent the last decade watching organisations try to govern technology with rules. Architecture review boards. Change advisory boards. AI ethics committees. Security approval workflows.

Every single one follows the same pattern: write rules, enforce compliance, wonder why nothing gets built.

The organisations that actually build capability do something different. They create conditions. Guardrails, not gates. Boundaries within which people (and now AI) can exercise judgement.

Anthropic just proved this at the frontier of AI. Their constitutional approach outperforms rules-based governance not because it's more permissive, but because it's more robust.

When you teach why, people (and systems) handle novel situations correctly. When you only teach what, they fail the moment something unexpected happens.

The Parallel to Your Organisation

If you're running an AI governance framework built on rules and approval processes, you're building the 2023 version of Anthropic's constitution. The one they replaced.

The question isn't whether you have rules. It's whether the people building with AI in your organisation understand why those boundaries exist.

Because the ones who understand "why" will stay within the guardrails even when nobody's watching. The ones who only know "what" will route around your governance the moment it slows them down.

Anthropic bet the company on this philosophy. The data says they were right.

And they're not alone. This week, Google launched a Model Context Protocol (MCP) server that gives AI agents standardised access to their developer documentation. Not a permission system. A protocol with clear boundaries. Any agent that speaks the protocol can access the knowledge.
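
To show how deliberately boring that access is, here's a minimal client sketch using the official Python MCP SDK (the `mcp` package). The server command and tool name below are placeholders I've invented, not Google's actual endpoints.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command -- substitute the real server's invocation.
server = StdioServerParameters(command="npx", args=["-y", "example-docs-mcp-server"])

async def main() -> None:
    # Any client that speaks the protocol gets the same view of the tools.
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print([tool.name for tool in tools.tools])
            # Call a (hypothetical) documentation-search tool.
            result = await session.call_tool("search_docs", {"query": "auth"})
            print(result.content)

asyncio.run(main())
```

No approval queue, no per-agent permissions. The boundary is the protocol itself.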

Anthropic built the philosophy. Google built the infrastructure. Both chose guardrails over gates.

What's your organisation betting on?


I write about building organisational AI capability. If your governance framework feels more like a gate than a guardrail, that's worth examining.

AI Strategy · Governance · Leadership · Enterprise

Written by Jason La Greca

Founder of Teachnology. Building AI that empowers humans, not replaces them.

Connect on LinkedIn

Is your organisation building capability or just buying it?

Take the free 12-minute Capability Assessment and find out where you stand. Get a personalised report with actionable recommendations.
