China just did something that should make every enterprise AI leader pay attention.
They released draft legislation specifically addressing AI anthropomorphism: the "Interim Measures for the Administration of Humanised Interactive Services Based on AI." Whilst most Western commentary has either ignored it or dismissed it as authoritarian overreach, there's something genuinely useful buried in this law.
I first encountered this through Dr. Luiza Jarovsky's excellent analysis in her newsletter. Luiza is one of the world's leading voices on AI governance, with over 88,000 subscribers and pioneering work on AI ethics, dark patterns, and manipulation. Her point is sharp: this proposed Chinese law "offers a real-world example of a legal framework that acknowledges AI-related human vulnerabilities and proposes contextual technical measures to prevent AI-anthropomorphism-related harm."
She's right. And I want to take her analysis one step further: what can organisations do today to implement these principles, without waiting for regulation?
This draft law is the most practical, specific, and implementable AI safety framework I've seen from any government.
Not abstract principles. Not vague guidelines. Actual, concrete requirements that any organisation could implement starting today.
I'm not suggesting you adopt Chinese law. I'm suggesting you steal the good ideas and build your own internal framework before regulation forces you to.
What China Got Right
As Luiza notes in her analysis, most AI governance frameworks, including the EU AI Act and various US state laws, are "overly abstract and contextually vague, leaving the door open for AI companies to do the least possible, deploy dark patterns, avoid compliance, and abuse the system."
China's proposed law is different. It includes specific, actionable requirements:
2-hour usage reminders. If a user interacts with an anthropomorphic AI continuously for more than 2 hours, the system must prompt them to take a break.
Clear exit mechanisms. Users must be able to exit AI interactions easily through buttons, keywords, or other means. The AI cannot prevent voluntary exit.
Transparent AI identification. Users must be clearly informed they're interacting with AI, not a human.
Vulnerable population protections. Specific requirements for minors (parental controls, usage limits) and elderly users (emergency contacts, wellbeing monitoring).
Dependency risk warnings. Systems must include "mental health protection, emotional boundary guidance, and dependency risk warning" capabilities.
Prohibited design goals. AI systems cannot be designed with the goal of "replacing social interaction, controlling users' psychology, or inducing addiction."
Lifecycle accountability. Providers are responsible for security and safety throughout the entire lifecycle: design, operation, upgrade, and termination.
Whether or not you agree with every provision, this is a level of specificity that Western frameworks lack entirely. And specificity is what makes governance actually implementable.
Why This Matters for Enterprise AI
You might be thinking: "This is about consumer AI companions like Replika and Character.ai. We're deploying enterprise tools. This doesn't apply to us."
Think again.
Your organisation is almost certainly deploying AI systems that exhibit anthropomorphic characteristics:
- Customer service chatbots with names, personalities, and conversational styles designed to feel human
- Internal AI assistants that employees interact with daily, often for hours at a time
- AI-powered coaching or learning tools that build ongoing relationships with users
- Sales and marketing AI designed to build rapport and influence decisions
- HR and recruitment tools that interact with candidates and employees in human-like ways
The research is clear: humans readily anthropomorphise AI systems, even simple ones. We attribute emotions, intentions, and understanding to chatbots. We form attachments. We trust them in ways that may not be warranted.
This creates real risks:
Overtrust. Employees defer important decisions to AI systems, assuming capabilities that don't exist.
Manipulation vulnerability. Anthropomorphic AI is more effective at extracting personal information and influencing behaviour.
Emotional dependency. Extended interaction with AI systems can create unhealthy attachment patterns.
Expectation mismatch. When AI fails to meet the human-like expectations it creates, the disappointment is amplified.
Judgment erosion. If employees rely on AI for cognitive work without developing evaluation skills, organisational capability atrophies.
These aren't hypothetical risks. They're documented in research and increasingly visible in practice.
The HUMAN Protocol
Based on China's framework and the broader research on AI anthropomorphism, I've developed a practical protocol that any organisation can implement immediately.
I call it the HUMAN Protocol: five categories of safeguards that protect your people and your organisation whilst still enabling AI innovation.
- H - Honesty About AI Identity
- U - Usage Boundaries and Breaks
- M - Mental Health and Wellbeing Safeguards
- A - Accountability Throughout Lifecycle
- N - Navigation and Exit Controls
Let me break down each component with specific, implementable actions.
H - Honesty About AI Identity
The Principle: Users must always know they're interacting with AI, not a human.
Why It Matters: Research shows that humans behave differently when they believe they're talking to another human versus a machine. Deception, even well-intentioned deception, creates trust problems and can amplify negative reactions when the truth emerges.
Implement This Week:
- Audit all AI touchpoints. List every place in your organisation where humans interact with AI systems. Customer service, internal tools, HR, sales, learning platforms, everything.
- Add clear AI identification. Every AI interaction should include an unambiguous statement that the user is interacting with AI. Not buried in terms of service. Visible at the point of interaction.
- Name AI systems appropriately. Avoid human names that create false intimacy. "Alex" or "Sarah" creates different expectations than "Support Assistant" or "Research Tool."
- Disclose capabilities honestly. Don't imply understanding, empathy, or memory that doesn't exist. If the AI doesn't remember previous conversations, say so.
Example Implementation:
"You're chatting with [Company] AI Assistant. I can help with common questions, but I'm an AI system, not a human. I don't remember our previous conversations, and complex issues may need human support."
U - Usage Boundaries and Breaks
The Principle: Extended continuous AI interaction creates risks. Build in natural breaks and boundaries.
Why It Matters: China's law requires a reminder after 2 hours of continuous use. This isn't arbitrary. Research on AI companion apps shows that extended sessions correlate with dependency formation and emotional entanglement. The same dynamics apply to workplace AI.
Implement This Week:
- Track session duration. Add monitoring to understand how long employees are interacting with AI systems in single sessions.
- Implement gentle break reminders. After 90-120 minutes of continuous AI interaction, prompt users to take a break. This isn't paternalistic; it's good cognitive hygiene.
- Set daily usage guidelines. Establish recommended maximum daily AI interaction times for different use cases. Not hard limits, but norms that signal healthy usage.
- Create "human checkpoint" requirements. For certain decision types or after certain durations, require human consultation before proceeding.
Example Implementation:
"You've been working with AI Assistant for 2 hours. Consider taking a short break, or connecting with a colleague to discuss your work."
M - Mental Health and Wellbeing Safeguards
The Principle: AI systems should support human wellbeing, not undermine it.
Why It Matters: China's law prohibits designing AI with goals of "replacing social interaction, controlling users' psychology, or inducing addiction." This is a profound statement about what AI should and shouldn't do. Your organisation should have similar principles.
Implement This Week:
- Define prohibited design patterns. Explicitly ban AI designs intended to maximise engagement at the expense of user wellbeing. No dark patterns. No artificial urgency. No manipulation.
- Add wellbeing check-ins. For AI systems used extensively, periodically prompt users to reflect on whether the tool is serving them well.
- Create escalation pathways. If AI detects signs of distress, frustration, or problematic usage patterns, it should offer human support options.
- Train managers on AI-related wellbeing. Help managers recognise signs that team members may be over-relying on AI or developing unhealthy usage patterns.
- Establish a "no replacement" principle. AI should augment human connection and judgment, not replace it. Build this into your AI design principles.
Example Implementation:
Design principle: "Our AI tools are designed to enhance human capability and judgment, never to replace human connection or create dependency. We will not optimise for engagement metrics at the expense of user wellbeing."
A - Accountability Throughout Lifecycle
The Principle: Someone must be responsible for AI safety at every stage: design, deployment, operation, and retirement.
Why It Matters: China's law requires "security responsibilities throughout the entire lifecycle." Most organisations have no clear ownership of AI safety. This creates gaps where problems emerge but nobody is accountable.
Implement This Week:
- Assign AI safety owners. Every AI system should have a named individual responsible for its safe operation. Not a committee. A person.
- Create lifecycle checkpoints. At each stage (design, pilot, deployment, operation, update, retirement), require explicit safety review and sign-off.
- Establish incident reporting. Create a clear process for reporting AI-related concerns, near-misses, and incidents. Make it psychologically safe to report.
- Conduct regular safety reviews. Quarterly, review each AI system's performance against safety criteria. Are users trusting it appropriately? Are there signs of problematic patterns?
- Plan for termination. If you need to retire an AI system, how will you manage users who have developed reliance on it? Plan this before deployment.
Example Implementation:
"AI System Safety Card: Owner: [Name]. Last safety review: [Date]. Next scheduled review: [Date]. Incident reports (last 90 days): [Number]. Status: [Green/Yellow/Red]."
N - Navigation and Exit Controls
The Principle: Users must always be able to easily disengage from AI interactions and access human support.
Why It Matters: China's law requires "convenient exit methods" and prohibits AI from preventing voluntary exit. This matters because AI systems can create friction that keeps users engaged even when they'd prefer human help.
Implement This Week:
- Add clear exit options. Every AI interaction should have a visible, easy way to end the conversation or switch to human support.
- Remove exit friction. Audit your AI systems for patterns that discourage disengagement. "Are you sure you want to leave?" prompts. Multi-step exit processes. Guilt-inducing language.
- Guarantee human access. Users should always be able to reach a human within a reasonable timeframe. AI should never be the only option for important matters.
- Respect exit decisions. If a user chooses to exit or requests human help, honour that immediately. No persuasion attempts. No "let me try one more thing."
- Track exit patterns. Monitor when and why users disengage from AI. High exit rates or frustrated exits are signals that something needs attention.
Example Implementation:
Every AI interface includes: [End Conversation] button always visible. "Talk to a Human" option prominently displayed. No confirmation dialogues on exit. Immediate handoff when human support requested.
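As a rough sketch of how an exit handler can honour these rules, assume exit and human-support requests are checked before any other processing. The keyword sets and the two callbacks are illustrative assumptions, not a specific framework's API.

```python
# Minimal sketch of an exit handler: immediate handoff when a human is requested,
# immediate end of session on exit, no confirmation dialogue, no persuasion.

EXIT_KEYWORDS = {"exit", "quit", "stop", "end conversation"}
HUMAN_KEYWORDS = ("human", "agent", "talk to a person")

def handle_user_message(message, end_session, handoff_to_human):
    """Honour exit and human-support requests before any other processing."""
    lowered = message.strip().lower()
    if any(keyword in lowered for keyword in HUMAN_KEYWORDS):
        handoff_to_human()   # no "let me try one more thing"
        return True
    if lowered in EXIT_KEYWORDS:
        end_session()        # no confirmation dialogue, no guilt-inducing copy
        return True
    return False

# Example usage with placeholder callbacks.
handle_user_message("I want to talk to a person",
                    end_session=lambda: print("Session ended."),
                    handoff_to_human=lambda: print("Connecting you with a human now."))
```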
The 5-Day Implementation Sprint
Here's how to implement the HUMAN Protocol in your organisation this week:
Day 1: Audit
- List all AI systems with human interaction (a minimal inventory sketch follows this sprint)
- Identify which have anthropomorphic characteristics
- Note current safeguards (or lack thereof)
Day 2: Honesty
- Draft AI identification language for each system
- Review AI naming conventions
- Create capability disclosure standards
Day 3: Usage & Navigation
- Define session duration thresholds
- Design break reminder mechanisms
- Audit exit friction and plan removal
- Ensure human escalation paths exist
Day 4: Mental Health & Accountability
- Draft prohibited design patterns policy
- Assign AI safety owners for each system
- Create incident reporting process
- Schedule first quarterly safety reviews
Day 5: Communicate & Launch
- Brief leadership on the HUMAN Protocol
- Communicate changes to employees
- Set implementation timeline for technical changes
- Schedule 30-day review to assess progress
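The Day 1 audit doesn't need special tooling. Here is a minimal sketch of what the inventory might look like as structured data; the system names, fields, and gap check are illustrative assumptions you would adapt to your own estate.

```python
# Minimal sketch of a Day 1 audit inventory: every AI touchpoint, whether it presents
# as human-like, and what safeguards exist today. System names are illustrative.

ai_inventory = [
    {"system": "Customer support chatbot", "anthropomorphic": True,
     "safeguards": ["AI disclosure banner"], "owner": None},
    {"system": "Internal research assistant", "anthropomorphic": True,
     "safeguards": [], "owner": None},
    {"system": "Invoice OCR pipeline", "anthropomorphic": False,
     "safeguards": ["quarterly accuracy review"], "owner": "Finance Ops"},
]

# Surface the gaps: human-like systems with no safeguards or no named owner.
for entry in ai_inventory:
    if entry["anthropomorphic"] and (not entry["safeguards"] or entry["owner"] is None):
        print(f"Gap: {entry['system']} is missing safeguards or a named owner")
```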
Why Do This Now?
Three reasons:
1. Regulation is coming. China's law is a preview. The EU AI Act is already in force, with its requirements for high-risk AI phasing in. US states are moving. Australia is developing frameworks. If you wait for regulation to force compliance, you'll be scrambling. If you build capability now, you'll be ahead.
2. Your people deserve protection. The risks of AI anthropomorphism are real. Overtrust, manipulation, dependency, judgment erosion. These affect your employees, your customers, and your organisation's capability. The HUMAN Protocol isn't bureaucracy; it's care.
3. This is what capable organisations do. The theme of everything we've been discussing: capable organisations build judgment, accountability, and the ability to evaluate what's actually happening. They don't wait for others to tell them what to do. They develop internal capability to navigate complex situations.
Implementing the HUMAN Protocol is an exercise in exactly this kind of capability building. You're not just following rules. You're developing the judgment to deploy AI responsibly.
The Bigger Picture
China's AI anthropomorphism law isn't perfect. It includes provisions about "core socialist values" and national security that won't translate to other contexts. It's embedded in a political system very different from Western democracies.
But it also represents something important: a government taking AI's psychological impact on humans seriously enough to create specific, actionable requirements.
Most Western AI governance is still stuck in abstraction. "Be transparent." "Ensure fairness." "Maintain human oversight." These are fine as principles, but they don't tell you what to actually do Monday morning.
The HUMAN Protocol does.
Honesty about AI identity. Usage boundaries. Mental health safeguards. Accountability throughout lifecycle. Navigation and exit controls.
Five categories. Specific actions. Implementable this week.
This is what building organisational capability looks like. Not waiting for someone else to tell you what to do. Developing the judgment to do what's right, and the discipline to actually do it.
Your AI systems are already deployed. Your people are already interacting with them. The question isn't whether to govern this responsibly.
The question is whether you'll lead or be led.
For deeper analysis of AI governance, regulation, and the legal/ethical implications of AI, I highly recommend subscribing to Luiza Jarovsky's newsletter. Her work on AI anthropomorphism, dark patterns, and manipulation is essential reading for anyone deploying AI in their organisation.
If you're a leader who wants to build real AI capability in your organisation, including the judgment to evaluate tools, the frameworks to deploy responsibly, and the ability to lead AI initiatives rather than just approve them, that's what the AI Capability Intensive is for.
Written by
Jason La Greca
Founder of Teachnology. Building AI that empowers humans, not replaces them.
Connect on LinkedIn