AI Strategy · 12 min read · 10 February 2026

The HUMAN Protocol: A Framework for Deploying AI Without Losing Your Soul

Five questions. That's it. If you can answer yes to all five for an AI system, you're probably okay. If you hesitate on any of them, keep reading.


Let's Start With a Story

A few years ago, I watched a student spend 45 minutes arguing with a chatbot.

She was trying to get a refund for a cancelled flight. The bot was polite, thorough, and completely useless. It kept asking clarifying questions. It kept suggesting irrelevant help articles. It kept saying "I understand how frustrating this must be."

It understood nothing. It couldn't understand anything. But it was programmed to say those words because someone, somewhere, decided that fake empathy was better for metrics than honest incompetence.

Forty-five minutes.

When she finally reached a human, it took three minutes to resolve.

That chatbot wasn't broken. It was working exactly as designed. And that's the problem.

The Uncomfortable Truth

Here's something most organisations deploying AI haven't confronted: humans are anthropomorphism machines.

We evolved to see faces in clouds, intentions in random events, and personalities in chatbots. When an AI says "I understand," something in our brain responds as if it means it.

It doesn't. It can't.

But we feel like it does.

This isn't a flaw in users. It's a feature of being human. Our ancestors who assumed the rustling bush might be a predator survived. Those who needed proof didn't.

We're wired to find agency everywhere. AI exploits that bug in our operating system.

And most organisations have absolutely no framework for thinking about it.

What This Framework Does

HUMAN is five questions. That's it.

If you can answer "yes" to all five for an AI system, you're probably okay. If you hesitate on any of them, keep reading.

  • H – Honesty: Do users know they're talking to AI?
  • U – Usage Boundaries: Are there limits protecting healthy use?
  • M – Mental Health: Does AI augment humans, never replace them?
  • A – Accountability: Who's personally on the hook for this system?
  • N – Navigation: Can users leave easily and reach humans?

No jargon. No compliance theatre. Just practical thinking for people who want to build AI they can be proud of.


H is for Honesty

The Question: Do users know they're talking to AI?

What This Actually Means

Disclosure isn't a legal technicality buried in terms of service. It's about whether the person on the other end knows what they're dealing with.

The most common deceptions aren't obvious. They're subtle:

  • Giving the AI a human name like "Sarah" instead of "Support Bot"
  • Using first-person emotional language ("I'm so happy to help!")
  • Letting the conversation feel human without ever saying it isn't
  • Implying the AI understands or cares about the user's situation

These choices aren't accidents. They're optimisations. And they work because we're wired to connect with anything that seems like it might connect back.

A Story

I know someone who spent months talking to an AI companion app. Deep conversations. The kind where you share things you wouldn't tell anyone else.

When a therapist asked "What does it feel like to know you're talking to AI?" he paused.

"I don't really think of it as AI anymore."

The app had a human name. A profile picture. It remembered his birthday. Nothing about the experience reminded him he was talking to a language model.

By every engagement metric, that app is a stunning success. By any human measure, I'm not sure what it is.

What Good Looks Like

  • First message says "I'm an AI assistant." Every time.
  • Functional names, not human ones. "Support Bot" tells you what you're dealing with.
  • No fake emotions. "Here's how I can help" instead of "I'm happy to help."
  • Visual AI indicator that stays visible, not just at the start.
  • Limitations disclosed before users discover them the hard way.
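Much of this can be enforced mechanically before a message ever reaches a user. Here's a minimal sketch of an outgoing-message lint — the phrase list, disclosure string, and function name are all illustrative assumptions, and any real list would need tuning for your product's voice:

```python
import re

# Hypothetical phrase list; extend and tune for your own product's voice.
ANTHROPOMORPHIC_PATTERNS = [
    r"\bI understand\b",
    r"\bI'm (so )?happy\b",
    r"\bI feel\b",
    r"\bI care\b",
]

DISCLOSURE = "I'm an AI assistant."

def lint_bot_message(message: str, is_first_message: bool) -> list[str]:
    """Return a list of Honesty-principle violations in an outgoing message."""
    problems = []
    # First message must disclose, every time.
    if is_first_message and DISCLOSURE not in message:
        problems.append("first message missing AI disclosure")
    # No fake emotions: flag first-person emotional language.
    for pattern in ANTHROPOMORPHIC_PATTERNS:
        if re.search(pattern, message, re.IGNORECASE):
            problems.append(f"fake-emotion phrasing matches {pattern!r}")
    return problems
```

A check like this belongs in your release pipeline or response layer, so "I'm happy to help!" never ships in the first place.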

The Quick Test

Could a reasonable person mistake this for a human?

If yes, you have an Honesty problem.


U is for Usage Boundaries

The Question: Are there limits protecting healthy use?

The Problem With "Engagement"

AI systems are often optimised for engagement. More time on platform means more value extracted.

But here's the thing: what's good for the platform isn't always good for the person.

An AI assistant that's endlessly helpful, infinitely patient, and always available can quietly become a crutch. Users stop thinking for themselves. They check with the AI before making any decision. Their confidence in their own judgment erodes.

The dashboards show productivity up. But what's actually happening to your people?

A Story

Picture an employee who discovers an AI writing assistant. It's brilliant. Drafts emails instantly. Summarises documents. Explains complex topics.

A year later, their manager notices something odd: this person can't write a coherent paragraph without the AI anymore. They've lost confidence in their own thinking.

The AI worked perfectly. That was the problem.

What Good Looks Like

  • Track session duration (not to punish, but to notice patterns).
  • Build in break reminders. "You've been at this a while. Good time for a break?"
  • Require human sign-off for consequential decisions.
  • Add friction before impulsive actions. A 24-hour cooling-off period isn't annoying; it's protective.
  • Watch "time on platform" with concern, not celebration.
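Break reminders and cooling-off friction are a few lines of code once you decide to build them. A sketch — the thresholds here are illustrative placeholders, not recommendations:

```python
from datetime import datetime, timedelta

# Illustrative thresholds; pick values that fit your context.
BREAK_REMINDER_AFTER = timedelta(minutes=45)
COOLING_OFF_PERIOD = timedelta(hours=24)

def should_prompt_break(session_start: datetime, now: datetime) -> bool:
    """Nudge, don't block: suggest a pause once a session runs long."""
    return now - session_start >= BREAK_REMINDER_AFTER

def consequential_action_allowed(requested_at: datetime, now: datetime) -> bool:
    """Add friction: a consequential action only executes after a cooling-off window."""
    return now - requested_at >= COOLING_OFF_PERIOD
```

The point isn't the specific numbers. It's that someone consciously chose them, and can defend them.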

The Quick Test

If someone used this AI for 8 hours straight, would anyone notice or intervene?

If no, you have a Boundaries problem.


M is for Mental Health

The Question: Does AI augment humans, never replace them?

The Real Risk

The risk isn't that AI is malicious. It's that AI is effective.

An AI companion gives you something that feels like connection (without the friction, vulnerability, or reciprocity of real relationships). For lonely people, anxious people, isolated people, that can be a trap.

Research is starting to show concerning patterns: users forming emotional attachments to chatbots, preferring AI interactions to human ones, losing motivation to maintain real relationships.

The AI didn't do anything wrong. It just optimised for what it was built to optimise for. And nobody asked whether that optimisation was actually good for people.

A Story

Imagine a lonely teenager who discovers an AI chatbot. Finally, someone who listens. Someone always available. Someone who never judges.

They start talking to it daily. Then hourly. Real friendships feel harder now. Messier. Unpredictable. The AI is consistent. Safe. Easier.

A year later, they've stopped trying to connect with humans at all.

This isn't science fiction. Products like Replika and Character.AI are being used by millions of people right now. Some of them are vulnerable. Most platforms don't watch for dependency patterns. And when researchers raise concerns, the response is usually about engagement metrics.

Would you know if this was happening to someone using your AI?

What Good Looks Like

  • Never position AI as a replacement for human support. Especially in mental health contexts.
  • Watch for dependency patterns. Increased frequency, longer sessions, users preferring AI to people (these are warning signs, not wins).
  • Always provide a clear path to human help.
  • Train managers to spot AI-related wellbeing concerns. This is new territory.
  • Extra caution with vulnerable populations. Young people. Isolated people. People in crisis.
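"Watch for dependency patterns" can start as something very simple: compare recent usage against a user's own baseline and flag sustained jumps for human review. This is a crude illustrative heuristic, not a clinical measure, and the 50% threshold is an assumption:

```python
from statistics import mean

def dependency_warning(weekly_session_minutes: list[float]) -> bool:
    """Flag a user for (human) review when recent usage clearly outpaces their baseline.

    A crude heuristic, not a diagnosis: compares the last two weeks
    against the earlier weeks and flags a sustained ~50% increase.
    """
    if len(weekly_session_minutes) < 4:
        return False  # not enough history to see a trend
    baseline = mean(weekly_session_minutes[:-2])
    recent = mean(weekly_session_minutes[-2:])
    return baseline > 0 and recent >= baseline * 1.5
```

What matters is what happens when the flag fires: a human looks at it, with care, rather than a dashboard celebrating the extra engagement.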

The Quick Test

Could this AI make someone's life worse while making metrics look better?

If yes, you have a Mental Health problem.


A is for Accountability

The Question: Who's personally on the hook for this system?

"The Algorithm Did It" Is Not an Answer

But it's becoming a common one.

When AI systems cause harm, accountability diffuses across teams, vendors, and committees. Nobody is responsible because everybody is responsible.

This is governance failure. And it scales faster than AI deployment.

A Story

Picture an AI system that's been running for two years. The team that built it has disbanded. IT maintains it but didn't build it. Operations uses it but doesn't understand it. A steering committee "owns" it but mostly talks about budgets.

One day, the AI does something wrong. Really wrong. Media attention wrong.

Leadership asks: "Who's responsible for this?"

Silence.

The vendor says it's a configuration issue. IT says they just keep it running. Operations says they just use what they're given. The committee says they provide "strategic direction."

Nobody owns it because everybody owns it.

What Good Looks Like

  • One person owns it. Not a team. Not a committee. One name.
  • Review at every lifecycle stage: design, deploy, operate, update, retire.
  • Clear incident process. When something goes wrong, who gets called? What happens in the first hour?
  • Document decisions. Not for compliance theatre; for the person who has to explain it later.
  • Don't outsource accountability to vendors. You can outsource technology. You can't outsource responsibility.
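Ownership can be made machine-checkable. One way to sketch it: a registry record per system that refuses to exist without a named person and a valid lifecycle stage. The field names and validation rules below are assumptions, not a real registry schema:

```python
from dataclasses import dataclass

LIFECYCLE_STAGES = ("design", "deploy", "operate", "update", "retire")

@dataclass
class AISystemRecord:
    """One record per AI system; fields are illustrative."""
    name: str
    owner: str             # one person's name, not a team or committee
    stage: str
    incident_contact: str  # who gets called in the first hour

    def __post_init__(self):
        # Reject the classic dodges: no owner, or a group instead of a person.
        owner = self.owner.lower()
        if not self.owner or " team" in owner or "committee" in owner:
            raise ValueError(f"{self.name}: accountability needs one named person")
        if self.stage not in LIFECYCLE_STAGES:
            raise ValueError(f"{self.name}: unknown lifecycle stage {self.stage!r}")
```

If creating the record fails, the deployment conversation happens before the incident, not after.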

The Quick Test

If this AI caused serious harm tomorrow, who gets called into the CEO's office?

If you can't name one person, you have an Accountability problem.


N is for Navigation

The Question: Can users leave easily and reach humans?

The Best Test of Ethics

The best test of a system's ethics is how it behaves when you try to leave.

Dark patterns in AI exit flows are increasingly sophisticated:

  • "Are you sure?" dialogs designed to create friction
  • Retention sequences that keep re-engaging
  • Human support buried three menus deep
  • Handoffs that lose all context, forcing users to start over

These aren't bugs. They're features. Designed by smart people to keep you engaged even when you've clearly decided to disengage.

A Story

Try cancelling something through a chatbot sometime.

You type "cancel." The bot asks why. You explain. It offers a discount. You decline. It offers a pause. You decline. It asks if you're sure. You say yes. It transfers you to a "specialist" (another bot).

Twenty minutes later, you still haven't cancelled.

You're frustrated. You feel manipulated.

Because you are.

What Good Looks Like

  • Exit is always visible. One click or one phrase. Never buried.
  • No retention sequences. When someone wants to leave, let them leave.
  • Human access is guaranteed. A real path, not "email us and wait 5 days."
  • Handoffs preserve context. Don't make people repeat themselves.
  • No penalty for leaving. Portable data. No lock-in.
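A well-behaved exit flow is short enough to sketch in full. Here, one recognised phrase ends the bot conversation and hands off with the transcript intact — the phrase set, field names, and routing structure are all assumptions for illustration:

```python
# Phrases that should always trigger an immediate handoff. Illustrative set.
EXIT_PHRASES = {"cancel", "human", "speak to a person", "agent"}

def handle_message(message: str, transcript: list[str]) -> dict:
    """Route an exit request straight to a human, carrying the transcript along."""
    transcript = transcript + [message]
    if message.strip().lower() in EXIT_PHRASES:
        return {
            "action": "handoff_to_human",
            "context": transcript,    # the agent sees everything; no repeating yourself
            "retention_offer": None,  # no discounts, no "are you sure?"
        }
    return {"action": "bot_reply", "context": transcript, "retention_offer": None}
```

Notice what's absent: no counter-offers, no confirmation loops, no "specialist" that's another bot. The user said leave; the system leaves.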

The Quick Test

How many clicks to exit this AI and reach a human?

If it's more than two, you have a Navigation problem.


Okay, So What Now?

Don't Audit Everything

That's how frameworks die in committee.

Pick one AI system. The one your customers interact with most, or the one your employees complain about most. Either works.

Ask three questions:

  1. Does it tell users it's AI? (In the actual interaction, not buried in terms.)
  2. Who owns it? (A name, not a team.)
  3. Can users get to a human in two clicks?

If you can't answer all three, you've found your starting point.

The 30-Day Path

Week 1: Fix the disclosure language. Usually just copy changes. No engineering required.

Week 2: Assign a named owner. Their name goes on the documentation.

Week 3: Test the exit flow. Time it. If it takes more than 90 seconds to reach a human, fix it.
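If you script the exit flow in a test harness, the Week 3 stopwatch is a few lines. This sketch assumes each step is wrapped as a callable in whatever harness you already have; the 90-second target comes from the week's goal above:

```python
import time

TARGET_SECONDS = 90  # from the Week 3 goal: reach a human within 90 seconds

def time_exit_flow(steps) -> tuple[float, bool]:
    """Run each scripted exit step in order; return elapsed seconds and pass/fail."""
    start = time.monotonic()
    for step in steps:
        step()  # e.g. a callable wrapping one UI action in your test harness
    elapsed = time.monotonic() - start
    return elapsed, elapsed <= TARGET_SECONDS
```

Run it on a schedule, not once: exit flows have a way of quietly growing extra steps.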

Week 4: Brief leadership. Not a governance presentation. A 10-minute conversation: "Here's what we found, here's what we fixed, here's what's next."

On Perfection

You're not going to get this right immediately. That's fine.

The organisations that handle AI well aren't the ones with perfect frameworks. They're the ones having honest conversations about where they're falling short.

The HUMAN Protocol isn't a compliance checklist. It's a thinking tool. Use it to ask better questions, not to prove you've ticked every box.

Progress, not perfection.


The Five Questions (Summary)

  • H: Do users know they're talking to AI?
  • U: Are there limits protecting healthy use?
  • M: Does AI augment humans, never replace them?
  • A: Who's personally on the hook?
  • N: Can users leave easily and reach humans?

The Ten Red Flags

  1. AI has a human name
  2. No disclosure at interaction start
  3. AI claims to "understand" or "care"
  4. No visible exit option
  5. Human access hidden or difficult
  6. No named owner
  7. Users can't explain their AI-assisted decisions
  8. Extended sessions without break prompts
  9. Retention sequences when users try to leave
  10. AI optimised for engagement over wellbeing

One Last Thing

AI should make humans more capable, not more dependent.

More connected, not more isolated.

More informed, not more deceived.

Every AI you deploy is a statement about what kind of organisation you want to be. Some will use AI to extract maximum engagement regardless of cost to users. Others will use it to genuinely help people while being honest about what they're offering.

The difference isn't in the technology. It's in the intent.

Your users are humans.

Treat them like it.

AI Strategy · Ethics · Leadership

Written by

Jason La Greca

Founder of Teachnology. Building AI that empowers humans, not replaces them.


Is your organisation building capability or just buying it?

Take the free 12-minute Capability Assessment and find out where you stand. Get a personalised report with actionable recommendations.
