Your AI Agent Doesn’t Need More Intelligence. It Needs the Right Level of Autonomy.

Most organizations deploying AI agents are asking the wrong question.

They ask: How capable is this agent? They should ask: How much autonomy should we give it?

McKinsey reports that 62% of organizations are experimenting with agentic AI. Yet only 6% fully trust AI agents to run core business processes. That gap isn’t a technology problem. It’s an organizational design problem.

After studying this question in depth, drawing on cases from Mayo Clinic, Ocado, Meta, Cruise, and Uber, and on a distinction from 19th-century social theory, I propose a framework that any leadership team can use today.

The four levels of AI agent autonomy

Not every AI agent should have the same degree of freedom. Building on the taxonomy developed by Ross and Taylor (2021) in Harvard Business Review, there are four levels:

๐Ÿ. ๐‡๐ฎ๐ฆ๐š๐ง ๐ข๐ง ๐ญ๐ก๐ž ๐‹๐จ๐จ๐ฉ (HITL) : The agent recommends; a human decides. Think AI assisted radiology at Mayo Clinic. The algorithm flags findings, but the radiologist makes the call.

๐Ÿ. ๐‡๐ฎ๐ฆ๐š๐ง ๐ข๐ง ๐ญ๐ก๐ž ๐‹๐จ๐จ๐ฉ ๐Ÿ๐จ๐ซ ๐„๐ฑ๐œ๐ž๐ฉ๐ญ๐ข๐จ๐ง๐ฌ (HITLFE) : The agent handles routine work autonomously and escalates edge cases. Many banks and telecoms already operate this way for customer service.

๐Ÿ‘. ๐‡๐ฎ๐ฆ๐š๐ง ๐จ๐ง ๐ญ๐ก๐ž ๐‹๐จ๐จ๐ฉ (HOTL) : The agent acts; humans review after the fact and can override. Meta’s content moderation works this way. It’s the only scalable option at that volume but it only works with robust appeals, error measurement, and clear accountability.

๐Ÿ’. ๐‡๐ฎ๐ฆ๐š๐ง ๐จ๐ฎ๐ญ ๐จ๐Ÿ ๐ญ๐ก๐ž ๐‹๐จ๐จ๐ฉ (HOOTL) :.  Full autonomy. Ocado’s robotic fulfillment centers operate this way. Thousands of robots pick, pack, and ship groceries with zero human intervention.

Knowing these levels exist isn’t enough. The real question is: which level fits which situation?

The decision criterion most leaders are missing

Here’s where it gets interesting. I ground the decision in a distinction made by Auguste Comte and Friedrich Engels in the 19th century: the difference between the administration of things and the government of persons.

Administration of things is about managing objects: data, inventory, logistics. Predictable, objective, driven by engineering.

Government of persons is about decisions that affect human lives, rights, and welfare. Complex, subjective, driven by judgment.

The principle is simple:

When an AI agent administers things with no human impact, give it full autonomy (HOOTL). Warehouse robots, network traffic management, inventory optimization.

When an AI agent governs persons with severe consequences, keep humans firmly in control (HITL). Criminal sentencing, medical diagnosis, military targeting.

Everything in between requires calibration. And that is where most organizations struggle.

The dangerous middle ground

The Cruise robotaxi incident in San Francisco is a cautionary tale. A pedestrian was struck by another vehicle and thrown into the robotaxi’s path. The Cruise vehicle then executed an automated maneuver that dragged the pedestrian 20 feet. It was operating as HOOTL in a context that demanded, at minimum, HOTL with clear escalation triggers and override authority.

The lesson is not “avoid autonomy.” It’s to calibrate autonomy based on whether the agent administers things or governs persons, and on the severity of its human externalities.

Five implementation contexts

Crossing these two dimensions yields five contexts, each matched to a recommended autonomy level:

| Domain | Human Impact | Recommended Autonomy | Examples |
|---|---|---|---|
| Administration of Things | Neutral | HOOTL (full autonomy) | Warehouse robotics (Ocado), network traffic management, inventory optimization |
| Administration of Things | Moderate negative externalities | HOTL (human reviews after the fact) | Food quality control, predictive maintenance |
| Administration of Things | Severe negative externalities | HITL (human decides) | Autonomous vehicles (cf. Cruise, Uber), aircraft systems (cf. Boeing MCAS) |
| Government of Persons | Moderate negative externalities | HOTL/HITLFE | Customer service with escalation, visa processing, consumer credit decisions |
| Government of Persons | Severe negative externalities | HITL (human decides) | Medical diagnosis (Mayo Clinic), criminal sentencing, military targeting |
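The matrix above is, in effect, a lookup rule: a (domain, impact) pair determines the recommended autonomy level. A minimal sketch in Python (the keys and labels are my illustrative encoding of the framework, not official terminology):

```python
# Hypothetical encoding of the five implementation contexts as a lookup table.
RECOMMENDED = {
    ("things", "neutral"): "HOOTL",        # e.g., warehouse robotics
    ("things", "moderate"): "HOTL",        # e.g., predictive maintenance
    ("things", "severe"): "HITL",          # e.g., autonomous vehicles
    ("persons", "moderate"): "HOTL/HITLFE",  # e.g., customer service with escalation
    ("persons", "severe"): "HITL",         # e.g., medical diagnosis, sentencing
}

def recommend_autonomy(domain: str, impact: str) -> str:
    """Return the recommended autonomy level for a (domain, impact) context."""
    try:
        return RECOMMENDED[(domain, impact)]
    except KeyError:
        raise ValueError(f"No recommendation for domain={domain!r}, impact={impact!r}")

print(recommend_autonomy("things", "neutral"))  # HOOTL
print(recommend_autonomy("persons", "severe"))  # HITL
```

Note that the table is deliberately sparse: "government of persons with neutral impact" has no row, because decisions about people are assumed to always carry some externality.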

One more thing: autonomy is not static

The four levels can also be stages of maturation. An agent starts as a copilot (HITL), earns more freedom through demonstrated reliability (HITLFE or HOTL), and may eventually reach full autonomy (HOOTL), but only if the implementation context permits it.

Growing reliability does not automatically justify growing autonomy. When human lives are at stake, the ceiling on autonomy is set by the context, not by the technology.
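That last rule reduces to a simple invariant: granted autonomy is the minimum of what the agent has earned and what the context permits. A sketch, using an illustrative 1-to-4 numeric encoding of the levels:

```python
def granted_autonomy(earned_level: int, context_ceiling: int) -> int:
    """Granted autonomy is demonstrated-reliability level capped by the
    ceiling the implementation context permits (1=HITL ... 4=HOOTL)."""
    return min(earned_level, context_ceiling)

# A maximally reliable agent in a severe-impact context stays at HITL (1):
assert granted_autonomy(4, 1) == 1
# A context permitting HOOTL (4) doesn't help an agent that has only earned HITLFE (2):
assert granted_autonomy(2, 4) == 2
```

The asymmetry is the point: reliability can raise `earned_level`, but only a change in the implementation context can raise `context_ceiling`.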

What should your organization do now?

Three immediate steps:

First, audit your current and planned AI agents. Map each one to the five contexts above. You may discover that some agents already have more autonomy than they should.

Second, establish an AI Governance Board. This isn’t optional. As agentic AI scales from dozens to hundreds of agents across your organization, you need a dedicated body, operating under top management authority, with a strategic mission (what autonomy, where), a compliance mission (are we legal?), and a transparency mission (can we prove we’re doing no harm?).

Third, watch for Aron’s warning. Raymond Aron observed that the administration of things can easily become a mode of government of persons. Amazon’s warehouse operations illustrate this: what looks like pure logistics optimization simultaneously functions as worker surveillance. Your AI agents designed to manage things must not become, by design or by drift, instruments for controlling people.

This article draws on a working paper, “How Much Autonomy Should Organizations Build in AI Agents?” If you’d like to explore how this framework applies to your organization’s AI strategy, I work with leadership teams on exactly this question. Details on my website: bouchikhi.pro
