How Much Autonomy Should AI Agents Have?
by Marc de Haas on Mar 16, 2026 7:15:00 AM

This is Part 3 of our Grounded, Guarded, Governed series on building trustworthy agentic systems. Read Part 1 on guardrails here, and Part 2 on observability here.
As AI agents become more autonomous, the question is no longer just what they can do. It is how far they should go without a human stepping in. Teams that scale agentic AI responsibly treat governance as part of the operating model, not an afterthought. Guardrails and observability create structure and visibility. But even the most transparent system still needs clear accountability.
Agentic AI delivers speed and consistency, but not every decision should be automated. Trustworthy systems rely on safety, transparency, and human oversight. This post focuses on the third pillar: defining where automation stops and judgement begins.
In the sections below, we look at where a human must stay in the loop when designing AI systems, which actions should never be fully automated, and how approval flows can align with existing workflows.
Keeping a Human-in-the-Loop
Some actions still need judgement calls. In areas like finance, policy, or compliance, human approval is critical. Agents can prepare or recommend, but human experts decide. It’s a hybrid model: automation for speed and consistency, human oversight for context and accountability. That balance is what keeps these systems reliable.
Where Automation Must Stop
Some actions should never be fully automated. Full stop.
If an action affects end users, money, policy, or contractual obligations, it needs explicit human approval. That includes certain legally binding customer communications, financial changes, access rights, policy decisions, and anything that is expensive or painful to undo. Even if an agent can technically do the work, it should not be the final authority.
Agents are most effective when their level of autonomy matches the level of risk. For low-risk or well-defined tasks, with proper guardrails in place, agents can act directly. They do not need to stop at recommendations. They can execute safely within clearly scoped boundaries.
As the impact increases, so should human involvement. In higher-risk scenarios, agents collect context, surface options, flag risks, and prepare a recommendation, but the final decision remains with a human accountable for the outcome.
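To make that routing concrete, here is a minimal sketch in Python. The `agent` and `approver` objects, their method names, and the two-tier risk model are all illustrative placeholders, not a real framework API.

```python
from dataclasses import dataclass
from enum import Enum


class Risk(Enum):
    LOW = "low"    # well-defined, reversible, covered by guardrails
    HIGH = "high"  # money, policy, access rights, contracts


@dataclass
class Action:
    name: str
    risk: Risk


def handle(action: Action, agent, approver) -> str:
    """Route an action based on its risk tier."""
    if action.risk is Risk.LOW:
        # Low-risk, clearly scoped work: the agent executes directly.
        return agent.execute(action)
    # Higher impact: the agent gathers context and prepares a
    # recommendation, but the final call stays with an accountable human.
    proposal = agent.prepare_proposal(action)
    if approver.approves(proposal):
        return agent.execute(action)
    return "escalated, not executed"
```

The point is not the specific tiers but that the escalation path is explicit in code rather than left to the agent’s discretion.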
Review Checkpoints in Agent Workflows
Human-in-the-loop does not mean slowing everything down. In well-designed systems, agents do most of their work before a human ever needs to look at something. They monitor signals, detect issues, and turn requests into structured proposals with the relevant context already attached.
In managed services setups, this often looks like an agent creating or enriching a ticket. The background is there. The recommendation is there. The risks are already called out. A domain expert reviews it, makes a call, and moves on.
The agent stays in the loop after approval. It can answer follow-up questions, apply changes, or execute the approved action. Humans stay in control without doing repetitive setup work.
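A simplified version of that ticket-enrichment checkpoint might look like the following. The `ticket`, `agent`, and `decision` objects are hypothetical stand-ins for whatever ticketing system and agent framework a team already uses.

```python
from dataclasses import dataclass, field


@dataclass
class Proposal:
    """The structured package a reviewer sees attached to the ticket."""
    summary: str
    context: str                 # background the agent gathered
    recommendation: str          # what the agent suggests doing
    risks: list[str] = field(default_factory=list)


def prepare_ticket(agent, ticket) -> Proposal:
    # The agent does the legwork before a human ever looks:
    # gather context, surface a recommendation, flag risks.
    proposal = Proposal(
        summary=ticket.title,
        context=agent.gather_context(ticket),
        recommendation=agent.recommend(ticket),
        risks=agent.flag_risks(ticket),
    )
    ticket.attach(proposal)
    return proposal


def after_review(agent, proposal, decision) -> None:
    # The agent stays in the loop once a human has decided:
    # it applies the approved change and remains available for follow-ups.
    if decision.approved:
        agent.execute(proposal.recommendation)
```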
Approvals Within Existing Permission Models
There is no universal approval flow that works for every organisation, and trying to force one usually backfires. Some teams approve actions in dashboards. Others rely on ticketing systems like Jira. Faster-moving teams may handle approvals in Slack, with the agent explaining what is being requested and why.
What matters here is that the agent fits into how teams already operate. Approval should feel like a natural step in an existing permission model, not a new process people have to work around. When approvals align with real workflows, they add control without friction.
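One way to keep the approval step pluggable is to define a small interface and implement it once per channel. Everything below is an assumption-level sketch: the class names and the console implementation are invented for illustration, and real Slack or Jira adapters would use those platforms’ own APIs.

```python
from typing import Protocol


class ApprovalChannel(Protocol):
    """Wherever approvals already happen: a dashboard, Jira, Slack."""
    def request_approval(self, summary: str, rationale: str) -> bool: ...


class ConsoleApproval:
    """Simplest concrete channel: a yes/no prompt in a terminal."""
    def request_approval(self, summary: str, rationale: str) -> bool:
        print(f"Requested: {summary}")
        print(f"Why: {rationale}")
        return input("Approve? [y/N] ").strip().lower() == "y"


def run_with_approval(channel: ApprovalChannel, action) -> None:
    # The agent explains what it wants to do and why, in the
    # channel the team already uses, then acts only on a yes.
    if channel.request_approval(action.summary, action.rationale):
        action.execute()
```

Swapping ConsoleApproval for a Slack or Jira adapter changes where the question is asked, not the control model, which is exactly what keeps approvals aligned with existing workflows.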
Controlled Autonomy Scales
Human oversight is often treated as a temporary workaround. In practice, it is what makes autonomy sustainable. When teams are explicit about where judgement is required, systems behave more predictably. Agents move fast where speed is safe and slow down where responsibility matters.
That clarity protects the organisation, the people running the system, and the customers affected by its decisions. Accountability stays visible. Nothing disappears into “I don’t know, the system just decided.” That is what allows autonomy to scale without turning into chaos.
Building for Control, Not Chaos
Agentic AI will continue to evolve rapidly. The question is not how to stop it, but how to design systems that remain governable as autonomy increases.
By combining clear guardrails, strong observability, and deliberate human oversight, organisations can benefit from automation without losing accountability. Trust is not built through technology alone. It is built through transparency, explicit control, and shared responsibility.
Discover More in Our Podcast: Taming the Agent
Hear our full conversation on designing guardrails, observability, and human oversight in the first episode of our podcast, Crystal Clear. Hosts Kevin and Marc discuss how enterprises can move beyond proof-of-concept AI and build agentic systems that are both capable and controlled.
Listen Now