A No-Nonsense Guide to Governing AI Agents in Your Organisation
The year is 2010 and the AWS cloud is all the rage:
- Everyone says they're doing it,
- Few really are,
- And those that are, are not doing it well!
Now let's fast-forward to the year 2026 and it's the same story, but with AI agents...
I had a conversation recently with a security engineer who casually mentioned their development team had deployed "a few AI agents" to handle code reviews and ticket triage. When I asked how many, he said "maybe twelve." When I asked what access they had, he went quiet. When I asked who was responsible for them, he went quieter.
This is the state of agentic AI governance in most organisations right now. And honestly? It reminds me of the early days of cloud adoption. Everyone was spinning up instances, nobody was tracking them, and security was playing catch-up for years. Except this time, the things we're deploying can make autonomous decisions. So, you know, no pressure.
What are AI agents and why should you care?
Let's cut through the hype for a moment. An AI agent, in practical terms, is a piece of software that can take actions on behalf of a user or a process without someone explicitly telling it what to do at each step. It observes, decides, and acts. That might be triaging support tickets, writing and deploying code, processing invoices, or interacting with other agents.
The difference between an AI agent and a traditional automation script is judgment. A script does what you told it. An agent decides what to do based on context. That distinction matters enormously from a security perspective because it means you can't predict exactly what it will do in every situation.
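That distinction can be sketched in a few lines of toy code. Everything here is illustrative (the function names, the decision rules, the "context" dictionary are all made up for the example); the point is only that the script's output is fixed by its input, while the agent's output also depends on what it observes.

```python
def script_triage(ticket: dict) -> str:
    """A script: one hard-coded rule, fully predictable from the input."""
    return "escalate" if ticket["priority"] == "high" else "queue"

def agent_triage(ticket: dict, context: dict) -> str:
    """An 'agent': picks among several actions based on observed context.
    The toy if/else logic stands in for a model's judgment."""
    if context.get("similar_incidents", 0) > 3:
        return "escalate"
    if ticket.get("priority") == "low" and context.get("known_duplicate"):
        return "auto_close"
    return "ask_human" if context.get("confidence", 1.0) < 0.5 else "queue"

ticket = {"priority": "low"}
print(script_triage(ticket))                            # always "queue"
print(agent_triage(ticket, {"known_duplicate": True}))  # "auto_close" here,
                                                        # something else in
                                                        # a different context
```

The security implication sits in that second function: you cannot enumerate its behaviour by reading the ticket schema, which is exactly why the controls below matter.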
Gartner reckons 33% of enterprise applications will include agentic AI by 2028. From what I'm seeing on the ground, that feels conservative.
The governance problem in plain English
Here's the fundamental issue: we've spent decades building identity and access management frameworks for humans. We know how to onboard a person, give them appropriate access, monitor what they do, and revoke their access when they leave. It's not perfect, but there's a playbook.
For AI agents? Most organisations are winging it. And the risks are real:
- Privilege creep on steroids. An agent deployed to "help with admin tasks" gradually accumulates access to systems it was never intended to touch. Unlike a human, it won't question why it suddenly has access to the finance database.
- Shadow agents. Teams deploy agents without telling security. Sound familiar? It's shadow IT all over again, but with autonomous decision-making capabilities.
- Emergent behaviour. When multiple agents interact, they can produce outcomes nobody predicted or intended. This isn't science fiction; it's happening now in organisations with interconnected automation.
- Accountability gaps. When an agent makes a decision that causes a breach or a compliance violation, who's responsible? The developer who built it? The team that deployed it? The vendor who provided the model?
A practical governance framework
I'm not going to give you a 47-page policy document. Instead, here are the five things you actually need to get right, in order of priority.
1. Treat every agent as an identity
This is the non-negotiable starting point. Every AI agent in your environment needs a formal identity, just like every employee and every service account. That means:
- A unique identifier tied to a specific purpose
- A documented owner (a human, not another agent)
- A classification of what it's allowed to do and what data it can access
- A lifecycle: creation, review, modification, decommissioning
If you can't tell me how many AI agents are running in your environment right now, who owns them, and what access they have, then you effectively have rogue, undocumented employees inside your company.
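A minimal registry entry captures those four bullets. This is a sketch, not a standard schema: the field names, classifications, and the in-memory dict are all assumptions for illustration.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AgentRecord:
    agent_id: str             # unique identifier tied to a specific purpose
    purpose: str
    owner: str                # a human, not another agent
    allowed_actions: set      # e.g. {"tickets:read", "tickets:update"}
    data_classification: str  # e.g. "internal", "confidential"
    created: date
    last_reviewed: Optional[date] = None
    status: str = "active"    # active | suspended | decommissioned

registry: dict[str, AgentRecord] = {}

def register(agent: AgentRecord) -> None:
    # No duplicate identities: one record per agent, full stop.
    if agent.agent_id in registry:
        raise ValueError(f"duplicate agent id: {agent.agent_id}")
    registry[agent.agent_id] = agent

register(AgentRecord("triage-bot-01", "ticket triage", "j.smith",
                     {"tickets:read", "tickets:update"}, "internal",
                     date(2026, 1, 15)))
```

In practice this lives in your CMDB or IAM system, not a Python dict, but if you can't populate even this structure for every agent you run, that's the gap to close first.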
2. Scope privileges like you mean it
Least privilege isn't a new concept, but applying it to AI agents requires a different mindset. Humans have roles that are relatively stable. Agents have tasks that can be dynamic and context-dependent.
The approach I'd recommend:
- Start with zero access. Agents get nothing by default.
- Define action boundaries, not just data boundaries. It's not enough to say an agent can access the CRM. You need to specify what operations it can perform. Can it read? Write? Delete? Export?
- Set blast radius limits. What's the maximum damage this agent could do in a given time window? If it goes rogue, what's the worst-case scenario? Design constraints around that.
- Implement rate limiting. An agent that suddenly starts making 10,000 API calls when it normally makes 100 is either broken or compromised. Either way, you want to catch it.
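Those four controls compose into a single authorisation gate. Below is a hedged sketch assuming per-agent grant sets and a sliding-window rate limit; the grant strings, limits, and storage are placeholders, and a real deployment would enforce this in a policy engine or API gateway rather than application code.

```python
import time
from collections import deque

# Zero access by default: an agent not listed here can do nothing.
GRANTS: dict[str, set] = {"triage-bot-01": {"tickets:read", "tickets:update"}}

RATE_LIMIT = 100        # max actions per window (illustrative)
WINDOW_SECONDS = 60
_calls: dict[str, deque] = {}

def authorise(agent_id: str, action: str) -> bool:
    # Action boundary: the grant names the operation, not just the system.
    if action not in GRANTS.get(agent_id, set()):
        return False
    # Rate limit: drop timestamps outside the window, then count.
    window = _calls.setdefault(agent_id, deque())
    now = time.monotonic()
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= RATE_LIMIT:
        return False  # broken or compromised; either way, stop it here
    window.append(now)
    return True

print(authorise("triage-bot-01", "tickets:read"))    # True
print(authorise("triage-bot-01", "tickets:delete"))  # False: never granted
print(authorise("unknown-agent", "tickets:read"))    # False: zero by default
```

Note that the grant is `tickets:update`, not "access to the ticketing system": the operation is part of the permission, which is what makes blast radius something you can reason about.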
3. Build observability from day one
You cannot govern what you cannot see. This sounds obvious, but most organisations deploying AI agents have minimal logging of what those agents are actually doing.
You need:
- Decision logging. Not just what the agent did, but why it decided to do it. The reasoning chain matters for incident investigation and compliance.
- Interaction tracking. When agents communicate with other agents or systems, log those interactions. Agent-to-agent conversations can produce unexpected outcomes.
- Anomaly detection. Baseline what normal behaviour looks like for each agent and alert on deviations. This is your canary in the coal mine.
- Regular reviews. Just as you'd review user access periodically, review agent access and behaviour. Are they still doing what they were deployed to do? Have they drifted?
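The first and third bullets can be sketched together. The log format and the three-sigma threshold below are assumptions chosen for the example, not a standard; the point is that the record carries the reasoning, not just the action, and that anomaly detection needs a baseline before it can flag anything.

```python
import statistics
from datetime import datetime, timezone

DECISION_LOG: list[dict] = []

def log_decision(agent_id: str, action: str, target: str, reasoning: str) -> None:
    # Record why, not just what: the reasoning chain is what you'll
    # need during incident investigation.
    DECISION_LOG.append({
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "action": action,
        "target": target,
        "reasoning": reasoning,
    })

def is_anomalous(calls_today: int, history: list[int]) -> bool:
    """Crude baseline check: flag call volumes far outside the norm."""
    if len(history) < 5:
        return False  # not enough baseline to judge yet
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    return abs(calls_today - mean) > 3 * stdev

log_decision("triage-bot-01", "escalate", "TICKET-4521",
             "3 similar P1 incidents in the last 24h")
print(is_anomalous(10_000, [95, 102, 98, 110, 101]))  # True: 100x baseline
```

An agent that normally makes ~100 calls a day suddenly making 10,000 trips the check immediately; that's the canary from the bullet above.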
4. Establish an agent control plane
Think of this as the central nervous system for governing AI agents across your organisation. It doesn't have to be a single product; it's more of a capability. The control plane should give you:
- A registry of all agents: What they are, where they run, who owns them, what they're authorised to do
- Policy enforcement: The ability to set and enforce rules across all agents consistently
- Kill switches: The ability to immediately suspend any agent if something goes wrong
- Audit trails: A complete record of agent actions for compliance and investigation
If this sounds a lot like what we built for cloud governance, you're right. The patterns are remarkably similar. The difference is the speed at which things can go wrong.
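The kill-switch capability, at its core, is simple: a central suspension list that every agent action is checked against before it executes. The class below is a toy (names and the in-memory set are assumptions); the hard part in reality is making sure every agent actually consults the control plane rather than bypassing it.

```python
class ControlPlane:
    """Toy central control plane: tracks suspended agents."""

    def __init__(self) -> None:
        self._suspended: set[str] = set()

    def kill(self, agent_id: str) -> None:
        # Immediately suspend an agent; takes effect on its next action.
        self._suspended.add(agent_id)

    def resume(self, agent_id: str) -> None:
        self._suspended.discard(agent_id)

    def may_act(self, agent_id: str) -> bool:
        # Every agent action must pass through this gate.
        return agent_id not in self._suspended

cp = ControlPlane()
print(cp.may_act("triage-bot-01"))  # True
cp.kill("triage-bot-01")
print(cp.may_act("triage-bot-01"))  # False: suspended mid-flight
```

If the check lives in the agent's own code, a misbehaving agent can skip it; putting it in the gateway or policy layer the agent has to traverse is what makes the switch trustworthy.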
5. Define accountability before you need it
When (not if) an AI agent causes an incident, you don't want to be figuring out accountability on the fly. Define it now:
- The deploying team is responsible for the agent's configuration and scope
- The model provider is responsible for the model's behaviour within its stated capabilities
- Security is responsible for the governance framework and monitoring
- Executive leadership is responsible for the risk appetite decisions around AI agent deployment
Document this. Get sign-off. Because when something goes sideways at 2am, you don't want to be having a philosophical debate about who owns the problem.
The uncomfortable truth about speed
Here's what makes this genuinely difficult. The whole point of AI agents is speed and autonomy. Every governance control you add creates friction. And the business teams deploying agents are doing so precisely because they want to move faster.
This is the same tension we've always navigated in security. Enabling the business while managing risk. But the stakes feel higher because the technology is more capable and less predictable than what we've dealt with before.
My advice? Don't try to boil the ocean. Start with a simple agent registry. Get visibility. Then layer on controls based on risk. The organisation deploying agents to summarise meeting notes needs a different governance approach than the one deploying agents to approve financial transactions.
Questions to ask yourself
If you take nothing else from this article, go back to your organisation and ask these questions:
- How many AI agents are currently operating in our environment?
- Who owns each one?
- What access does each one have, and is it the minimum required?
- What happens if one goes rogue at 3am on a Saturday?
- Who is accountable when an agent makes a bad decision?
If you can answer all five confidently, you're ahead of most. If you can't, you've got some work to do.
The agents are already here. The question is whether you're governing them, or they're governing you.