What AI governance is, why it matters to the business, what the CAB process protects against, and what your specific role is when you sign a change request. No technical background required.
01 What Is AI Governance -- In Plain English
The Simple Version
AI governance is the set of rules, processes, and checks that make sure AI is used safely and responsibly in the organisation. It answers three questions: Who is allowed to use AI, and for what? How do we make sure it doesn't cause harm? What do we do when something goes wrong?
Think of it this way
It's like a building permit process. Anyone can build a structure -- but before it goes up, someone qualified reviews the plans, checks for risks, and signs off. The permit doesn't slow down good projects. It stops bad ones before they become expensive problems. AI governance is the permit process for putting AI into your business.
Why It Matters Right Now
AI tools are becoming easier to adopt -- which means staff are adopting them whether a formal process exists or not. Without governance, the organisation doesn't know what data is being fed into AI tools, what decisions AI is influencing, or what the liability exposure is if something goes wrong. Governance closes that gap before an auditor, a regulator, or a client does.
02 What the CAB Process Actually Does
CAB = Change Advisory Board
Every time someone wants to introduce, change, or remove an AI tool or AI-driven workflow, they submit a formal change request. The CAB -- which includes you -- reviews it before anything goes live. The purpose is simple: make sure every AI change has been thought through, tested, and approved by the right people before it affects clients or operations.
Think of it this way
It's like an investment committee. Not every idea needs a full board meeting -- routine changes are pre-approved. But anything novel, risky, or with meaningful business impact gets a proper review before the money is spent. The CAB is that committee for AI changes.
03 The Four CAB Roles
Every change request requires sign-off from four perspectives. Each role is asking a different question.
IT / Security
Is this technically sound? Are credentials safe? Does it create security vulnerabilities? Is the vendor trustworthy?
Operations
Will this disrupt daily workflows? Is there a rollback plan? Has it been tested? Can the team support it?
Risk
What could go wrong? Is the risk rating accurate? Are the controls adequate? Does this affect our compliance posture?
Business / Management
Does this make business sense? Are we protected if something goes wrong? Is the timing right? Does this align with our commitments to clients?
Your role
You are not expected to evaluate the technical details. The IT, Operations, and Risk representatives handle that. Your job is the business lens -- does this change make sense from a client, commercial, and organisational standpoint?
04 What the Process Is Protecting Against
Scenario
A staff member pastes confidential client data into a public AI tool to write a report faster.
Business Impact
Confidential client data disclosed to a third-party service. Breach of client confidentiality and possible data-breach notification obligations.
How We Prevent It
AI Data Safety policy. Required training. Vendor approval process.
Scenario
An AI-generated client report contains incorrect technical advice. The client acts on it and incurs cost.
Business Impact
Liability exposure. Client relationship at risk. Possible professional indemnity claim.
How We Prevent It
Human review policy before any AI output goes to a client. Incident response (IR) playbook for when it happens anyway.
Scenario
A team adopts an AI tool that turns out to have no data processing agreement and retains all inputs indefinitely.
Business Impact
GDPR / compliance exposure. Inability to honour client data deletion requests. SOC 2 audit finding.
How We Prevent It
Vendor assessment required before any tool is approved. CAB sign-off gates deployment.
Scenario
An AI automation deployed without oversight starts making incorrect decisions across multiple client environments before anyone notices.
Business Impact
Wide client impact. Emergency rollback costs. Service credit exposure.
How We Prevent It
CAB requires rollback plan and human oversight controls before approving autonomous AI actions.
05 Your Role in Practice
1. Receive the Change Request
You'll receive an email with the AI Change Request form attached. The email summary includes the change title, classification, risk rating, and what's changing. You don't need to read the entire technical form -- the summary gives you what you need.
2. Attend or Review the CAB Meeting
Normal changes have a scheduled CAB review. Your job is to ask the business questions -- see the companion guide for the specific questions to ask. You don't need to evaluate the technical answers, just satisfy yourself that business risk is understood and managed.
3. Sign or Raise a Concern
If you're satisfied, sign the decision block. If something doesn't feel right -- raise it. You don't need to articulate the technical problem. "I'm not comfortable with the client exposure here" or "I'd like to see this scoped back before we proceed" are completely valid positions. The CAB exists precisely so these concerns get raised before go-live, not after.
4. Keep Your Copy
The signed change request is an auditable record. It shows that the change was reviewed by qualified people across all four CAB roles. Your signature is evidence that the business applied due diligence -- this is what protects the organisation if something goes wrong later.
06 What Good Looks Like
Signs a change is well-prepared
Clear title and description. Honest risk rating with justification. Specific rollback plan. Vendor assessment attached. Testing evidence provided. Staff briefing planned.
Signs a change needs more work
Vague description. Risk rated Low with no justification. "No rollback needed" with no explanation. No vendor assessment. Going straight to production with no testing.
⚠ When in doubt, defer. A deferred change costs a few days. An approved change that goes wrong can cost significantly more -- in remediation, client relations, and compliance standing. The CAB process is designed to make deferral the path of least resistance when something isn't ready.