Risk Management Series — Vol. 4

AI Controls Mapping

Maps each active AI use case to SOC 2 trust service criteria, the control in place, current implementation status, and auditable evidence. Pre-built for your audit trail.

💡
How to use this: Each row is an AI use case active in the MSP environment. For each, the table shows which SOC 2 criteria it triggers, the control that addresses it, its current implementation status (Implemented, Partial, or Gap), and the evidence you would show an auditor. Update statuses as controls mature.
01. Use Case Controls Matrix
Each entry lists: AI use case, category, SOC 2 criteria, controls in place, status, and auditable evidence.
AI Tool Approval Process
Review and approval of all AI tools before staff use
Category: Governance
SOC 2 Criteria: CC1.2, CC8.1, CC9.2
Controls in Place:
- AI Vendor Assessment Checklist required before approval
- Approved tools list maintained by IT / Security
- Unapproved tools blocked via DNS/proxy
Status: Partial
Auditable Evidence:
- Completed vendor assessment forms
- Approved tools registry with dates
- DNS block log or proxy policy config
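The DNS/proxy blocking control above can be sketched as an allowlist filter whose denials feed the block log cited as evidence. The registry contents, domain names, and function shape below are illustrative assumptions, not the actual proxy configuration:

```python
# Sketch of "unapproved tools blocked via DNS/proxy": a filter consults an
# approved-tools registry and logs every blocked request, so the log itself
# becomes auditable evidence. Domains shown are hypothetical examples.
from datetime import datetime, timezone

APPROVED_AI_DOMAINS = {          # hypothetical approved-tools registry
    "api.openai.com",
    "api.anthropic.com",
}

block_log: list[dict] = []       # stands in for the DNS/proxy block log

def filter_ai_request(domain: str) -> bool:
    """Return True if the request may proceed; log and deny otherwise."""
    if domain in APPROVED_AI_DOMAINS:
        return True
    block_log.append({
        "domain": domain,
        "blocked_at": datetime.now(timezone.utc).isoformat(),
    })
    return False
```

In a real deployment the same decision would live in the proxy or DNS resolver policy; the point is that the allowlist and the denial log are the two artifacts an auditor asks for.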
AI Use with Client Data
Any workflow where client or PII data may be used in an AI prompt
Category: Data
SOC 2 Criteria: CC6.1, CC6.7, CC2.3
Controls in Place:
- AI Data Safety policy ("What NOT to Put Into AI")
- PII Sanitizer tool available for pre-processing
- Annual awareness training includes AI data rules
Status: Partial
Auditable Evidence:
- Published AI Data Safety guide (Vol. 2)
- Training completion records
- PII Sanitizer tool availability log
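As a sketch of the PII Sanitizer pre-processing step (assuming a simple regex-redaction approach; the actual tool's behavior may differ), obvious identifiers are replaced before text ever reaches a prompt:

```python
# Illustrative PII sanitizer: regex-redact obvious identifiers (emails,
# US-style phone numbers) before text is pasted into an AI prompt. A real
# sanitizer would cover far more patterns (names, account numbers,
# addresses); these two are examples only.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def sanitize(text: str) -> str:
    """Replace detected identifiers with placeholder tokens."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text
```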
AI-Generated Client Communications
Use of AI to draft emails, reports, or ticket responses sent to clients
Category: Operations
SOC 2 Criteria: CC2.2, CC2.3
Controls in Place:
- Human review required before sending AI-generated content externally
- Prompt Writing Guide training for all staff
Status: Partial
Auditable Evidence:
- Written review policy / SOP
- Prompt Writing Guide (Vol. 1) distribution records
AI Automation in RMM / Ticketing
Automated AI-driven workflows that act on client environments without manual triggering
Category: Operations
SOC 2 Criteria: CC7.2, CC7.4, CC8.1
Controls in Place:
- Human-in-the-loop required for any action with client impact
- Exception and error logging enabled on all AI automation
- Change management process applies to AI workflow changes
Status: Gap
Auditable Evidence:
- Automation exception logs
- Change request records for AI workflow updates
- Human approval records for automated actions
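The human-in-the-loop and exception-logging controls can be sketched as a single gate: no client-impacting action runs without a recorded approver, and failures land in an exception log. Function names and the shape of the approval record are assumptions for illustration:

```python
# Sketch of a human-in-the-loop gate for AI-driven automation: the action
# only executes with a recorded human approval, and any failure is written
# to an exception log (the evidence artifacts named above).
import logging

exception_log = logging.getLogger("ai_automation")

def run_ai_action(action, approval=None):
    """Execute an AI-proposed action only with a recorded human approval.

    `approval` is a hypothetical record like {"approved_by": "tech1"}.
    """
    if not approval or not approval.get("approved_by"):
        raise PermissionError("human approval required for client-impacting action")
    try:
        return action()
    except Exception:
        # Exception logging satisfies the "error logging enabled" control.
        exception_log.exception("AI automation action failed")
        raise
```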
AI Risk Assessment Program
Formal identification, scoring, and tracking of AI-specific risks
Category: Governance
SOC 2 Criteria: CC3.1, CC3.2, CC9.1
Controls in Place:
- AI Risk Register maintained and reviewed quarterly
- Risks assigned owners with documented mitigations
Status: Partial
Auditable Evidence:
- AI Risk Register (Vol. 1) with review dates
- Quarterly review meeting minutes
- Risk owner acknowledgement records
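The register itself lives in Vol. 1; purely as an assumption about its shape, a register entry under a common likelihood-times-impact scoring scheme, with a named owner and documented mitigation, could look like:

```python
# Hypothetical AI risk register entry. The 1-5 likelihood x 1-5 impact
# scheme is a common convention assumed here, not necessarily the one
# used in the Vol. 1 register.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk: str
    likelihood: int   # 1 (rare) .. 5 (almost certain)
    impact: int       # 1 (minor) .. 5 (severe)
    owner: str
    mitigation: str

    @property
    def score(self) -> int:
        """Composite score used to rank risks for quarterly review."""
        return self.likelihood * self.impact
```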
AI Incident Response
Detection, containment, and remediation of AI-related security or data incidents
Category: Security
SOC 2 Criteria: CC7.3, CC7.4, CC4.2
Controls in Place:
- AI Incident Response Playbook covering 4 key scenarios
- Incident documentation requirements defined
- Post-incident review process in place
Status: Partial
Auditable Evidence:
- Published IR Playbook (Vol. 2)
- Incident log records with closure sign-off
- Post-incident review notes
AI Acceptable Use Policy
Formal policy governing permitted and prohibited use of AI by all staff
Category: Governance
SOC 2 Criteria: CC1.1, CC1.4, CC2.1
Controls in Place:
- AI AUP drafted and distributed to all staff
- Annual acknowledgement required
- Updated as new tools or risks emerge
Status: Gap
Auditable Evidence:
- Signed/acknowledged AUP records per employee
- AUP version history and approval dates
- AUP distribution logs
AI-Assisted Code Development
Use of AI to write, suggest, or review scripts, automations, or application code
Category: Security
SOC 2 Criteria: CC8.1, CC6.7
Controls in Place:
- AI-generated code subject to the same review process as human-written code
- No credentials or internal IPs to be included in prompts
- SAST scanning recommended before deployment
Status: Gap
Auditable Evidence:
- Code review records noting AI-generated sections
- SAST scan results pre-deployment
- Deployment change logs
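The "no credentials or internal IPs in prompts" rule can be made machine-checkable with a pre-flight scan. The patterns below (RFC 1918 address ranges, a few common secret markers) are illustrative and no substitute for a real secret scanner:

```python
# Pre-flight prompt check: reject text containing private (RFC 1918)
# IP addresses or obvious credential markers before it is sent to an
# AI coding assistant. Patterns are illustrative, not exhaustive.
import re

INTERNAL_IP = re.compile(
    r"\b(?:10\.\d{1,3}\.\d{1,3}\.\d{1,3}"
    r"|192\.168\.\d{1,3}\.\d{1,3}"
    r"|172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3})\b"
)
SECRET_MARKER = re.compile(r"(?i)\b(password|api[_-]?key|secret)\s*[:=]")

def prompt_is_safe(prompt: str) -> bool:
    """Return True only if no internal IP or secret marker is detected."""
    return not (INTERNAL_IP.search(prompt) or SECRET_MARKER.search(prompt))
```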
AI Vendor Data Processing
Data sent to third-party AI vendors as part of approved tool use
Category: Data
SOC 2 Criteria: CC6.7, CC9.2, A1.2
Controls in Place:
- DPA executed with all approved AI vendors
- Vendor assessment on file for each approved tool
- Data types permitted per vendor documented
Status: Gap
Auditable Evidence:
- Executed DPA documents per vendor
- Approved vendor list with permitted data types
- Vendor assessment forms on file
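The per-vendor permitted-data documentation can double as a machine-checkable registry: each approved vendor maps to its DPA status and the data classes it may receive. Vendor names, fields, and data classes here are hypothetical:

```python
# Hypothetical approved-vendor registry: a send is allowed only when a DPA
# is executed AND the data classification is explicitly permitted for that
# vendor. This mirrors the "approved vendor list with permitted data types"
# evidence item as a runnable check.
PERMITTED = {
    "VendorA": {"dpa_signed": True, "data_types": {"public", "internal"}},
    "VendorB": {"dpa_signed": False, "data_types": set()},
}

def may_send(vendor: str, data_type: str) -> bool:
    """Deny by default: unknown vendors, unsigned DPAs, unlisted data types."""
    entry = PERMITTED.get(vendor)
    return bool(entry and entry["dpa_signed"] and data_type in entry["data_types"])
```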
02. Implementation Status Summary
Implemented Controls: 0
Partial Controls: 5
Control Gaps: 4
Total Use Cases: 9
🛡
Priority gaps to close: the AI Acceptable Use Policy (CC1.1/CC1.4), AI Automation oversight controls (CC7.2), and executed DPAs with AI vendors (CC6.7/CC9.2) are the highest-priority gaps for SOC 2 readiness. Address these before the next audit cycle.