Maps each active AI use case to SOC 2 trust service criteria, the control in place, current implementation status, and auditable evidence. Pre-built for your audit trail.
| AI Use Case | Category | SOC 2 Criteria | Control in Place | Status | Auditable Evidence |
|---|---|---|---|---|---|
| AI Tool Approval Process<br>Review and approval of all AI tools before staff use | Governance | | AI Vendor Assessment Checklist required before approval<br>Approved tools list maintained by IT/Security<br>Unapproved tools blocked via DNS/proxy | Partial | Completed vendor assessment forms<br>Approved tools registry with dates<br>DNS block log or proxy policy config |
| AI Use with Client Data<br>Any workflow where client or PII data may be used in an AI prompt | Data | | AI Data Safety policy ("What NOT to Put Into AI")<br>PII Sanitizer tool available for pre-processing<br>Annual awareness training includes AI data rules | Partial | Published AI Data Safety guide (Vol.2)<br>Training completion records<br>PII Sanitizer tool availability log |
| AI-Generated Client Communications<br>Use of AI to draft emails, reports, or ticket responses sent to clients | Operations | | Human review required before sending AI-generated content externally<br>Prompt Writing Guide training for all staff | Partial | Written review policy/SOP<br>Prompt Writing Guide (Vol.1) distribution records |
| AI Automation in RMM/Ticketing<br>Automated AI-driven workflows that act on client environments without manual triggering | Operations | | Human-in-the-loop required for any action with client impact<br>Exception and error logging enabled on all AI automation<br>Change management process applies to AI workflow changes | Gap | Automation exception logs<br>Change request records for AI workflow updates<br>Human approval records for automated actions |
| AI Risk Assessment Program<br>Formal identification, scoring, and tracking of AI-specific risks | Governance | | AI Risk Register maintained and reviewed quarterly<br>Risks assigned owners with documented mitigations | Partial | AI Risk Register (Vol.1) with review dates<br>Quarterly review meeting minutes<br>Risk owner acknowledgement records |
| AI Incident Response<br>Detection, containment, and remediation of AI-related security or data incidents | Security | | AI Incident Response Playbook covering four key scenarios<br>Incident documentation requirements defined<br>Post-incident review process in place | Partial | Published IR Playbook (Vol.2)<br>Incident log records with closure sign-off<br>Post-incident review notes |
| AI Acceptable Use Policy<br>Formal policy governing permitted and prohibited use of AI by all staff | Governance | | AI AUP drafted and distributed to all staff<br>Annual acknowledgement required<br>Updated as new tools or risks emerge | Gap | Signed/acknowledged AUP records per employee<br>AUP version history and approval dates<br>AUP distribution logs |
| AI-Assisted Code Development<br>Use of AI to write, suggest, or review scripts, automations, or application code | Security | | AI-generated code subject to the same review process as human-written code<br>No credentials or internal IPs to be included in prompts<br>SAST scanning recommended before deployment | Gap | Code review records noting AI-generated sections<br>SAST scan results pre-deployment<br>Deployment change logs |
| AI Vendor Data Processing<br>Data sent to third-party AI vendors as part of approved tool use | Data | | DPA executed with all approved AI vendors<br>Vendor assessment on file for each approved tool<br>Data types permitted per vendor documented | Gap | Executed DPA documents per vendor<br>Approved vendor list with permitted data types<br>Vendor assessment forms on file |