MSP AI Resource Hub · GOVERNANCE SERIES — VOL. 5
POLICY DOC AUP · v1.0 · 2026

AI Acceptable Use Policy

Defines permitted and prohibited use of AI tools for all staff. Covers approved use cases, data handling rules, vendor approval requirements, human review obligations, and the escalation path for questions and incidents.

SOC 2 CC1.1 · SOC 2 CC1.4 · SOC 2 CC2.1 · ACKNOWLEDGEMENT REQUIRED · ANNUAL REVIEW · ALL STAFF
DOCUMENT OWNER: Designated Risk Owner
EFFECTIVE DATE: 2026-01-01
REVIEW CYCLE: Annual · or when new tools added
VERSION: 1.0
APPLIES TO: All staff · All AI tools · All environments
01 Purpose

AI tools can make our work faster and better. They can also expose client data, introduce errors into client-facing work, or create compliance problems if used without boundaries. This policy exists to protect clients, protect the organization, and give staff a clear set of rules they can actually follow.

This is not a prohibition on AI use. It is a framework for using AI responsibly. Staff are encouraged to use approved AI tools as part of their daily work. The goal is to make sure the right guardrails are in place so that AI is an asset, not a liability.

CORE PRINCIPLE AI tools are powerful assistants. They do not replace professional judgment, client review obligations, or compliance requirements. Your name on the work means you own the output — regardless of how it was generated.
02 Scope

This policy applies to all staff, contractors, and third parties who use AI tools in the course of work for or on behalf of the organization. It covers all AI tools — whether provided by the organization, accessed independently, or used on personal devices for work purposes.

THIS POLICY COVERS | EXAMPLES
Approved organization tools | Microsoft Copilot, ChatGPT Enterprise, approved AI assistants in RMM/PSA platforms
Public AI tools used for work | ChatGPT.com, Claude.ai, Gemini, Perplexity, or any AI chatbot accessed in a browser
AI features in existing tools | AI summarization in ConnectWise, AI-assisted writing in Outlook, AI features in Huntress
Locally installed AI models | Ollama, LM Studio, llama.cpp, Whisper, or any locally run AI model
AI used on personal devices for work tasks | Using a personal phone or laptop to process work-related content through any AI tool
03 Approved Uses

The following uses of approved AI tools are permitted. All approved uses assume the data handling rules in Section 05 are followed — if client PII, credentials, or confidential data are involved, the restrictions in that section apply regardless of the use case.

✓ PERMITTED — GENERAL TASKS
Drafting and proofreading internal documentation, SOPs, runbooks, and training materials using anonymized or generic examples
Writing and debugging scripts and code where no credentials, internal IPs, or client hostnames are included in the prompt
Summarizing publicly available technical documentation, vendor release notes, or CVE advisories
Explaining technical concepts, error codes, or event IDs using generic examples
Drafting templates for client communications using placeholder data, reviewed and customized before sending
Brainstorming, ideation, and outline creation for internal projects
Generating checklists, meeting agendas, and project frameworks using no client-specific data
✓ PERMITTED — TECHNICAL WORK
Asking AI to explain what a PowerShell cmdlet or API response means — without pasting live client data alongside it
Ticket summarization using sanitized notes where all client names, IPs, and usernames have been replaced with generic placeholders
Using the organization's PII Sanitizer tool to pre-process text before pasting into any AI tool
Using Microsoft Copilot within the licensed Microsoft 365 tenant where data governance controls are confirmed active
Generating runbook step drafts using generic system descriptions, reviewed by a senior tech before publishing
Using AI to research attack patterns, threat intelligence, or CVE details from public sources
04 Prohibited Uses

The following uses are prohibited regardless of which AI tool is used. A single prohibited act — even if accidental — must be reported per Section 09. These rules exist to protect clients, protect the organization, and maintain compliance with contractual and regulatory obligations.

✗ NEVER DO — DATA EXPOSURE
Paste any password, API key, SSH key, auth token, or connection string into any AI tool — including for debugging help
Paste client PII — names, email addresses, phone numbers, SSNs, dates of birth, or any identifying information — into a public AI tool
Submit raw client firewall logs, SIEM events, or network captures containing live internal IPs or hostnames
Paste full email threads containing client names, email addresses, or confidential business context
Submit HIPAA-covered, ITAR-controlled, PCI-scoped, or attorney-client-privileged data to any AI tool
Submit internal financial data — margins, contract values, pricing structures, or budget figures
✗ NEVER DO — PROCESS VIOLATIONS
Use an AI tool that has not been reviewed and approved through the organization's AI Tool Approval Process (Section 07)
Send AI-generated output to a client without reading and verifying it first — AI can hallucinate confidently
Use AI to make autonomous decisions that affect client environments, billing, or security posture without human review and approval
Install or run a local AI model (Ollama, LM Studio, llama.cpp, Whisper) on any device — personal or organization-owned — without IT approval
Use AI to generate content intended to deceive, mislead, or impersonate — including deepfakes, fake communications, or fraudulent documentation
Attempt to circumvent data loss prevention, sensitivity labels, or security controls using AI tooling
05 Data Handling Rules

Before putting anything into an AI tool, apply this test: would you be comfortable if this text appeared in a public data breach? If the answer is no — anonymize it first, or describe the situation without pasting the raw data.

The following data types are off-limits for public AI tools under any circumstances:

🔑 CREDENTIALS: Passwords · API keys · SSH keys · Auth tokens · Connection strings · Service accounts
👤 CLIENT PII: Full names · Email addresses · Phone numbers · SSN / DOB · Home addresses · Account numbers
🏢 CLIENT BUSINESS DATA: Financial figures · NDA-covered info · Internal processes · Proprietary system details · Employee data
🌐 INFRASTRUCTURE DATA: Internal IPs · Hostnames · VLAN configs · Firewall rules · Network diagrams · Client topology
⚕️ REGULATED DATA: HIPAA patient records · ITAR-controlled info · PCI card data · Legal case files · Attorney-client communications
💰 INTERNAL FINANCIALS: Margins · Contract values · Internal pricing · Budget data · Revenue figures · Vendor terms
SAFE ALTERNATIVE — ALWAYS WORKS Replace real values before pasting. Use "Client A" instead of client names. Use "192.168.x.x" instead of real IPs. Use "the end user" instead of real names. Use "User@domain.com" instead of real addresses. The AI gets just as useful a result — with none of the risk.
TOOL AVAILABLE Use the PII Sanitizer tool (available in the AI Resource Hub) to automatically detect and redact names, email addresses, IP addresses, and other sensitive patterns before pasting text into any AI tool. Use it as your first step, not your last resort.
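
For staff who build their own tooling, the sketch below shows the kind of pattern-based redaction described above. It is an illustration only, not the PII Sanitizer itself; the client list, regexes, and placeholder values are assumptions chosen to match the examples in this section.

```python
# Illustrative sketch: this is NOT the organization's PII Sanitizer tool,
# just the kind of pattern-based redaction it performs. The client list,
# regexes, and placeholders here are assumptions for the example.
import re

CLIENT_NAMES = ["Contoso", "Fabrikam"]  # hypothetical client names

PATTERNS = [
    # Email addresses -> the generic placeholder from the SAFE ALTERNATIVE note
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "User@domain.com"),
    # IPv4 addresses -> the masked form recommended in Section 05
    (re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"), "192.168.x.x"),
]

def sanitize(text: str) -> str:
    """Replace email addresses, IPv4 addresses, and known client names."""
    for pattern, placeholder in PATTERNS:
        text = pattern.sub(placeholder, text)
    # Map each known client name to "Client A", "Client B", ...
    for i, name in enumerate(CLIENT_NAMES):
        generic = f"Client {chr(ord('A') + i)}"
        text = re.sub(re.escape(name), generic, text, flags=re.IGNORECASE)
    return text

if __name__ == "__main__":
    raw = "Contoso user jane.doe@contoso.com cannot reach 10.20.30.40 over VPN."
    print(sanitize(raw))
    # Output: Client A user User@domain.com cannot reach 192.168.x.x over VPN.
```

Whatever tool performs the substitution, the output follows the same rule as the SAFE ALTERNATIVE note above: generic stand-ins that preserve the shape of the problem without exposing the underlying data.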
06 Human Review Requirements

AI output is a first draft, not a finished product. All AI-generated content that leaves the organization or affects a client environment requires human review before it is used. This is not optional — it is a control requirement mapped to SOC 2 CC2.2 and CC2.3.

USE CASE | REVIEW REQUIRED | WHO REVIEWS | BEFORE WHAT
Client-facing email or communication | REQUIRED | Author + account owner or manager | Sending to client
Security report or assessment output | REQUIRED | Senior tech or security lead | Delivery to client
Runbook or SOP published internally | REQUIRED | Team lead or operations owner | Publishing to knowledge base
AI-generated code deployed to production | REQUIRED | Senior tech — same review standard as human-written code | Deployment or execution
AI-suggested remediation action on client environment | REQUIRED — HARD GATE | Assigning tech + team lead approval | Taking any action
Internal draft or notes (not leaving the org) | RECOMMENDED | Author | Using for decisions
Ticket summary for internal log only | RECOMMENDED | Author | Saving to record
IMPORTANT — AI DOES NOT VERIFY FACTS AI tools will state incorrect information confidently. For anything involving specific CVE details, vendor specifications, regulatory requirements, client account details, or technical specs — verify from an authoritative source before using the output. Never forward AI output to a client without checking the facts it contains.
07 AI Tool Approval Process

No AI tool may be used for work purposes — even for personal productivity tasks that involve work context — until it has completed the organization's AI Tool Approval Process. The approved tools list is maintained by IT / Security and available in the AI Resource Hub.

1
SUBMIT A REQUEST
Any staff member can request review of a new AI tool. Submit via the AI Change Request form in the Resource Hub. Include the tool name, vendor, URL, and intended use case. Do not start using the tool while the review is pending.
2
VENDOR RISK ASSESSMENT
IT / Security completes the AI Vendor Assessment Checklist (Vol. 3). The assessment covers SOC 2 status, DPA availability, training opt-out controls, data retention, encryption, MFA/SSO support, and geographic data residency. A single blocking failure on a required item stops approval.
3
DATA PROCESSING AGREEMENT
A signed DPA must be executed with the vendor before any data enters the tool. No exceptions. The DPA must define data handling, retention, and deletion rights. A copy is filed in the vendor records.
4
CAB REVIEW AND APPROVAL
The Change Advisory Board reviews the assessment results and makes the final approval decision. The CAB can approve, approve with conditions (such as restricting use to non-client data), or deny. Approved tools are added to the approved tools registry with the approval date and any conditions.
5
DEPLOYMENT AND DOCUMENTATION
IT completes the AI Tool Deployment Checklist (Vol. 1). This includes SSO configuration, MFA enforcement, user provisioning, documentation, and staff briefing. The tool is added to the AI Controls Mapping and the approved tools registry before any staff use begins.
HARD RULE Using an unapproved AI tool for any work purpose is a policy violation regardless of intent. If you discover a tool that would be useful, submit a request. Do not use it while you wait for approval.
08 Shadow AI and Unauthorized Local Models

Shadow AI — unauthorized AI tools running inside the organization or client environments — represents a significant and growing risk. The organization uses SentinelOne Deep Visibility to detect unauthorized local AI models (including Ollama, LM Studio, llama.cpp, and Whisper), rogue GPU usage, and unapproved AI tooling across managed endpoints.
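
For illustration, the snippet below sketches the basic idea behind that detection: flag running processes whose names match known local AI runtimes. It is a simplified stand-in, not how SentinelOne Deep Visibility actually works, and the process-name list is an assumption for the example.

```python
# Illustrative sketch: production detection runs in SentinelOne Deep
# Visibility, not this script. It flags running processes whose names
# match known local AI runtimes (the marker list is an assumption).
import psutil  # third-party package: pip install psutil

# Markers for the local AI runtimes named in this section
LOCAL_AI_MARKERS = {"ollama", "lm studio", "lmstudio", "llama-server", "whisper"}

def find_local_ai_processes() -> list[str]:
    """Return names of running processes that match known local AI runtimes."""
    hits = []
    for proc in psutil.process_iter(["name"]):
        name = (proc.info["name"] or "").lower()
        if any(marker in name for marker in LOCAL_AI_MARKERS):
            hits.append(name)
    return hits

if __name__ == "__main__":
    for name in find_local_ai_processes():
        print(f"Possible unapproved local AI runtime: {name}")
```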

ACTIVITY | STATUS | REQUIRED ACTION
Using ChatGPT.com for personal tasks unrelated to work | PERSONAL RISK | Outside this policy's scope on personal time — but never mix work data in
Using Claude.ai or Gemini for work tasks with sanitized data | REQUIRES APPROVAL | Submit for review. Do not use until approved.
Installing Ollama or LM Studio on a work laptop | PROHIBITED | Report immediately via Section 09. Do not continue using it.
Discovering a coworker using an unapproved AI tool | REPORT REQUIRED | Report via the AI Incident Report form. Not punitive — IT will assess the tool.
Client environment flagged for local AI model usage | ESCALATE | Escalate to the Security Lead. Follow the Shadow AI incident procedure.
09 Incident Reporting

Report any of the following situations immediately. Early reporting limits damage. Delayed reporting makes it worse — and may create additional liability. Reports go to IT / Security and the Risk Owner. For data exposure events, do not notify the client until IT / Security has completed an initial assessment.

WHAT HAPPENED | REPORT WITHIN | HOW TO REPORT
Sensitive data pasted into a public AI tool | 15 MINUTES | Call or message IT / Security directly. Then file AI incident report.
AI-generated incorrect output sent to client | 30 MINUTES | Notify account owner. File incident report. Do not send a correction until reviewed.
Unapproved AI tool discovered in use | 4 HOURS | File AI incident report. IT will run vendor assessment.
Suspected AI-assisted phishing targeting staff or clients | IMMEDIATELY | Do not engage. Call Security Lead. Preserve original message.
Credentials accidentally included in an AI prompt | 15 MINUTES | Rotate the credential first. Then notify IT / Security.
FULL PROCEDURES Detailed step-by-step incident response procedures for each scenario are in the AI Incident Response Playbook (Governance Series Vol. 2), available in the AI Resource Hub. Reference it during any active incident.
10 Policy Violations

Violations of this policy are handled through the organization's standard disciplinary process. The severity of the response is proportional to the nature and impact of the violation. Accidental violations that are promptly reported are treated differently from deliberate or repeated violations.

VIOLATION TYPE | TYPICAL RESPONSE
First-time accidental violation, promptly reported, no client impact | Coaching, policy re-review, incident documented
Accidental violation with client data exposure — reported promptly | Formal incident review, remediation steps, retraining required
Using an unapproved tool with full knowledge it was not approved | Formal warning, mandatory retraining, access review
Repeated violations after prior coaching | Escalation to HR, potential disciplinary action
Deliberate circumvention of security controls using AI | Immediate suspension of AI tool access, HR escalation, potential termination
Concealing a known violation or data exposure | Same as deliberate circumvention — failure to report is treated as a separate violation
11 Escalation Path

If you are unsure whether a specific AI use case is permitted under this policy, escalate before proceeding. It is always better to ask first.

1
GENERAL QUESTIONS
Check the AI Resource Hub first — the Data Safety Guide, Prompt Writing Guide, and approved tools list answer most common questions. If your question is not answered there, ask your team lead.
2
TOOL APPROVAL QUESTIONS
Contact IT / Security. They own the approval process and maintain the approved tools registry. Submit a formal request via the AI Change Request form if you want a new tool reviewed.
3
DATA HANDLING EDGE CASES
Contact the Risk Owner. Edge cases involving regulated data (HIPAA, PCI, ITAR), client contractual requirements, or data residency concerns go to the Risk Owner, who may involve legal counsel.
4
ACTIVE INCIDENTS
Contact IT / Security and the Security Lead directly — do not use the AI tool to draft or send the notification. Reference the AI Incident Response Playbook for step-by-step procedures.
12 Acknowledgement

All staff must read and acknowledge this policy before using any AI tool for work purposes. Acknowledgement is required annually and whenever a material update is made to this document. Your acknowledgement is logged and constitutes auditable evidence for SOC 2 CC1.1 and CC1.4.

REQUIRED — ANNUAL ACKNOWLEDGEMENT
AI Acceptable Use Policy — Staff Acknowledgement
By completing this acknowledgement, you confirm that you have read and understood the AI Acceptable Use Policy (Version 1.0, effective 2026-01-01), and that you agree to comply with its requirements in all AI-related activities performed in the course of your work.