Defines permitted and prohibited use of AI tools for all staff. Covers approved use cases, data handling rules, vendor approval requirements, human review obligations, and the escalation path for questions and incidents.
AI tools can make our work faster and better. They can also expose client data, introduce errors into client-facing work, or create compliance problems if used without boundaries. This policy exists to protect clients, protect the organization, and give staff a clear set of rules they can actually follow.
This is not a prohibition on AI use. It is a framework for using AI responsibly. Staff are encouraged to use approved AI tools as part of their daily work. The goal is to make sure the right guardrails are in place so that AI is an asset, not a liability.
This policy applies to all staff, contractors, and third parties who use AI tools in the course of work for or on behalf of the organization. It covers all AI tools — whether provided by the organization, accessed independently, or used on personal devices for work purposes.
| THIS POLICY COVERS | EXAMPLES |
|---|---|
| Approved organization tools | Microsoft Copilot, ChatGPT Enterprise, approved AI assistants in RMM/PSA platforms |
| Public AI tools used for work | ChatGPT.com, Claude.ai, Gemini, Perplexity, or any AI chatbot accessed in a browser |
| AI features in existing tools | AI summarization in ConnectWise, AI-assisted writing in Outlook, AI features in Huntress |
| Locally installed AI models | Ollama, LM Studio, llama.cpp, Whisper, or any locally run AI model |
| AI used on personal devices for work tasks | Using a personal phone or laptop to process work-related content through any AI tool |
The following uses of approved AI tools are permitted. All approved uses assume the data handling rules in Section 05 are followed — if client PII, credentials, or confidential data are involved, the restrictions in that section apply regardless of the use case.
The following uses are prohibited regardless of which AI tool is used. A single prohibited act — even if accidental — must be reported per Section 09. These rules exist to protect clients, protect the organization, and maintain compliance with contractual and regulatory obligations.
Before putting anything into an AI tool, apply this test: would you be comfortable if this text appeared in a public data breach? If the answer is no — anonymize it first, or describe the situation without pasting the raw data.
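The anonymize-first step can be partially automated. A minimal sketch in Python, assuming a simple regex-based scrub; the patterns and the `anonymize` helper are illustrative only, not a vetted DLP control, and would miss many identifier formats in practice:

```python
import re

# Illustrative patterns only. A real deployment would use a maintained
# DLP or secret-scanning library tuned to the organization's data types.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "IPV4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def anonymize(text: str) -> str:
    """Replace obvious identifiers with placeholder tokens before
    the text goes anywhere near a public AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(anonymize("Contact jdoe@example.com at 10.0.0.5"))
# → Contact [EMAIL REDACTED] at [IPV4 REDACTED]
```

A scrub like this supports the breach test; it does not replace it. If the redacted text would still embarrass you in a breach, describe the situation instead of pasting it.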
The following data types are off-limits for public AI tools under any circumstances:
AI output is a first draft, not a finished product. All AI-generated content that leaves the organization or affects a client environment requires human review before it is used. This is not optional — it is a control requirement mapped to SOC 2 CC2.2 and CC2.3.
| USE CASE | REVIEW REQUIRED | WHO REVIEWS | BEFORE WHAT |
|---|---|---|---|
| Client-facing email or communication | REQUIRED | Author + account owner or manager | Sending to client |
| Security report or assessment output | REQUIRED | Senior tech or security lead | Delivery to client |
| Runbook or SOP published internally | REQUIRED | Team lead or operations owner | Publishing to knowledge base |
| AI-generated code deployed to production | REQUIRED | Senior tech — same review standard as human-written code | Deployment or execution |
| AI-suggested remediation action on client environment | REQUIRED — HARD GATE | Assigning tech + team lead approval | Taking any action |
| Internal draft or notes (not leaving the org) | RECOMMENDED | Author | Using for decisions |
| Ticket summary for internal log only | RECOMMENDED | Author | Saving to record |
No AI tool may be used for work purposes — even for personal productivity tasks that involve work context — until it has completed the organization's AI Tool Approval Process. The approved tools list is maintained by IT / Security and available in the AI Resource Hub.
Shadow AI — unauthorized AI tools running inside the organization or client environments — represents a significant and growing risk. The organization uses SentinelOne Deep Visibility to detect unauthorized local AI models (including Ollama, LM Studio, llama.cpp, and Whisper), rogue GPU usage, and unapproved AI tooling across managed endpoints.
| ACTIVITY | STATUS | REQUIRED ACTION |
|---|---|---|
| Using ChatGPT.com for personal tasks unrelated to work | PERSONAL RISK | Outside this policy's scope on personal time — but never mix work data in |
| Using Claude.ai or Gemini for work tasks with sanitized data | REQUIRES APPROVAL | Submit for review. Do not use until approved. |
| Installing Ollama or LM Studio on a work laptop | PROHIBITED | Report immediately via Section 09. Do not continue use. |
| Discovering a coworker using an unapproved AI tool | — | Report via the AI Incident Report form. Not punitive — IT will assess the tool. |
| Client environment flagged for local AI model usage | — | Escalate to Security Lead. Follow Shadow AI incident procedure. |
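SentinelOne Deep Visibility is the authoritative detection control here. Purely as an illustration of the kind of indicators such tooling hunts for, the sketch below checks a single endpoint for common local AI runtime signatures; the process names and ports are widely cited defaults (Ollama's API listens on 11434, LM Studio's local server on 1234), not the organization's actual detection rules:

```python
import socket
import subprocess

# Illustrative indicators of local AI runtimes, not SentinelOne rules.
SUSPECT_PROCESSES = {"ollama", "lm-studio", "llama-server", "whisper"}
SUSPECT_PORTS = {11434: "Ollama default API port", 1234: "LM Studio default server port"}

def port_open(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is listening on the given local port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        return s.connect_ex((host, port)) == 0

def running_processes() -> set[str]:
    """Lowercased command names of running processes (POSIX `ps`)."""
    try:
        ps = subprocess.run(["ps", "-axo", "comm"], capture_output=True, text=True)
        return {line.strip().lower() for line in ps.stdout.splitlines()}
    except FileNotFoundError:
        return set()  # non-POSIX endpoint; a real agent queries the OS API

def scan() -> list[str]:
    """Collect shadow-AI indicators found on this endpoint."""
    findings = [f"process running: {p}" for p in SUSPECT_PROCESSES & running_processes()]
    findings += [f"port {port} open ({desc})" for port, desc in SUSPECT_PORTS.items() if port_open(port)]
    return findings
```

A positive finding from any such check is a trigger for the Section 09 reporting path, not something to investigate or remediate on your own.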
Report any of the following situations immediately. Early reporting limits damage; delayed reporting compounds it and may create additional liability. Reports go to IT / Security and the Risk Owner. For data exposure events, do not notify the client until IT / Security has completed an initial assessment.
| WHAT HAPPENED | REPORT WITHIN | HOW TO REPORT |
|---|---|---|
| Sensitive data pasted into a public AI tool | 15 MINUTES | Call or message IT / Security directly. Then file AI incident report. |
| AI-generated incorrect output sent to client | 30 MINUTES | Notify account owner. File incident report. Do not send a correction until reviewed. |
| Unapproved AI tool discovered in use | 4 HOURS | File AI incident report. IT will run vendor assessment. |
| Suspected AI-assisted phishing targeting staff or clients | IMMEDIATELY | Do not engage. Call Security Lead. Preserve original message. |
| Credentials accidentally included in an AI prompt | 15 MINUTES | Rotate the credential first. Then notify IT / Security. |
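The credential row above is the scenario most worth preventing outright. A lightweight pre-prompt gate can refuse to send anything that looks like a secret; the sketch below is illustrative only, with hypothetical patterns rather than the organization's actual ruleset, and a real gate would use a maintained secret-detection scanner:

```python
import re

# Illustrative secret patterns; a production gate would use a maintained
# secret-scanning ruleset, not three hand-written regexes.
SECRET_PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                    # AWS access key ID format
    re.compile(r"(?i)\b(password|passwd|pwd)\s*[:=]\s*\S+"),  # inline password assignments
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),        # PEM private key header
]

def contains_credentials(prompt: str) -> bool:
    """Return True if the prompt appears to contain a credential
    and therefore must not be sent to any AI tool."""
    return any(p.search(prompt) for p in SECRET_PATTERNS)
```

A gate like this catches the obvious cases only. If a credential does reach an AI tool, the table above still applies: rotate first, then notify.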
Violations of this policy are handled through the organization's standard disciplinary process. The severity of the response is proportional to the nature and impact of the violation. Accidental violations that are promptly reported are treated differently from deliberate or repeated violations.
| VIOLATION TYPE | TYPICAL RESPONSE |
|---|---|
| First-time accidental violation, promptly reported, no client impact | Coaching, policy re-review, incident documented |
| Accidental violation with client data exposure — reported promptly | Formal incident review, remediation steps, retraining required |
| Using an unapproved tool with full knowledge it was not approved | Formal warning, mandatory retraining, access review |
| Repeated violations after prior coaching | Escalation to HR, potential disciplinary action |
| Deliberate circumvention of security controls using AI | Immediate suspension of AI tool access, HR escalation, potential termination |
| Concealing a known violation or data exposure | Same as deliberate circumvention — failure to report is treated as a separate violation |
If you are unsure whether a specific AI use case is permitted under this policy, escalate before proceeding. It is always better to ask first.
All staff must read and acknowledge this policy before using any AI tool for work purposes. Acknowledgement is required annually and whenever a material update is made to this document. Your acknowledgement is logged and constitutes auditable evidence for SOC 2 CC1.1 and CC1.4.