01
What This Dashboard Is For
Every MSP team is using AI tools. Some of those tools are sanctioned, assessed, and covered by a data processing agreement. Many are not. A tech pastes ticket notes into ChatGPT. Someone on the help desk starts using Notion AI on their personal account. A vendor adds an AI summarization feature to their product and nobody notices.
This dashboard is your central register for every AI tool and third-party AI integration that touches your environment or your clients' data. It tracks who has what access, what data they see, when you last assessed them, and whether the assessment is still current.
It also gives you a credible artifact. When a client's compliance committee asks "what AI vendors do you use and what controls are in place?" you can pull this up or export the CSV and answer the question directly instead of scrambling.
⚠
Shadow AI is the primary risk. The biggest supply-chain exposure in most MSPs right now isn't the tools you approved. It's the ones your team started using without asking. This register is how you find and remediate those before they become an incident.
📊
Vendor Inventory
Every AI tool in one place. Name, category, access level, data shared, and current assessment status visible at a glance.
⚠
Assessment Alerts
Automatic alerts for expired assessments and those expiring within 30 days. No more relying on calendar reminders that don't fire.
📊
Portfolio Risk Score
An average across all vendor risk scores gives you a single number to track over time and present to leadership or clients.
📄
CSV Export
One-click export for evidence packages, client reports, and compliance questionnaire responses. Timestamped filename included.
⚙
Activity Log
Every add, removal, and review is timestamped in the log. Gives you a lightweight audit trail without standing up a separate system.
🔎
Filter & Search
Instantly filter by risk level or sanction status. Find all shadow AI tools or all critical-risk vendors in two clicks.
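The filtering behavior described above can be sketched as a small predicate function. The vendor record shape (`riskLevel`, `sanctioned`) is illustrative, not the dashboard's actual data model:

```javascript
// Minimal sketch of risk-level / sanction-status filtering.
// Omitting a criterion means "match everything" for that field.
function filterVendors(vendors, { riskLevel, sanctioned } = {}) {
  return vendors.filter(v =>
    (riskLevel === undefined || v.riskLevel === riskLevel) &&
    (sanctioned === undefined || v.sanctioned === sanctioned)
  );
}
```

Finding all shadow AI tools is then `filterVendors(vendors, { sanctioned: false })`.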
03
Risk Levels & What They Mean
Risk levels are your categorical assessment of the overall exposure this vendor represents. They drive alert colors, stat cards, and the portfolio score. Use these definitions consistently so the register stays meaningful over time.
Low
1–34
Minimal exposure. Limited data access, strong contractual controls, no PII or sensitive data in scope. Vendor has established security program.
Review: Annually. No escalation needed.
Medium
35–54
Moderate exposure. Some business or client data in scope. DPA in place but gaps may exist. Monitoring and compensating controls recommended.
Review: Every 6 months. Note controls in record.
High
55–74
Significant exposure. PII, client data, or sensitive business data in scope. Requires formal DPA, usage policy, and documented compensating controls.
Review: Quarterly. Escalate to security lead.
Critical
75–100
Severe exposure. Broad access to regulated data, client conversations, credentials, or source code. Immediate formal assessment and executive sign-off required.
Review: Immediately. Leadership notification required.
ⓘ
Scoring guidance: Start with the data shared. PII or regulated data adds 20+ points immediately. Broad access (Full / Read-Write) adds 15+ points. No data processing agreement (DPA) in place adds 20+ points. Shadow / unsanctioned status adds 15+ points regardless of other factors.
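The additive heuristic above, plus the band cutoffs from the table, can be sketched as two small functions. The field names (`dataTypes`, `accessLevel`, `hasDPA`, `sanctioned`) are illustrative, not the dashboard's actual schema, and the point values are the minimums the guidance suggests:

```javascript
// Sketch of the additive scoring heuristic. Field names and the
// baseline value are assumptions; adapt to your own vendor records.
function scoreVendor(v) {
  let score = 10; // baseline exposure for any AI tool in scope

  // PII or regulated data in scope adds 20+ points immediately
  if (v.dataTypes.some(t => ["pii", "regulated"].includes(t))) score += 25;

  // Broad access (Full / Read-Write) adds 15+ points
  if (["full", "read-write"].includes(v.accessLevel)) score += 15;

  // No data processing agreement in place adds 20+ points
  if (!v.hasDPA) score += 20;

  // Shadow / unsanctioned status adds 15+ points regardless
  if (!v.sanctioned) score += 15;

  return Math.min(score, 100);
}

// Map a numeric score onto the bands defined in this section.
function riskLevel(score) {
  if (score >= 75) return "Critical";
  if (score >= 55) return "High";
  if (score >= 35) return "Medium";
  return "Low";
}
```

A shadow tool with PII access, full permissions, and no DPA lands at 85, i.e. Critical, which matches the guidance that unassessed shadow AI should start conservative.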
06
Compliance Framework Alignment
The dashboard footer lists four frameworks this register directly supports. Here is what each one requires and how the register addresses it.
NIST AI RMF
AI Risk Management Framework
The NIST AI RMF Govern, Map, Measure, and Manage functions all require documented identification of AI systems in use, their risk characteristics, and ongoing monitoring. This register directly satisfies the MAP function (inventory of AI) and MEASURE (risk scores and assessment cadence).
GOVERN 1.1 / MAP 1.1 / MEASURE 2.5
CMMC 2.0
Cybersecurity Maturity Model Certification
CMMC Level 2 requires periodic assessment of security controls under the Security Assessment domain (CA.L2-3.12.1) and ongoing monitoring under System and Information Integrity (SI.L2-3.14.6). Any AI vendor processing CUI or operating within the CUI boundary must be assessed. This register is your evidence that external AI tools have been identified and evaluated.
CA.L2-3.12.1 / SI.L2-3.14.6
SOC 2
Trust Services Criteria CC9.2
CC9.2 requires that organizations assess and monitor vendor and partner risk. Auditors will ask for evidence of a vendor risk management process. This register, combined with exported CSV files and assessment notes, provides direct evidence of a functioning vendor risk program that covers AI tools specifically.
CC9.2 — Vendor Risk Management
ISO 42001
AI Management System Standard
ISO/IEC 42001 is the first international management system standard specifically for AI. Clause 8 (Operation) requires organizations to control AI-related processes, and Annex A control A.10 covers third-party and customer relationships, including external AI providers. This register directly supports those requirements along with Clause 9.1 (monitoring, measurement, analysis and evaluation).
Clause 8 / A.10 / Clause 9.1
✓
What this register alone does not do: It does not replace a formal vendor risk assessment questionnaire, a data processing agreement review, or a penetration test. It is the tracking and evidence layer that sits on top of those activities. Think of it as the process wrapper that proves the work happened and is being maintained.
07
Common Questions
How often should I reassess vendors? ▶
The default cadence is 6 months, which the dashboard sets automatically when you mark a vendor reviewed. Critical and High risk vendors should be reviewed quarterly. Low risk vendors may be reviewed annually. The key is consistency. A register reviewed quarterly on a known schedule is far more defensible than one reviewed sporadically. If a vendor has a significant change, such as a new data sharing policy, a security incident, or a major product update, trigger an out-of-cycle review and note it in the record.
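The cadence and 30-day alert window described above can be sketched as a status check. The cadence table and function names are assumptions drawn from the review guidance in this document, not the dashboard's actual implementation:

```javascript
// Sketch of the assessment-alert logic: given a last-review date and
// a risk level, classify the assessment as current, expiring soon
// (within 30 days), or expired. Cadences follow the review guidance:
// Critical/High quarterly, Medium every 6 months, Low annually.
const CADENCE_MONTHS = { Critical: 3, High: 3, Medium: 6, Low: 12 };

function assessmentStatus(lastReviewed, riskLevel, today = new Date()) {
  const due = new Date(lastReviewed);
  due.setMonth(due.getMonth() + (CADENCE_MONTHS[riskLevel] ?? 6));

  const msPerDay = 24 * 60 * 60 * 1000;
  const daysLeft = Math.floor((due - today) / msPerDay);

  if (daysLeft < 0) return "expired";
  if (daysLeft <= 30) return "expiring";
  return "current";
}
```

An out-of-cycle review simply resets `lastReviewed` to the review date, which is why noting the trigger in the record matters for the audit trail.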
What counts as an AI vendor? ▶
Any tool that uses machine learning or AI to process data you or your clients own. This includes: LLMs you call via API (OpenAI, Anthropic, Google), copilot-style tools built into software you already use (Microsoft Copilot, Grammarly, GitHub Copilot), AI features added to existing platforms (Zendesk AI, HubSpot AI, ConnectWise AI), meeting transcription tools (Otter.ai, Fireflies), and any internal agents or automations you build on top of AI platforms. If it's making inferences about data and that data has any business sensitivity, it goes in the register.
Where is the data stored? Is it secure? ▶
Currently, data is stored in your browser's localStorage, meaning it persists on the machine where you entered it but does not sync across devices. This is intentional for simplicity in an initial deployment. For a team environment, the recommended path is to back the register with a SharePoint list. The dashboard structure is designed so the vendor array can be replaced with a SharePoint REST API call without changing the UI. For now, use the CSV export to create a backup any time significant changes are made.
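The persistence layer described above can be sketched with an injectable storage object, which is also what makes the later SharePoint swap possible without touching the UI. The storage key and function names are illustrative, not the dashboard's actual code:

```javascript
// Sketch of localStorage-backed persistence. `storage` is injected so
// the same interface could later wrap a SharePoint-backed store; in
// the browser you would pass window.localStorage.
const STORE_KEY = "ai-vendor-register"; // assumed key name

function loadVendors(storage) {
  const raw = storage.getItem(STORE_KEY);
  return raw ? JSON.parse(raw) : []; // empty register on first run
}

function saveVendors(storage, vendors) {
  storage.setItem(STORE_KEY, JSON.stringify(vendors));
}
```

Because localStorage is per-browser and per-machine, the CSV export remains the only cross-device backup until a shared backend is in place.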
What do I do when I find a shadow AI tool? ▶
Add it immediately with Shadow / Unsanctioned status. Set the risk score conservatively (assume the worst until assessed). Document what you know in the Notes field, including who was using it and what data may have been shared. Then begin the assessment process. If the tool is genuinely high risk, notify the relevant stakeholders and determine whether it should be blocked, approved with controls, or allowed on a provisional basis while the assessment runs. Do not delete it or pretend you didn't find it. The documented discovery and response is what demonstrates a functioning governance program.
Can I show this to clients? ▶
The dashboard is built for internal ops and committee review. You can show it to clients in a QBR or governance meeting context to demonstrate that you have a structured AI vendor management process. For a client-facing deliverable, the CSV export cleaned up in Excel or the print view is typically more appropriate than showing the live tool. If you want a polished client-facing version, a separate read-only view without the add/edit controls would be the right approach.
How does the portfolio risk score work? ▶
The portfolio risk score shown in the ring is a simple arithmetic mean of all vendor risk scores currently in the register. It does not apply weighting based on vendor importance or data volume. This keeps it transparent and easy to explain. A score trending upward month over month is a signal that either higher-risk vendors are being added or existing vendors are being reassessed with updated (higher) scores. Either way it is worth investigating.
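The unweighted mean described above amounts to a one-liner; a sketch, assuming each vendor record carries a `riskScore` field:

```javascript
// Portfolio score: plain arithmetic mean of all vendor risk scores,
// rounded for display. No weighting by vendor importance or volume.
function portfolioScore(vendors) {
  if (vendors.length === 0) return 0; // empty register shows 0
  const total = vendors.reduce((sum, v) => sum + v.riskScore, 0);
  return Math.round(total / vendors.length);
}
```

Note one consequence of the unweighted design: adding several low-risk vendors will pull the portfolio score down even if your highest-risk vendor is unchanged, so read the trend alongside the Critical/High counts.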