AI Vendor & Third-Party Risk Dashboard
Documentation & Field Guide

This page explains what the dashboard is, why it exists, how to use every feature, and how it maps to compliance frameworks your clients and committees will ask about. Read this once and you'll know exactly what to do with any vendor that shows up in your environment.

Tags: Supply Chain Risk · AI Governance · Compliance Evidence · Shadow AI Detection
01 What This Dashboard Is For

Every MSP team is using AI tools. Some of those tools are sanctioned, assessed, and covered by a data processing agreement. Many are not. A tech pastes ticket notes into ChatGPT. Someone on the help desk starts using Notion AI on their personal account. A vendor adds an AI summarization feature to their product and nobody notices.

This dashboard is your central register for every AI tool and third-party AI integration that touches your environment or your clients' data. It tracks who has what access, what data they see, when you last assessed them, and whether the assessment is still current.

It also gives you a credible artifact. When a client's compliance committee asks "what AI vendors do you use and what controls are in place?" you can pull this up or export the CSV and answer the question directly instead of scrambling.

Shadow AI is the primary risk. The biggest supply-chain exposure in most MSPs right now isn't the tools you approved. It's the ones your team started using without asking. This register is how you find and remediate those before they become an incident.
Vendor Inventory: Every AI tool in one place. Name, category, access level, data shared, and current assessment status visible at a glance.

Assessment Alerts: Automatic alerts for expired assessments and those expiring within 30 days. No more relying on calendar reminders that don't fire.

Portfolio Risk Score: The average risk score across all vendors gives you a single number to track over time and present to leadership or clients.

CSV Export: One-click export for evidence packages, client reports, and compliance questionnaire responses. Timestamped filename included.

Activity Log: Every add, removal, and review is timestamped in the log, giving you a lightweight audit trail without standing up a separate system.

Filter & Search: Instantly filter by risk level or sanction status. Find all shadow AI tools or all critical-risk vendors in two clicks.
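The filter behavior described above amounts to a simple predicate over the vendor list. Here is a minimal sketch; the field names (`riskLevel`, `status`) are assumptions for illustration, not the dashboard's actual internals:

```javascript
// Illustrative sketch: filtering the register by risk level and/or status.
// Field names are assumed, not the dashboard's real schema.
function filterVendors(vendors, { riskLevel, status } = {}) {
  return vendors.filter(v =>
    (riskLevel === undefined || v.riskLevel === riskLevel) &&
    (status === undefined || v.status === status)
  );
}

const sample = [
  { name: "OpenAI (ChatGPT / API)", riskLevel: "High", status: "Sanctioned" },
  { name: "Otter.ai", riskLevel: "Critical", status: "Shadow / Unsanctioned" },
];

// "Find all shadow AI tools" is one predicate away:
console.log(filterVendors(sample, { status: "Shadow / Unsanctioned" }).length); // 1
```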
02 Register Fields Explained

Each vendor record captures the following fields. Required fields must be filled when adding a new entry. All others are strongly recommended for compliance purposes.

Field | Type | Required | Description & Guidance
Vendor / Tool Name | Text | Yes | Full product name including provider. Example: "OpenAI (ChatGPT / API)", not just "ChatGPT". Be specific enough that a new hire knows exactly what product this refers to.
Category | Dropdown | Yes | Used for the sidebar breakdown and filtering. Choose the closest match: LLM / Foundation Model, AI Platform, Copilot / Assistant, Plugin / Integration, Custom Agent, Data / Analytics, Security AI.
Access Level | Dropdown | Yes | What the vendor's tool can do to your data. API Only means you call their endpoint and control what you send. Read means it can pull data but not modify. Read/Write means it can change or create. Full Access means it has broad system-level permissions. When in doubt, choose the higher level.
Risk Score | Number (1-100) | Yes | Your assessed risk score from your AI Vendor Assessment template. This drives the portfolio risk ring. If you haven't done a formal assessment yet, use 50 as a placeholder and flag the record for review.
Risk Level | Dropdown | Yes | Your categorical rating: Low, Medium, High, or Critical. This should align with your risk score but is set independently so you can apply judgment. A score of 68 might be Medium or High depending on context.
Status | Dropdown | Yes | Sanctioned: formally approved with DPA or equivalent in place. Under Review: assessment in progress or pending renewal. Shadow / Unsanctioned: detected in use without approval. Decommissioned: no longer in use, kept for record.
Last Assessed | Date | Recommended | Date the last formal vendor assessment was completed. Used to calculate currency. Defaults to today when adding a new vendor.
Next Assessment Due | Date | Recommended | Date when reassessment is required. Defaults to 6 months from today. The dashboard calculates days remaining and fires alerts when this is within 30 days or past due.
Data Shared | Tags (comma-separated) | Recommended | What categories of data flow to this vendor. Be specific. Examples: Ticket Data, PII, Client Names, Source Code, Email Content, Meeting Audio, API Keys. This is the field auditors look at when evaluating data exposure.
Notes | Text (long) | Recommended | Assessment findings, compensating controls, exceptions, and remediation status. Write here as if someone else is going to need to understand this record in six months without asking you.
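Put together, a complete register entry might look like the following. This is a hypothetical shape based on the fields above; the dashboard's internal storage format may differ:

```javascript
// Hypothetical vendor record mirroring the register fields above.
// Property names are illustrative assumptions.
const vendor = {
  name: "OpenAI (ChatGPT / API)",        // Vendor / Tool Name (required)
  category: "LLM / Foundation Model",    // Category (required)
  accessLevel: "API Only",               // Access Level (required)
  riskScore: 62,                         // 1-100, from your assessment (required)
  riskLevel: "High",                     // categorical rating (required)
  status: "Sanctioned",                  // Sanctioned / Under Review / Shadow / Decommissioned
  lastAssessed: "2024-05-01",            // recommended
  nextDue: "2024-11-01",                 // defaults to 6 months out
  dataShared: ["Ticket Data", "Client Names"], // comma-separated tags in the UI
  notes: "DPA signed; prompts sanitized before API calls.",
};

console.log(vendor.name, vendor.riskLevel);
```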
03 Risk Levels & What They Mean

Risk levels are your categorical assessment of the overall exposure this vendor represents. They drive alert colors, stat cards, and the portfolio score. Use these definitions consistently so the register stays meaningful over time.

Low
1 — 34
Minimal exposure. Limited data access, strong contractual controls, no PII or sensitive data in scope. Vendor has established security program.
Review: Annually. No escalation needed.
Medium
35 — 54
Moderate exposure. Some business or client data in scope. DPA in place but gaps may exist. Monitoring and compensating controls recommended.
Review: Every 6 months. Note controls in record.
High
55 — 74
Significant exposure. PII, client data, or sensitive business data in scope. Requires formal DPA, usage policy, and documented compensating controls.
Review: Quarterly. Escalate to security lead.
Critical
75 — 100
Severe exposure. Broad access to regulated data, client conversations, credentials, or source code. Immediate formal assessment and executive sign-off required.
Review: Immediately. Leadership notification required.
Scoring guidance: Start with the data shared. PII or regulated data adds 20+ points immediately. Broad access (Full or Read/Write) adds 15+ points. No DPA (data processing agreement) in place adds 20+ points. Shadow / unsanctioned status adds 15+ points regardless of other factors.
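The additive guidance above can be sketched as a first-pass estimator. The baseline of 10 and the exact increments are illustrative assumptions; your formal assessment template remains the source of truth:

```javascript
// Rough additive scoring sketch following the guidance above.
// The baseline value and increments are assumptions for illustration only.
function estimateRiskScore({ hasPII, broadAccess, noDPA, isShadow } = {}) {
  let score = 10;               // assumed baseline for any external AI vendor
  if (hasPII) score += 20;      // PII or regulated data in scope
  if (broadAccess) score += 15; // Full or Read/Write access
  if (noDPA) score += 20;       // no data processing agreement
  if (isShadow) score += 15;    // shadow / unsanctioned usage
  return Math.min(score, 100);  // cap at the 1-100 scale
}

// A shadow tool with PII, broad access, and no DPA lands deep in Critical:
console.log(estimateRiskScore({ hasPII: true, broadAccess: true, noDPA: true, isShadow: true })); // 80
```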
04 Access Level Definitions

Access level describes what the vendor's tool can technically do within your environment or with your data. Choose the highest level that applies, not the lowest.

API Only
You call their endpoint. You control exactly what gets sent in each request. The vendor processes what you submit and returns a response. This is the most controllable access pattern. Example: calling OpenAI or Anthropic via API with a sanitized prompt.
Read
The vendor's tool can pull data from your systems but cannot modify or create records. Example: an AI analytics tool that reads ticket history to identify patterns. Lower risk than write access, but data exposure still applies.
Read / Write
The tool can read and modify data in your environment. This includes AI assistants that draft and send emails, update records, create tickets, or modify documents. Requires strong access controls and audit logging.
Full Access
Broad system-level permissions. The tool can read, write, execute, or administer across multiple areas of your environment. Highest risk category. Requires formal assessment, DPA, and explicit executive approval before use.
05 Day-to-Day Workflow

Here is the standard operating procedure for maintaining the register. This should take less than 15 minutes per week in steady state.

1
Check the Alert Banners Weekly
Open the dashboard each Monday. The red and orange banners at the top tell you everything urgent. Expired assessments are a compliance gap that needs immediate action. Vendors within 30 days are your planning queue for the week.
▶ Red banner = expired. Take action this week.
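The expired versus expiring-within-30-days split the banners show can be computed along these lines. This is a sketch, not the dashboard's actual code:

```javascript
// Sketch of the banner logic: classify a vendor by days until next assessment.
const MS_PER_DAY = 24 * 60 * 60 * 1000;

function assessmentStatus(nextDueISO, today = new Date()) {
  const daysLeft = Math.ceil((new Date(nextDueISO) - today) / MS_PER_DAY);
  if (daysLeft < 0) return "expired";    // red banner: act this week
  if (daysLeft <= 30) return "expiring"; // orange banner: planning queue
  return "current";                      // no alert
}

console.log(assessmentStatus("2000-01-01")); // "expired"
```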
2
Add New Vendors Immediately When Discovered
Any time a new AI tool is introduced, whether officially adopted or detected as shadow AI, add it to the register the same day. Use "Shadow / Unsanctioned" status and set risk score conservatively high until the assessment is complete. An incomplete record is better than no record.
▶ Shadow AI found? Add it now. Assess it this week.
3
Mark Reviewed After Each Assessment
Open the vendor detail and click "Mark Reviewed Today" after completing an assessment. This updates the last assessed date to today and automatically pushes the next due date out 6 months. Add any new findings or control changes to the Notes field before closing.
▶ Always update Notes when marking reviewed.
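Under the hood, "Mark Reviewed Today" effectively does something like the following. Field names are assumptions; the six-month push matches the default cadence described above:

```javascript
// Sketch of "Mark Reviewed Today": stamp today as the assessment date and
// push the next due date out six months. Field names are assumed.
function markReviewed(vendor, today = new Date()) {
  const next = new Date(today);
  next.setMonth(next.getMonth() + 6); // default 6-month reassessment cadence
  return {
    ...vendor,
    lastAssessed: today.toISOString().slice(0, 10), // YYYY-MM-DD
    nextDue: next.toISOString().slice(0, 10),
  };
}

const updated = markReviewed({ name: "Otter.ai" }, new Date("2024-01-15"));
console.log(updated.lastAssessed, updated.nextDue); // 2024-01-15 2024-07-15
```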
4
Export for Evidence Before Reviews
Before any compliance review, committee meeting, or client QBR where AI governance comes up, use the Export CSV button. The file is timestamped and includes every field. Attach it to your evidence package or share it directly. This is your proof that the process exists and is being followed.
▶ Export = your compliance artifact. Do it before every review.
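An export like the one described above boils down to joining quoted fields and stamping the filename with the date. A minimal sketch, with illustrative column names:

```javascript
// Sketch of a CSV export with a timestamped filename. Column names are
// assumptions; quoting doubles embedded quotes per common CSV convention.
function toCSV(vendors) {
  const headers = ["name", "category", "riskScore", "status"];
  const rows = vendors.map(v =>
    headers.map(h => `"${String(v[h] ?? "").replace(/"/g, '""')}"`).join(",")
  );
  return [headers.join(","), ...rows].join("\n");
}

function exportFilename(now = new Date()) {
  return `ai-vendor-register-${now.toISOString().slice(0, 10)}.csv`;
}

console.log(toCSV([{ name: "Otter.ai", category: "Copilot / Assistant", riskScore: 70, status: "Shadow / Unsanctioned" }]));
console.log(exportFilename(new Date("2024-05-01"))); // ai-vendor-register-2024-05-01.csv
```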
5
Decommission, Don't Delete
When a vendor is removed from use, change the status to Decommissioned rather than deleting the record. Auditors sometimes ask about vendors that were in use during a past period. A Decommissioned record with notes about why it was removed is cleaner than a gap in your history.
▶ Use Decommissioned status. Only hard-delete if it was entered in error.
06 Compliance Framework Alignment

The dashboard footer lists four frameworks this register directly supports. Here is what each one requires and how the register addresses it.

NIST AI RMF
AI Risk Management Framework
The NIST AI RMF Govern, Map, Measure, and Manage functions all require documented identification of AI systems in use, their risk characteristics, and ongoing monitoring. This register directly satisfies the MAP function (inventory of AI) and MEASURE (risk scores and assessment cadence).
GOVERN 1.1 / MAP 1.1 / MEASURE 2.5
CMMC 2.0
Cybersecurity Maturity Model Certification
CMMC Level 2 requires periodic security control assessment under CA.L2-3.12.1 and system monitoring under SI.L2-3.14.6. Any AI vendor processing CUI or operating within the CUI boundary must be assessed. This register is your evidence that external AI tools have been identified and evaluated.
CA.L2-3.12.1 / SI.L2-3.14.6
SOC 2
Trust Services Criteria CC9.2
CC9.2 requires that organizations assess and monitor vendor and partner risk. Auditors will ask for evidence of a vendor risk management process. This register, combined with exported CSV files and assessment notes, provides direct evidence of a functioning vendor risk program that covers AI tools specifically.
CC9.2 — Vendor Risk Management
ISO 42001
AI Management System Standard
ISO 42001 is the first international standard specifically for AI management systems. Clause 8 requires organizations to control AI-related processes and manage external AI providers. This register directly supports Clause 8.4 (AI system supply chain) and Clause 9 (performance evaluation).
Clause 8.4 / Clause 9.1
What this register alone does not do: It does not replace a formal vendor risk assessment questionnaire, a data processing agreement review, or a penetration test. It is the tracking and evidence layer that sits on top of those activities. Think of it as the process wrapper that proves the work happened and is being maintained.
07 Common Questions
How often should I reassess vendors?
The default cadence is 6 months, which the dashboard sets automatically when you mark a vendor reviewed. Critical and High risk vendors should be reviewed quarterly. Low risk vendors may be reviewed annually. The key is consistency. A register reviewed quarterly on a known schedule is far more defensible than one reviewed sporadically. If a vendor has a significant change, such as a new data sharing policy, a security incident, or a major product update, trigger an out-of-cycle review and note it in the record.
What counts as an AI vendor?
Any tool that uses machine learning or AI to process data you or your clients own. This includes: LLMs you call via API (OpenAI, Anthropic, Google), copilot-style tools built into software you already use (Microsoft Copilot, Grammarly, GitHub Copilot), AI features added to existing platforms (Zendesk AI, HubSpot AI, ConnectWise AI), meeting transcription tools (Otter.ai, Fireflies), and any internal agents or automations you build on top of AI platforms. If it's making inferences about data and that data has any business sensitivity, it goes in the register.
Where is the data stored? Is it secure?
Currently, data is stored in your browser's localStorage, meaning it persists on the machine where you entered it but does not sync across devices. This is intentional for simplicity in an initial deployment. For a team environment, the recommended path is to back the register with a SharePoint list. The dashboard structure is designed so the vendor array can be replaced with a SharePoint REST API call without changing the UI. For now, use the CSV export to create a backup any time significant changes are made.
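The persistence layer described above can be sketched like this. The storage key is an assumption, and `storage` is injectable so the same functions work in the browser (passing `window.localStorage`) or in a test with an in-memory stand-in:

```javascript
// Sketch of localStorage-backed persistence. Key name is an assumption.
const KEY = "ai-vendor-register"; // hypothetical storage key

function saveVendors(vendors, storage) {
  storage.setItem(KEY, JSON.stringify(vendors));
}

function loadVendors(storage) {
  const raw = storage.getItem(KEY);
  return raw ? JSON.parse(raw) : []; // empty register on first load
}

// In-memory stand-in for window.localStorage (same getItem/setItem contract):
const mem = {
  data: {},
  setItem(k, v) { this.data[k] = v; },
  getItem(k) { return this.data[k] ?? null; },
};

saveVendors([{ name: "Otter.ai" }], mem);
console.log(loadVendors(mem).length); // 1
```

Swapping the `storage` argument for a SharePoint-backed adapter is the migration path the section above describes.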
What do I do when I find a shadow AI tool?
Add it immediately with Shadow / Unsanctioned status. Set the risk score conservatively (assume the worst until assessed). Document what you know in the Notes field, including who was using it and what data may have been shared. Then begin the assessment process. If the tool is genuinely high risk, notify the relevant stakeholders and determine whether it should be blocked, approved with controls, or allowed on a provisional basis while the assessment runs. Do not delete it or pretend you didn't find it. The documented discovery and response is what demonstrates a functioning governance program.
Can I show this to clients?
The dashboard is built for internal ops and committee review. You can show it to clients in a QBR or governance meeting context to demonstrate that you have a structured AI vendor management process. For a client-facing deliverable, the CSV export cleaned up in Excel or the print view is typically more appropriate than showing the live tool. If you want a polished client-facing version, a separate read-only view without the add/edit controls would be the right approach.
How does the portfolio risk score work?
The portfolio risk score shown in the ring is a simple arithmetic mean of all vendor risk scores currently in the register. It does not apply weighting based on vendor importance or data volume. This keeps it transparent and easy to explain. A score trending upward month over month is a signal that either higher-risk vendors are being added or existing vendors are being reassessed with updated (higher) scores. Either way it is worth investigating.
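Because it is an unweighted mean, the calculation fits in a few lines. A sketch, assuming each record exposes a numeric `riskScore`:

```javascript
// The portfolio ring is a plain arithmetic mean of all vendor risk scores.
// No per-vendor weighting is applied, which keeps the number easy to explain.
function portfolioScore(vendors) {
  if (vendors.length === 0) return 0; // empty register: nothing to average
  const total = vendors.reduce((sum, v) => sum + v.riskScore, 0);
  return Math.round(total / vendors.length);
}

console.log(portfolioScore([{ riskScore: 40 }, { riskScore: 60 }, { riskScore: 80 }])); // 60
```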