Knowledge Base · MSP Operations
MSP Command Center
Engineering Triage Console
Complete reference for understanding, operating, and extending the MSP Command Center — a three-console real-time operations dashboard covering alert triage, ticket management, engineer workload, and multi-week analytics. Built for MSP engineers and SOC staff who need situational awareness across all clients and platforms from a single screen.
3 CONSOLES CHART.JS 4.4 30s AUTO-REFRESH LIVE INCIDENT TICKER NINJA · AUVIK · S1 · FORTI · VEEAM · PSA DEMO DATA · RANDOMIZED

The MSP Command Center is a self-contained single-file HTML operations dashboard designed for MSP engineering and NOC teams. It provides a unified view of client health, active incidents, open tickets, engineer workload, and platform integration status across an entire client portfolio.

The tool runs as a static HTML file — no server required. Data is generated in-browser on each load and refresh cycle, simulating a realistic MSP environment with 10 clients, 5 engineers, and 6 integrated platforms. When wired to live APIs, the fetchData() function becomes the integration point for real operational data.

Primary Use Cases
  • NOC Wall Display: C1 on a large monitor for ambient situational awareness across all clients.
  • Morning Standup: C3 Analytics for weekly trends, SLA status, and MTTA/MTTR health.
  • Active Triage: C2 for ticket prioritization, engineer load balancing, and client drill-down during incident response.
  • QBR Prep: C3 charts export context for client-facing reporting.
What It Replaces
  • Context switching between NinjaRMM, Auvik, SentinelOne, PSA, and backup portals.
  • Manual status sheets tracking open tickets, P1 counts, and SLA risk per client.
  • Verbal standups — the topbar stats surface the same information at a glance.
  • Spreadsheet health tracking — the client risk list and score bars replace manual risk matrices.
Demo vs Live: In its current form, all data is generated by makeData() using randomized values on every load and refresh. The data is realistic in structure and proportion but not live. See Section 15 for the integration pattern to connect real API sources.

The dashboard is organized as three vertically stacked console views behind a shared topbar and tab strip. Only one console is visible at a time. Switching is instant with no page reload — CSS display:none/block toggled by switchConsole(n).
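The show/hide toggle can be sketched as follows. For self-containment the console panels are passed in as plain objects with a .style property; the real switchConsole(n) looks up the console divs with document.getElementById and also triggers the lazy C3 render:

```javascript
// Sketch of the console-switch logic (illustrative, not the exact source).
// Exactly one console panel is visible at a time.
function switchConsole(n, consoles) {
  consoles.forEach(function (c, idx) {
    c.style.display = (idx + 1 === n) ? 'block' : 'none';
  });
  return n; // the dashboard stores the active console number in curCon
}
```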

C1 · TAB 1
OPS HEALTH
  • Alert Summary 2×2
  • 24h Alert Trend Chart
  • System Health Bars
  • Platform Health Pie
  • Client Risk Overview
  • Risk Distribution Donut
  • Active Incident Feed
C2 · TAB 2
TRIAGE
  • Ticket Queue + Table
  • Priority Breakdown Chart
  • Engineer Workload Panel
  • Open/Closed per Engineer
  • Client Incident Drill-Down
  • API Integration Status
  • Quick Actions Panel
C3 · TAB 3
ANALYTICS
  • 7-Day Stacked Alert Volume
  • SLA Performance Bars
  • MTTA/MTTR 14-Day Trend
  • Incident Heatmap by Hour
  • Ticket Flow (Opened/Closed)
  • Client Health Distribution
Rendering Pipeline
1
DOMContentLoaded → triggerRefresh()
On page load, triggerRefresh() is called immediately. This fetches (or generates) data, then calls renderC1(), renderC2(), and starts the auto-refresh countdown.
2
fetchData() → makeData()
In demo mode, fetchData() resolves immediately by calling makeData(), which constructs the entire state object: platforms, incidents, clients, tickets, engineers, charts data, and SLA metrics.
3
renderC1() / renderC2()
C1 and C2 are rendered on every refresh cycle. C3 is only rendered when the user switches to it (lazy render via switchConsole(3)) and re-rendered on each subsequent refresh while visible.
4
Chart.js Initialization
Each chart is created via mkC(id, type, data, opts). This function destroys any existing chart on that canvas before creating a new one, preventing ghost canvas memory leaks on repeated refreshes.
5
Auto-Refresh Countdown
After rendering, startCountdown() starts a 30-second interval. At zero, triggerRefresh() is called again automatically, regenerating all data and re-rendering all panels.
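Step 4 hinges on mkC()'s destroy-before-create behavior. A minimal sketch of that pattern, with the Chart constructor injected as a parameter so the snippet is self-contained (the real function calls Chart.js's new Chart(canvas, config) directly):

```javascript
// Sketch of the destroy-before-create pattern behind mkC().
var CHARTS = {};
function mkC(id, type, data, opts, ChartCtor) {
  if (CHARTS[id]) CHARTS[id].destroy(); // free the old instance to avoid ghost-canvas leaks
  CHARTS[id] = new ChartCtor(id, { type: type, data: data, options: opts });
  return CHARTS[id];
}
```

Because every refresh cycle re-runs the render functions, skipping the destroy() call would stack a new Chart.js instance on the same canvas every 30 seconds.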

Several UI elements persist across all three consoles: the topbar, the tab strip, and the incident ticker. These are always visible regardless of which console is active.

Topbar — Live Stats Strip

The topbar contains six real-time operational stats that update on every refresh cycle. These are the most important numbers for a shift lead to see at a glance without clicking into any console.

Stat | ID | Color | Source in Data | Meaning
P1 Crit | tb-crit | RED | Tickets with priority p1 | Count of open P1 tickets right now. Zero is the only acceptable normal state.
P2 Warn | tb-warn | ORANGE | Tickets with priority p2 | High-priority tickets needing same-day resolution.
Open Tix | tb-tickets | CYAN | data.openTickets | Total open ticket count across all clients and engineers.
SLA % | tb-sla | GREEN | data.sla.overall | Overall SLA compliance rate. Below 90% triggers review.
MTTA | tb-mtta | CYAN | data.mtta (minutes) | Mean Time To Acknowledge. Target <15 min for P1/P2.
Clients Red | tb-risk | RED | Clients where risk === 'crit' | Count of clients with a risk score below 50. Requires immediate attention.
Tab Strip

The tab strip sits below the topbar and provides console navigation. Each tab shows a live badge count:

  • C1 · OPS Health: Red badge showing total critical alerts (tCrit). Pulses when non-zero.
  • C2 · Triage: Red badge showing P1+P2 ticket count. Stays visible to surface triage urgency from any console.
  • C3 · Analytics: Static cyan "Charts" badge — no live count, charts are reference data.
  • Refresh Countdown: Right-aligned in the tab strip. Shows seconds remaining until next auto-refresh with an animated depletion bar.
Incident Ticker

The red-bordered scrolling ticker sits between the tab strip and the console content area. It displays the most recent 8 incidents from the current data set, auto-scrolling in a continuous loop.

  • Items are color-coded: red for crit, orange for warn, green for info.
  • Format per item: SEVERITY · PLATFORM · ISSUE TITLE · CLIENT
  • Hover pauses the scroll — CSS animation-play-state:paused on hover. Use this when a specific event needs closer inspection.
  • The ticker content is rebuilt on every refresh cycle via buildTicker().
Ticker Legibility on NOC Displays: The ticker font size is fixed at 10px. On large wall displays, consider increasing the ticker height and font size in the CSS. The animation duration is 60 seconds for a full cycle; reduce it to 30s for a faster scroll, or increase it for more readable pacing.
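The ticker rebuild can be sketched as a pure markup builder returning the HTML for the eight most recent incidents. The tk-* class names are assumed for illustration; the real buildTicker() writes directly into the ticker element:

```javascript
// Sketch of the ticker markup builder (illustrative class names).
// Format per item: SEVERITY · PLATFORM · ISSUE TITLE · CLIENT
function buildTicker(incidents) {
  return incidents.slice(0, 8).map(function (i) {
    return '<span class="tk-' + i.sev + '">' +
      i.sev.toUpperCase() + ' · ' + i.source + ' · ' + i.title + ' · ' + i.client +
      '</span>';
  }).join('');
}
```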

C1 is the default landing view and the highest-density panel in the dashboard. It is designed to give a shift lead or NOC analyst complete situational awareness in under 10 seconds. The layout uses a 3-column grid: alert summary on the left spanning two rows, system health and client risk in the center/right, and incident feed spanning the full width at the bottom.

Alert Summary Panel — Left Column

The large left panel contains a 2×2 alert category grid with giant numbers at 56px, a scrollable alert item list below the grid, and a 24-hour trend sparkline at the bottom.

Quadrant | Color | Source Field | Click Action
Critical | RED | data.tCrit | openAlertDetail('Critical') — drawer with 5 sample critical alerts
Security | ORANGE | data.secA | openAlertDetail('Security') — drawer with 5 sample security alerts
Network | CYAN | data.netA | openAlertDetail('Network') — drawer with network alert samples
Backup | YELLOW | data.bkpF | openAlertDetail('Backup') — drawer with backup failure samples

Below the 2×2 grid, individual alert items appear as rows with dot color, label, description, separator dash, and count. These are also clickable and open the same alert category drawer.

The 24h Alert Trend chart at the bottom of this panel is a stacked line chart (Chart.js) showing critical, warning, and info alert counts by hour over the last 24 hours. It provides immediate visual pattern recognition — for example, a spike at 09:00 every morning suggests a scheduled task failure pattern.

System Health Panel — Center Top

This panel renders one row per integrated platform (NinjaRMM, Auvik, SentinelOne, FortiGate, Veeam, PSA). Each row contains:

  • A 3px left border (green/orange/red) indicating the platform's current health state.
  • A large score number (55–99 range, randomized) with color matching health state.
  • The platform icon and name with status label (OK / WARN / CRIT) below.
  • A horizontal bar showing the score visually relative to 100.
  • Right-aligned meta: uptime percentage and "last seen" timestamp.
Pulsing CRIT Panels: Panels where any platform is in CRIT state receive the pulsecrit CSS animation class — a red glow ring that pulses every 2.5 seconds. This draws immediate visual attention to degraded systems without requiring the analyst to actively scan each row.
Platform Health Pie — Center Top Right

The right side of the System Health panel contains a live-refreshing pie chart showing the ratio of Operational / Degraded / Critical platforms. The center label displays the percentage of healthy platforms as a large number. A countdown badge shows when the pie will next refresh (independent of the main refresh cycle — it runs on its own 30-second startPieCountdown() timer started at page load).

Client Risk Overview — Center Bottom

This wide panel spans two columns and shows the full client portfolio sorted by risk score (worst first). Three headline KPIs appear above the list:

Avg Risk: Average risk score across all 10 clients, calculated as the mean of all client score values. Displayed in orange — an average above 60 is considered healthy; below 50 requires review.
With Incidents: Count of clients with at least one active security incident (secInc > 0). Shown in red. Any non-zero count warrants immediate C2 triage review.
Top Alerting: The client with the highest open alert count (openAlerts), name shown in cyan below the count. This is your fire — the client generating the most noise right now.

Each client row in the list shows rank, name, risk score, a mini score bar, and warning/critical badges for backup failures and security incidents. Clicking any row opens the Client Detail Drawer with a 4-cell stat grid and risk factor bars.

The Risk Distribution Donut on the right side of this panel shows the count of clients in each score bucket: <30 (dark red), 30–50 (red), 50–70 (orange), 70–85 (light green), 85+ (bright green). Use this to communicate portfolio health posture at a glance.

Active Incident Feed — Full Width Bottom

The bottom-spanning panel shows all active incidents from the current data set in chronological order (most recent first). Each row has a colored left bar, incident title, source platform badge, client name, timestamp, and severity badge. Clicking any row opens the Incident Detail Drawer with full context and action buttons.

C2 is the working console — where engineers actually manage their queue. It uses a two-column layout: the full ticket queue on the left, and a stacked set of right-column panels covering engineer workload, client detail drill-down, and integration status.

Ticket Queue Panel

The ticket queue is the core of C2. It contains four KPIs, a priority breakdown bar chart, and a scrollable sortable table.

KPI | Color | Source | Normal Range
P1 Open | RED | Count of p1 tickets | 0 — any P1 should already be in active response
P2 Open | ORANGE | Count of p2 tickets | 0–2 on a normal shift
Queue | CYAN | Total open tickets | 8–15 for a 5-engineer team
Closed | GREEN | data.closedToday | Should track closely with the opened count

The Priority Breakdown Bar Chart is a compact horizontal bar chart showing the split of open tickets across P1/P2/P3/P4. Each priority uses its color token. The chart is 65px tall — just enough for visual proportion without taking up table space.

The Ticket Table below the chart has columns: Priority, Client, Issue, Engineer, SLA, Status. Each row is clickable and opens the Ticket Detail Drawer. The SLA column shows time remaining with color coding: green (healthy), orange (under 1 hour), red (breached, shown as negative).

🔴 SLA Breach Indicator: When slaMin < 0, the SLA cell displays BREACHED in red. This means the ticket's response or resolution SLA has already expired. These should be visible to a shift lead immediately — consider adding an audible alert or Slack webhook for breached P1s in a live integration.
Engineer Workload Panel

The engineer workload panel shows three KPIs and a per-engineer bar visualization.

Available: Engineers with open < 3 tickets. Green. These engineers can take new escalations.
Busy: Engineers with 3 ≤ open < 6. Orange. Productive load — monitor but don't reassign yet.
Overloaded: Engineers with open ≥ 6. Red. Tickets should be redistributed immediately. Use the Ticket drawer's Reassign action.

Each engineer row shows name, a load bar (width proportional to open ticket count vs max), a heavy/mid/ok badge, and an OPEN/CLOSED count in monospace. The grouped bar chart below the list shows open vs closed per engineer — a quick visual for identifying both overloaded engineers and strong performers closing more than they open.
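The three load tiers reduce to a small classifier. The ok/mid/heavy labels match the engineer load field in the data model; the function itself is an illustrative sketch, not the dashboard's exact code:

```javascript
// Classify an engineer's load from their open ticket count.
function engLoad(open) {
  if (open >= 6) return 'heavy'; // Overloaded: redistribute tickets
  if (open >= 3) return 'mid';   // Busy: monitor, don't reassign yet
  return 'ok';                   // Available for new escalations
}
```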

Client Incident Details

The dropdown in this panel lets an engineer select any client to instantly view a 4-cell stat grid (Risk Score, Open Alerts, Backup Fails, Security Incidents) and a device status list showing the first 6 devices sorted by criticality. This eliminates the need to open a separate portal to check on a client during a triage call.

API Status & Automation Panel

Split into two columns — Integrations on the left and Quick Actions on the right.

The Integrations column shows a live status dot for each connected platform (NinjaRMM, Auvik, SentinelOne, FortiGate, PSA). Status is randomized in demo mode, with a ~10% chance of WARN and ~5% chance of CRIT on each refresh. A pulsing red CRIT dot indicates the platform is unreachable or returning error responses.

The Quick Actions column provides common workflow shortcuts:

Action | Icon | Fires
Create Ticket | 🎫 | toast('▶ Creating ticket…') — wire to PSA API in live mode
Acknowledge Alert | – | toast('▶ Acknowledging…') — wire to RMM acknowledge endpoint
Run Script | – | toast — wire to RMM script execution API
Escalate | – | toast — wire to PSA escalation or PagerDuty
Refresh APIs | – | triggerRefresh() — same as the topbar refresh button

C3 is the historical and trend view. It is the only console that renders lazily — it only initializes its charts when the user first switches to it, and re-renders on each subsequent auto-refresh while it's active. This prevents Chart.js from consuming canvas resources unnecessarily when the NOC has C1 up all day.

Lazy Rendering: C3 charts are created inside renderC3(d), which is only called from switchConsole(3). If you call triggerRefresh() while on C1 or C2, C3 charts will not re-render until you switch to that tab. This is intentional for performance.
7-Day Alert Volume by Platform

A stacked bar chart spanning two columns showing daily alert counts per platform category (Endpoint, Network, Security, Firewall, Backup, Service Desk) over the last 7 days. Four headline KPIs appear above: Total 7d, Peak Day count, Resolved count, and Daily Average. Each platform category uses a distinct color in the stack.

Use this chart to identify platform-specific alert spikes — for example, if Backup has a consistently high bar on Wednesdays, a scheduled backup job is likely failing.

SLA Performance

Horizontal bar chart showing SLA compliance percentage per priority tier (P1, P2, P3, P4) with an overall bar at the top. Two headline KPIs: Overall % and P1 % (the most sensitive metric).

Priority | Typical SLA Target | Alert Threshold | Color in Chart
P1 Critical | 15 min response / 4h resolution | Below 85% | RED
P2 High | 1h response / 8h resolution | Below 90% | ORANGE
P3 Medium | 4h response / 2d resolution | Below 95% | YELLOW
P4 Low | 8h response / 5d resolution | Below 98% | BLUE
MTTA / MTTR 14-Day Trend

A dual-line chart over 14 days showing Mean Time To Acknowledge (blue, target <15 min) and Mean Time To Resolve (green, target <60 min for P1/P2). Four KPIs above: current MTTA, current MTTR, escalations this week, and first-pass resolution rate.

MTTA: Average minutes from ticket creation to first engineer acknowledgment. A rising MTTA trend means the team is getting behind — either understaffed or tickets are landing outside business hours.
MTTR: Average minutes from ticket creation to full resolution. Includes both response and fix time. Spikes correlate with complex incidents or P1 escalation chains.
Escalations: Tickets escalated to L2/L3 or vendor support in the current week. A rising escalation rate can indicate training gaps or an unusually complex issue type.
First Pass: Percentage of tickets resolved without reassignment or escalation. Target >85%. Below 70% suggests triage is routing tickets to the wrong engineers.
Incident Heatmap by Hour

A bar chart showing incident count by hour of day, averaged over the last 7 days. Hours 08:00–18:00 generate significantly more incidents (R(2,10) vs R(0,3) in demo). In production this chart reveals shift patterns — if your 22:00 bar is high, you have overnight monitoring gaps or an early-morning automated task that generates false positives.
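The demo distribution can be sketched like this, assuming R(a, b) is the dashboard's inclusive random-integer helper (the helper's definition here is an assumption based on how it is used):

```javascript
// Assumed inclusive random-integer helper, as referenced in the demo data notes.
function R(a, b) { return a + Math.floor(Math.random() * (b - a + 1)); }

// Sketch of the demo heatmap generator: 24 hourly cells, business hours run hotter.
function makeHeatmap() {
  var out = [];
  for (var h = 0; h < 24; h++) {
    var business = h >= 8 && h <= 18;            // 08:00-18:00 window
    out.push({ h: h, count: business ? R(2, 10) : R(0, 3) });
  }
  return out;
}
```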

Ticket Flow — Opened vs Closed

A two-line area chart showing daily ticket opened (orange) and closed (green) counts over 7 days. Four KPIs: total opened, total closed, current backlog, and CSAT %. A persistent gap between opened and closed lines indicates backlog growth — the team is not keeping pace with incoming work.

Client Health Distribution

A bar chart showing count of clients in each risk score bucket: <30, 30–50, 50–70, 70–85, 85+. Color graduated from dark red to bright green. Two headline KPIs: Healthy count (score ≥ 70) and Critical count (score < 50). Use this for portfolio-level health reporting in QBRs.
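The bucketing used by both distribution charts can be sketched as follows; exact boundary handling (which bucket holds a score of exactly 50 or 70) is an assumption, since the demo data never documents it:

```javascript
// Map a client risk score to its distribution bucket label (sketch).
function scoreBucket(score) {
  if (score < 30) return '<30';
  if (score < 50) return '30-50';
  if (score < 70) return '50-70';
  if (score < 85) return '70-85';
  return '85+';
}
```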

The detail drawer is a 420px right-side panel that slides in over a semi-transparent overlay. It is the primary drill-down mechanism — every clickable row in every panel opens a drawer with contextual details and action buttons. Only one drawer can be open at a time. It persists across console switches until explicitly closed.

Drawer Types
Trigger | Function | Contents | Actions
Click alert category (C1) | openAlertDetail(label) | 5 recent alerts of that category with severity badges, titles, client, timestamp | None — read-only
Click client row (C1) | openClientDrawer(name) | 4-cell stat grid + risk factor bar chart (Open Alerts, Backup Health, Patch Gap) | None — read-only
Click incident row (C1) | openIncDrawer(id) | 4-cell stat grid (Severity, Time, Platform, State) + detail rows (ID, Client, Issue) | Create Ticket · Acknowledge · Escalate
Click ticket row (C2) | openTktDrawer(id) | 4-cell stat grid (Priority, SLA, Engineer, Status) + detail rows (ID, Client, Issue) | Update · Reassign · Close
Closing the Drawer
  • Click the overlay (dark background behind the drawer) — onclick="closeDrawer()"
  • Click the ✕ Close button in the drawer header
  • Press Escape — keyboard shortcut bound globally
Risk Factor Bars (Client Drawer)

The client drawer renders animated bar fills using data-w attributes. On drawer open, an 80ms timeout triggers a CSS width transition from 0% to the calculated percentage. The bars represent:

  • Open Alerts: min(100, openAlerts × 8) — 13 alerts = 100% red bar.
  • Backup Health: 100 − (backupFails × 25) — 4 failures = 0% green bar.
  • Patch Gap: 100 − patchFail — 25% patch gap = 75% orange bar.
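The three formulas above reduce to a small helper. The field names follow the client object in the data-model section; clamping to 0 is added here as a safety guard, and the function itself is a sketch rather than the drawer's exact code:

```javascript
// Compute the three risk-factor bar widths (0-100) for a client object.
function riskFactorWidths(c) {
  return {
    openAlerts:   Math.min(100, c.openAlerts * 8),       // 13 alerts = 100% red bar
    backupHealth: Math.max(0, 100 - c.backupFails * 25), // 4 failures = 0% green bar
    patchGap:     Math.max(0, 100 - c.patchFail),        // 25% gap = 75% orange bar
  };
}
```

In the drawer, each bar starts at width 0 and the 80ms setTimeout then sets el.style.width to these percentages, letting the CSS transition animate the fill.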

The dashboard has two independent refresh timers: the main data refresh and the platform health pie refresh.

Main Refresh (30 seconds)

Controlled by the RINTERVAL constant (default: 30) and the rcountdown variable. The countdown bar in the tab strip depletes linearly over 30 seconds, then triggers triggerRefresh(). In demo mode this regenerates all random data. In live mode, this is where your API calls fire.

Refresh Interval — JS State
// Top of script — adjust RINTERVAL to change the refresh cadence
var DATA=null, CHARTS={}, RINTERVAL=30, rtimer=null, rcountdown=RINTERVAL, curCon=1;

// For a 60-second refresh, edit the declaration above to RINTERVAL=60

When a manual refresh is triggered (topbar button or Ctrl+R), the countdown resets to 30 and the cycle restarts from zero. The refresh button shows ↻ … and is disabled while the fetch is in progress to prevent stacking calls.

Platform Health Pie (Independent)

The pie chart in C1 has its own countdown (startPieCountdown()) that runs independently of the main refresh. It starts immediately on DOMContentLoaded via triggerRefresh().then(startPieCountdown). The countdown is displayed as a text label below the pie ("Auto-refresh Xs"). When the pie timer expires it calls triggerRefresh(), which resets both timers.

Rate Limit Awareness in Live Mode: At 30-second intervals, each refresh cycle fires one call per integrated platform (NinjaRMM, Auvik, SentinelOne, FortiGate, Veeam, PSA) — 6 API calls per 30 seconds, or 720 calls per hour. Most RMM platforms allow 1,000–2,000 calls/hour. Check your API quota before reducing RINTERVAL below 15 seconds.
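The quota arithmetic in the note above, as a one-line helper for sizing RINTERVAL against a platform's rate limit:

```javascript
// Hourly API call budget: one call per platform per refresh cycle.
function callsPerHour(platformCount, intervalSec) {
  return platformCount * (3600 / intervalSec);
}
```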

All keyboard shortcuts are bound in the global keydown event listener at the bottom of the script. They are always active regardless of which console is visible.

Shortcut | Action | Notes
Alt + 1 | Switch to C1 · OPS Health | Works from any console. Triggers switchConsole(1).
Alt + 2 | Switch to C2 · Triage | Works from any console. Triggers switchConsole(2).
Alt + 3 | Switch to C3 · Analytics | Also triggers the lazy chart render on a first visit to C3.
Ctrl + R or ⌘ + R | Manual data refresh | e.preventDefault() suppresses the browser reload. Calls triggerRefresh().
Escape | Close the detail drawer | Calls closeDrawer(). Has no effect if no drawer is open.
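The dispatch logic behind the table can be sketched as a pure function; the event argument is any object with altKey/ctrlKey/metaKey/key fields, and the returned strings are illustrative labels rather than the dashboard's exact code:

```javascript
// Map a keydown event to the dashboard action it should fire (sketch).
function shortcutAction(e) {
  if (e.altKey && ['1', '2', '3'].indexOf(e.key) !== -1) {
    return 'switchConsole(' + e.key + ')';
  }
  if ((e.ctrlKey || e.metaKey) && String(e.key).toLowerCase() === 'r') {
    return 'triggerRefresh()'; // the real handler also calls e.preventDefault()
  }
  if (e.key === 'Escape') return 'closeDrawer()';
  return null;
}
```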
Startup Toast: On load, a toast notification briefly appears: "C1 · OPS Health C2 · Triage C3 · Analytics | Alt+1/2/3 or tap tabs". This is a one-time hint for new users and disappears after 3.5 seconds. To disable it, remove the toast(…) call at the end of the DOMContentLoaded handler.

All data is produced by makeData() and stored in the global DATA object. Understanding this structure is essential for wiring live API integrations — each field maps to a specific panel or chart in the UI.

Top-Level Fields
platforms[]: Array of 6 platform health objects. Each has: id, name, icon, cat, sc (score 55–99), status (ok/warn/crit), crit, warn, up (uptime %), last.
tCrit: Total critical alert count across all platforms. Used in the topbar, C1 alert summary, and tab badge.
tWarn, secA, netA, bkpF, infoC: Warning alerts, security alerts, network alerts, backup failures, informational alerts. Used in the C1 2×2 grid and alert items list.
incidents[]: 5–12 incident objects. Each: id (INC-0xxxx), title, source, client, sev (crit/warn/info), time, acked (bool).
clients[]: 10 client objects sorted worst-first. Each: name, score, risk, openAlerts, backupFails, secInc, patchFail, devices[].
tickets[]: 8–18 ticket objects sorted P1→P4. Each: id (TKT-xxxxx), priority, client, issue, engineer, slaMin, status.
engineers[]: 5 engineer objects. Each: name, open, closed, load (ok/mid/heavy).
sla: Object with keys p1, p2, p3, p4, overall — each a percentage (75–100).
mttaTrend[], mttrTrend[]: 14-element arrays of minute values for the MTTA/MTTR trend chart.
heatmap[]: 24-element array of {h: hour, count: incidents}. Business hours (08–18) are higher.
ticketFlow[]: 7-element array of {opened, closed} for the 7-day ticket flow chart.
day7data[]: Per-platform 7-day alert counts for the stacked bar chart.
Seeded Clients & Engineers

The 10 demo client names and 5 engineer names are defined as top-level arrays and reused across all data generators:

Demo Data Seed Arrays
var cNames = [
  'Northgate Industries', 'VertexBio LLC', 'Cascade Dental',
  'Harmon Logistics',    'PeakStar Finance',  'Blue Horizon MSO',
  'Apex Legal Group',    'Ridgeline Schools', 'Summit Healthcare',
  'CoreTech Corp'
];

var engs = ['Alex R.', 'Morgan K.', 'Jamie L.', 'Taylor S.', 'Chris P.'];

// To add clients, extend cNames. To add engineers, extend engs.
// Both arrays are used in pick() calls throughout makeData().

Each client has a computed risk score (0–100) and a derived risk tier. Lower scores = higher risk. This is an inverse health score — 100 means perfectly healthy, 0 means in crisis.

Score Formula
Client Risk Score Calculation
score = Math.max(5,
  100
  - openAlerts  * 4   // Each open alert subtracts 4 points
  - backupFails * 8   // Each backup failure subtracts 8 points
  - secInc      * 15  // Each security incident subtracts 15 points
  - Math.round(patchFail / 4)  // Patch gap penalty (0–25% gap = 0–6 pts)
);

// Risk tier assignment:
risk = score < 50 ? 'crit' : score < 75 ? 'warn' : 'ok';
Score Tiers
Score Range | Tier | Color | Meaning | Urgency
85–100 | HEALTHY | Bright green | No significant issues. Routine maintenance only. | None
70–84 | OK | Green | Minor issues present. Monitor for degradation. | Scheduled review
50–69 | WARN | Orange | Multiple issues present. Active remediation needed. | This shift
0–49 | CRIT | Red | Client at significant risk. Security incident likely active. | Immediate
Security Incidents Have Heavy Weight: A single security incident subtracts 15 points. A client with 2 security incidents and zero backup failures or patch gaps still scores 70 — squarely in the WARN tier, not CRIT. In live production environments, consider increasing the secInc multiplier to 20–25 so that any security incident forces a client into the CRIT tier for mandatory escalation.

SLA data appears in three places: the topbar overall SLA%, the C2 ticket table SLA column, and the C3 SLA Performance chart. Each has its own data source and calculation.

Per-Ticket SLA (C2 Table)

Each ticket has an slaMin value representing minutes remaining until SLA breach. Negative values mean the ticket has already breached. The display format and color are determined as follows:

SLA Display Logic
// slaMin ranges by priority (demo)
// P1: R(-30, 180)    (can start already breached)
// P2: R(20, 480)     (20 min to 8 hours)
// P3/P4: R(60, 1440) (1 hour to 24 hours)

// Display format, as runnable logic
function slaLabel(slaMin) {
  if (slaMin < 0)  return { text: 'BREACHED', color: 'red' };
  if (slaMin < 60) return { text: slaMin + 'm remain', color: 'orange' };
  var h = Math.floor(slaMin / 60), m = slaMin % 60;
  return { text: h + 'h ' + m + 'm remain', color: 'green' };  // e.g. 135 → "2h 15m remain"
}
Aggregate SLA (C3 Chart)

The C3 SLA chart uses the data.sla object, which contains independent percentage values per priority tier generated on each refresh. These represent the percentage of tickets within that tier that met their SLA requirement over the rolling period. In a live integration, these would come from your PSA's SLA reporting API.

Topbar SLA% (Overall)

The topbar SLA stat is data.sla.overall — a single number representing blended SLA compliance. It is rendered as a percentage with a % suffix. The color is always green in the topbar regardless of value, since any degradation is better surfaced via the C3 chart where context is available.

Six platforms are modeled in the demo environment. Each maps to a real tool category and represents a live API integration point when the dashboard is deployed in production.

Platform | ID | Category | Data Fields | Live API Pattern
NinjaRMM | ninja | Endpoint | Agent status, CPU/RAM, patch compliance, alerts | NinjaOne REST API — GET /v2/alerts
Auvik | auvik | Network | Device status, interface health, network alerts | Auvik API — GET /v1/inventory/network
SentinelOne | s1 | Security | Threat detections, agent health, isolation status | S1 API — GET /web/api/v2.1/threats
FortiGate | forti | Firewall | Interface status, policy hits, VPN tunnels | FortiOS REST API — GET /api/v2/monitor/system/status
Veeam | backup | Backup | Job results, backup failures, restore points | Veeam REST API — GET /api/v1/jobs/states
PSA | psa | Service Desk | Tickets, SLA, assignments, CSAT | ConnectWise/Autotask/HaloPSA — ticket APIs
Wiring a Live Integration

Replace the makeData() body with API calls. The fetchData() function is already structured as an async function — add your fetch calls there and return a DATA-shaped object:

Live Integration Pattern
async function fetchData() {
  // Replace makeData() with real API calls
  const [ninjaAlerts, s1Threats, pTickets] = await Promise.all([
    fetch('/api/proxy/ninja/v2/alerts').then(r => r.json()),
    fetch('/api/proxy/s1/threats').then(r => r.json()),
    fetch('/api/proxy/psa/tickets?status=open').then(r => r.json())
  ]);

  // Map to DATA shape
  return {
    tCrit: ninjaAlerts.filter(a => a.severity === 'critical').length,
    incidents: s1Threats.map(mapS1ToIncident),
    tickets: pTickets.map(mapPsaToTicket),
    // ... rest of fields
  };
}

// All rendering functions consume DATA — no changes needed there.
Proxy Requirement: MSP platform APIs require authentication and are CORS-blocked when called directly from a browser. You will need a lightweight server-side proxy (Azure Function, Cloudflare Worker, or Express) to relay API calls. The dashboard's fetchData() calls your proxy endpoint; the proxy handles credential storage and API authentication.
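A sketch of the proxy's routing side: mapping a dashboard path like /api/proxy/ninja/v2/alerts to the upstream request the proxy makes on the browser's behalf. The base URLs and environment-variable names are illustrative placeholders, not values from the dashboard or vendor documentation:

```javascript
// Hypothetical proxy-side routing table (illustrative base URLs and env vars).
var UPSTREAMS = {
  ninja: { base: 'https://app.ninjarmm.com',      tokenEnv: 'NINJA_TOKEN' },
  s1:    { base: 'https://usea1.sentinelone.net', tokenEnv: 'S1_TOKEN' },
  psa:   { base: 'https://api.example-psa.com',   tokenEnv: 'PSA_TOKEN' },
};

// Resolve the upstream URL and auth header for a dashboard proxy path.
// env is the proxy's secret store (e.g. process.env), injected for testability.
function buildUpstream(proxyPath, env) {
  var parts = proxyPath.split('/').filter(Boolean); // ['api','proxy','ninja','v2','alerts']
  var u = UPSTREAMS[parts[2]];
  if (!u) throw new Error('unknown platform: ' + parts[2]);
  return {
    url: u.base + '/' + parts.slice(3).join('/'),
    headers: { Authorization: 'Bearer ' + env[u.tokenEnv] },
  };
}
```

Credentials never reach the browser: the dashboard only ever sees /api/proxy/... paths, and the proxy attaches the token server-side.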

The color system is consistent across all three consoles, the ticker, the drawer, badges, and Chart.js datasets. Every element that communicates urgency uses one of six semantic tokens.

--crit #ff3b3b · P1 · CRIT · Critical alerts · SLA breached
--high #ff8c00 · P2 · WARN · Backup fails · Security alerts
--med #f0d000 · P3 · Backup alerts · Heatmap peak
--low #3de88e · Healthy · Online · SLA met · Closed tickets
--blue #4a9eff · P4 · Info incidents · MTTA line · API status
--accent #00b4d8 · UI accent · Panel dots · Drawer titles · Network alerts
Consistent Use Matters: The color contract is load-bearing — engineers scan red/orange/yellow as a priority gradient without reading text. Never use --crit red for non-urgent UI elements. If you add a new alert type or dashboard section, map it to one of the six existing tokens.
Badge Classes
CSS Class | Color | Use Cases
.ctab-badge.crit | RED | Tab strip alert counts
.inc-badge.crit | RED | Incident feed severity
.pri-badge.p1 | RED | Ticket table priority
.pri-badge.p2 | ORANGE | Ticket table priority
.pri-badge.p3 | YELLOW | Ticket table priority
.pri-badge.p4 | BLUE | Ticket table priority
.cob.crit | RED | Client list mini badge
.cob.warn | ORANGE | Client list mini badge

The dashboard ships in demo mode. All data is generated by makeData() using JavaScript's Math.random() within bounded ranges that produce realistic but entirely fictional operational data.

Demo Behavior Summary
  • Every refresh regenerates all data — client scores, incident counts, ticket queues, and chart values all change on each 30-second cycle. There is no persistence between refreshes.
  • Action buttons fire toasts only — Create Ticket, Acknowledge, Escalate, Update, Reassign, and Close all show a toast notification but do not call any external API.
  • API status dots are ~85% OK — with a ~10% chance of WARN and ~5% chance of CRIT per platform on each refresh. This gives the appearance of occasional platform degradation without being always-red.
  • The KTC Demo Bar (yellow "DEMO VERSION" notice) is injected via the ktc-home-bar-style block and the #ktc-demo-bar div. Remove both to eliminate this notice in a production deployment.
Transitioning to Live Mode
1
Deploy a proxy service
Create an Azure Function, Cloudflare Worker, or Express server that accepts requests from your dashboard host and relays them to each platform API with the appropriate authentication headers.
2
Replace makeData() with real fetches
Modify fetchData() to call your proxy endpoints. Map each API response to the DATA object shape documented in Section 10. Keep makeData() as a fallback for when APIs are unreachable.
3
Wire action buttons
In each drawer's action button onclick handlers, replace toast('▶ …') with real API calls: POST /api/proxy/psa/tickets for Create Ticket, PATCH /api/proxy/ninja/alerts/{id}/ack for Acknowledge, etc.
4
Add error handling
The existing triggerRefresh() already wraps fetch in a try/catch and fires toast('⚠ API error: …', true) on failure. Add per-platform error states to the API status panel for more granular failure visibility.
5
Remove the demo banner
Delete the #ktc-demo-bar div and the <style id="ktc-home-bar-style"> block. Remove the body { padding-top: 36px !important; } override — the topbar's own sticky positioning handles page offset without it.
6
Verify the nav.js dependency
The dashboard loads <script src="../nav.js" defer></script> from its parent directory; this file powers the site-wide navigation bar that appears above the topbar. If you deploy the console outside the standard folder structure, ensure nav.js is present one directory level up, or update the src path to an absolute URL. A missing nav.js produces a 404 in the browser console but does not break the dashboard — it only affects the site navigation overlay.
Common Issues
404 error for nav.js in browser console: The file loads ../nav.js relative to its location. If deployed to a folder where nav.js is not one directory up, the browser logs a 404. The console still renders fully — nav.js only drives the site navigation overlay. Either place nav.js in the correct relative path, update the script src to an absolute URL, or remove the script tag if the site nav is not needed in your deployment.
Charts not rendering on C3 first visit: C3 renders lazily. If you switch to C3 before the initial triggerRefresh() completes, DATA is null and renderC3() exits early. Wait for the initial toast to dismiss (data is loaded), then switch to C3.
Memory leak after long wall display session: Each refresh destroys and recreates Chart.js instances via mkC(). If you have custom charts that bypass this function and create charts directly, call chart.destroy() before creating new instances. Long-running sessions (8+ hours) can accumulate ~50MB without this.
Alt+1/2/3 not working on Mac: On macOS, Alt+1/2/3 may be intercepted by the OS or browser for special character input (¡™£). Test in full-screen mode (F11) or remap the shortcuts to Ctrl+1/2/3 in the keydown handler.
Topbar stats show "–" on load: The initial render shows dashes before the first fetch completes. This is expected — the animEl() function is called in renderC1/C2 after data arrives and animates the values from 0 to their targets over 700ms.
Ticker not scrolling on iOS: The CSS animation: tscroll 60s linear infinite can stall on some iOS Safari versions. Add the -webkit-animation prefix and ensure the containing element has overflow: hidden (it does by default). If it still fails, JavaScript-based scrolling via setInterval is the fallback.
C3 analytics not refreshing automatically: C3 only re-renders when it is the active console during a refresh cycle. If you're on C1 when the 30s timer fires, C3 is not re-rendered. Switch to C3 and wait for the next cycle, or trigger a manual refresh while on C3.
Layout breaks on monitors under 1024px wide: The 3-column C1 grid (300px 1fr 1fr) requires at least 700px of content width. Below 1200px, breakpoints collapse to 2-column. Below 700px, all grids become single-column. The tool is optimized for 1080p and 4K NOC displays.
Customization Quick Reference
What to Change | Where | How
Refresh interval | Top of <script> | Change RINTERVAL = 30 to the desired seconds
Client names | var cNames = [...] | Replace array values with your actual client names
Engineer names | var engs = [...] | Replace with your team members
Platform list | var pdefs = [...] | Add/remove platform objects — each needs id, name, icon, cat
Issue types | var issues = [...] | Replace with your most common ticket issue categories
Risk score weights | makeData() → client map | Adjust the multipliers: openAlerts×4, backupFails×8, secInc×15
Color tokens | :root { } CSS block | Change hex values for --crit, --high, --med, --low, --accent
Startup toast | End of DOMContentLoaded | Remove or modify the toast(…) call
KB // MSP COMMAND CENTER · ENGINEERING TRIAGE CONSOLE 3 CONSOLES · CHART.JS 4.4 · 30s AUTO-REFRESH · KTC-DEMO