// Knowledge Base · ShadowProtect SPX
SPX Console Reference Guide
Complete operational reference for the ShadowProtect SPX Production Console — a single-file HTML dashboard for MSP technicians to monitor backup health, triage failures, generate ticket notes, and manage StorageCraft partner portal data across all clients in one view.
Format: Self-contained .html
Auth: Basic Auth via local proxy
Data source: StorageCraft partner portal
Runtime: Browser (no install)
01 // What Is This Tool

The ShadowProtect SPX Console is a self-contained HTML file that gives MSP technicians a unified real-time view of backup status across every client managed through the StorageCraft / ShadowProtect partner portal. It replaces manual portal browsing with a purpose-built dashboard that surfaces risk, generates ticket notes, and drives consistent daily NOC triage.

Single File · No Install
The entire application — UI, logic, styles, and data layer — lives in one .html file. Open it in any modern browser. No server, no npm, no build step.
Portal-Scraping Data Layer
Reads data from the StorageCraft admin console HTML pages (/admin-console/admin-accounts-list, /admin-console/admin-sessions-list) using a local reverse proxy to handle CORS and session injection. No official JSON API is required.
Demo Mode — No Credentials Required
Ships with 12 fully-configured mock clients across 3 vaults, including critical failures, vault-offline scenarios, onboarding clients, and drift examples. Opens directly into demo data with no login needed.
What it does well
Instant bulk status across all clients with one scan
Auto-generated, copy-ready ticket notes per client
Risk scoring + drift detection with no configuration
8-theme UI that adapts to any NOC environment
Vault topology awareness — detects upstream failures
Limitations to know
Requires the local proxy launcher (node main.js) for live data
Data is parsed from HTML tables — column name variations may need mapping
No persistent data storage — session state only, clears on tab close
Portal structure changes may break parsers without code update
02 // Architecture

The console sits entirely in the browser. It fetches data from the StorageCraft portal through a local Node.js reverse proxy that injects Basic Auth session cookies and strips CORS restrictions. No data is stored beyond the browser session.

Request flow
Browser (console.html)
│
└─ fetch('/proxy?path=/admin-console/admin-accounts-list')
   │
   ▼
localhost:3000 (node main.js)
│  Injects: X-Target-Host, Basic Auth cookies
│  (secureEfolderingDotCom, EFSB)
│
└─► https://backup.securewebportal.net
      /admin-console/admin-accounts-list  → HTML table
      /admin-console/admin-sessions-list  → HTML table
      /admin-console/status-report        → HTML report
Layer What it does File / Component
UI Shell All HTML/CSS rendering, panels, gauges, theming console.html
Data Parser parsePortalTable() — extracts rows from HTML tables. parseAccounts(), parseSessions() normalise columns console.html (inline JS)
API Object The api object wraps all portal fetches: clients(), queryAll(), check(id), statusReport() console.html (inline JS)
Risk Engine computeRisk() and computeDrift() produce numeric scores from session trends console.html (inline JS)
Proxy Reverse-proxies all requests to the portal, injects credentials, handles CORS main.js (Node.js, separate)
Credential Store Uses browser Credential Management API (PasswordCredential) with sessionStorage fallback Browser storage
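As an illustration of the Data Parser layer above, the sketch below shows roughly how a function like parsePortalTable() can turn a portal HTML response into row objects keyed by column header. It is a simplified stand-in, not the exact code shipped inline in console.html.

// Simplified stand-in for the inline parser in console.html.
// Parses the first <table> in a portal HTML response into objects keyed by header text.
function parsePortalTable(html) {
  const doc = new DOMParser().parseFromString(html, 'text/html');
  const table = doc.querySelector('table');
  if (!table) return [];

  const headers = [...table.querySelectorAll('tr th')].map(th => th.textContent.trim());

  return [...table.querySelectorAll('tr')]
    .filter(tr => tr.querySelectorAll('td').length > 0)
    .map(tr => {
      const row = {};
      [...tr.querySelectorAll('td')].forEach((td, i) => {
        row[headers[i] || `col${i}`] = td.textContent.trim();
      });
      return row;
    });
}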
03 // Header & Controls

The sticky header contains every primary action. Left to right:

Element · Function
Tech Name Free-text field for the technician's name. Persisted to localStorage. Stamped at the bottom of every generated ticket note.
Save Saves the tech name to localStorage and, if a ticket note exists, immediately rebuilds it with the updated name.
Theme swatches 8 color themes (Ice, Red, Matrix, Purple, Amber, Cobalt, Mono, Tesla Black). Selection persists in localStorage under key spTheme.
Last Scan Badge showing time since the most recent successful check. Turns green when under 30 minutes.
↺ Refresh Re-fetches the client list from the portal (accounts + onboarding check in parallel) without running a full session scan. Use at session start.
◈ Scan All Fetches both the accounts list and full session history simultaneously, scores every client, and populates all five gauges. Shows the loading modal with live progress.
⬡ API Key Opens the credential modal to enter or update portal Host / Username / Password. Also accessible when already connected to update credentials.
⊡ Zen Hides the gauges, NOC row, and header to expand the main panels to full screen. Toggle with Ctrl+Shift+Z. Press again or click Exit Zen to restore.
04 // Gauge Strip

Five donut-chart gauges run across the top. Each is also a clickable filter — clicking a gauge filters the client list to only matching clients. Click again to clear the filter.

Total Devices
12
All tracked clients
Backup Health
72%
8 ok · 3 failed
Backed Up <24h
6
55% of active
Onboarding
1
8% of all clients
Issues
5
3 critical · 1 concerned
Gauge · Formula · What triggers color change
Total Devices Count of all allClients entries Always blue — informational only
Backup Health okN / (okN + failN) — excludes ONBOARDING clients Green ≥90%, Yellow ≥70%, Red below 70%
Backed Up <24h Count of non-onboarding clients where lastSuccessTime is within 24 hours. No fallback is used; only real success timestamps count. Green ≥90%, Yellow ≥70%, Red below 70%
Onboarding Count with lastConcern === 'ONBOARDING' Always purple — no alarm threshold
Issues Count with CONCERNED, CRITICAL, or WATCH status (not ONBOARDING) Green = 0, Yellow = any issues but no CRITICAL, Red = any CRITICAL
Gauges need Scan All to fully populate
The Backup Health and Issues gauges require session history data to compute — they rely on lastConcern, which is only set after a Scan All or an individual client check. After a plain Refresh, those gauges will show "Run Scan All" until a full scan runs. Total Devices and Backed Up <24h can populate from client list data alone.
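For reference, here is a minimal sketch of how the gauge numbers above could be derived from allClients. The ok/fail classification and field names are assumptions based on the data model in section 19, not the exact inline code.

// Illustrative only — mirrors the gauge formulas described above.
function computeGauges(allClients) {
  const active = allClients.filter(c => c.lastConcern !== 'ONBOARDING');

  const okN   = active.filter(c => c.lastOutcome === 'OK').length;
  const failN = active.filter(c => c.lastOutcome === 'NOT OK').length;

  const fresh = active.filter(c =>
    c.lastSuccessTime &&
    (Date.now() - new Date(c.lastSuccessTime)) < 24 * 60 * 60 * 1000
  ).length;

  return {
    totalDevices:    allClients.length,
    backupHealthPct: okN + failN ? Math.round(100 * okN / (okN + failN)) : 0,
    backedUp24h:     fresh,
    onboarding:      allClients.filter(c => c.lastConcern === 'ONBOARDING').length,
    issues:          active.filter(c => ['CRITICAL', 'CONCERNED', 'WATCH'].includes(c.lastConcern)).length
  };
}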
05 // NOC Operations Row

The thin row between the gauges and the main panels is the NOC operations bar. It provides at-a-glance triage data and quick filter controls without requiring you to interact with the client list.

Section · What it shows
⚠ Risks Top 4 highest-risk clients by computed risk score. Each chip is clickable — selecting one opens that client's detail panel. Populated after a Scan All.
Vaults One chip per vault. Green = healthy, Yellow = degraded replication or high capacity, Red = offline. In live mode, vaults are derived from client-to-vault groupings. In demo mode, uses DEMO_VAULTS with Vault-03 forced offline.
Filter 5 quick filter buttons — All, Critical (risk ≥60), Failures (risk ≥30), Drift (last success ≥24h ago), Healthy (risk <21, not onboarding). Overrides gauge filter.
Auto-scan Starts a 5-minute countdown timer. When it fires, re-runs a full Scan All and resets the counter. Useful for unattended NOC screens. The countdown displays as M:SS.
06 // Client List Panel

The left panel shows all clients as rows. Each row has a colored left border indicating status, a risk score badge, and optional drift badge. Clicking a row runs a live check and opens the detail view. The panel can be collapsed with the button to give the detail panel more space.

Search
Debounced 150ms client-side filter on client name. Shortcut: Ctrl+/ to focus.
Sort options
Name A–Z; Status (Critical first, using the CONCERN_ORDER map); Last Checked; Risk Score (highest first).
Left border color
Blue = OK, Yellow = WATCH, Orange = CONCERNED, Red = CRITICAL, Purple = ONBOARDING, Gray = Unknown.
Risk badge
Numeric score 0–100. Green <30, Yellow 30–59, Red ≥60. Only shown when score >0.
Drift badge
Shows Xh DRIFT or X.Xd DRIFT when last success was ≥24h ago and client is not ONBOARDING.
Shift+Click for multi-client compare
Holding Shift while clicking a client adds it to the compare view in the detail panel's Compare tab. Use Load All Scanned to populate the compare grid automatically.
07 // Detail Panel

The right panel loads when a client is selected. It runs a live api.check() call, parses the session list for that client, and renders three tabs.

Tab · Contents
◈ Overview Concern banner with summary and triage reason. Backup statistics accordion (last success, last failure, consecutive fails, 7-day rate, overall rate). Snapshot sparkbar (14 most recent backups, color-coded). Backup history chart (30-day bar + success-rate line). Action buttons. Ticket note (auto-generated, editable additional notes). Compare mini-grid.
⬡ AI Insights Rule-based analysis panel: predicted failure risk score, estimated hours to next failure, suggested action text, and tags (Consecutive Fails, High Fail Rate, Stale Backup, Healthy, No Activity). Fully deterministic — no LLM required.
⊞ Compare Mini cards for Shift+clicked clients or all scanned clients (via Load All Scanned button). Shows name, type, status badge, and 7-day fail estimate per card.
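Because the AI Insights tab is rule-based, its tag output can be approximated with a few threshold checks. The sketch below is a hedged guess at how tags could be derived from the trend object in the client data model; the actual thresholds and wording in console.html may differ.

// Hypothetical tag derivation — the real rules in console.html may use different thresholds.
function deriveInsightTags(client) {
  const t = client.trend || {};
  const tags = [];

  if (!client.snapshots || client.snapshots.length === 0) tags.push('No Activity');
  if (t.consecutiveFails >= 2)   tags.push('Consecutive Fails');
  if (t.recentFailRatePct >= 40) tags.push('High Fail Rate');
  if (t.hoursSinceSuccess >= 24) tags.push('Stale Backup');
  if (tags.length === 0)         tags.push('Healthy');

  return tags;
}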
Concern banner colors
OK
All backups healthy — no action required. Continue standard monitoring cadence.
WATCH
Minor anomaly trend. Monitor closely for 24–48h. No action required unless failures continue.
CONCERNED
Recurring failures trending upward. Verify agent, credentials, and storage capacity.
CRITICAL
Multiple consecutive failures or vault offline. Immediate attention required.
ONBOARDING
No backup activity in 30 days. Verify agent installation and backup policy assignment.
08 // Failure Queue

The ⚠ Queue tab in the left panel surfaces every client with a non-zero risk score, sorted by risk score descending, then by most recent failure time. It functions as a prioritized work queue for the NOC session.

Queue badge
The red number on the tab shows the count of CRITICAL clients (risk ≥60). Disappears when no critical items exist.
Row colors
Red border = CRITICAL, Yellow border = WARN, Purple border = vault issue, no border = informational.
Score chip
Numeric risk score shown on the right of each row. Green <30, Yellow 30–59, Red ≥60.
Clickable rows
Clicking any row in the queue fires runCheck() for that client, identical to clicking in the client list.
09 // Concern Levels

Concern levels are derived by deriveConcern(data, fromBulk) from session history, trend data, and timestamps. The function is called both in bulk scan mode and per-client check mode.

Level · Trigger Conditions · Priority
OK Outcome is OK, no failures, 7-day fail rate = 0% Lowest
WATCH 7-day fail rate >0% but <20%, consecutive fails = 0 Low
CONCERNED Consecutive fails ≥1, OR 7-day fail rate ≥20%; outcome is NOT OK Medium
CRITICAL Consecutive fails ≥3, OR 7-day fail rate ≥50%, OR vault offline Highest
ONBOARDING No session activity in the last 30 days; overrides all scoring Separate
UNKNOWN Bulk scan mode with no direct concern field and insufficient data to classify N/A
Bulk vs. Per-Client Mode
In bulk scan mode (fromBulk=true), the derivation does not use snapshot timestamps to detect ONBOARDING — those aren't available from the sessions list alone. ONBOARDING is only reliably detected via the dedicated onboardingClients() call (accounts without sessions in the last 30 days) or during a per-client check().
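Putting the table above into code, a hedged sketch of deriveConcern() might look like the following. The threshold values come from the table; the exact ordering, the outcome check on CONCERNED, and the hasRecentActivity() helper are assumptions.

// Sketch only — thresholds from the concern-level table above.
function hasRecentActivity(data, days) {
  const last = data.snapshots && data.snapshots[0];          // newest-first per section 19
  return !!last && (Date.now() - new Date(last.localTime)) < days * 24 * 60 * 60 * 1000;
}

function deriveConcern(data, fromBulk) {
  const t = data.trend || {};

  // ONBOARDING is only reliably detectable per-client (see the note above).
  if (!fromBulk && !hasRecentActivity(data, 30)) return 'ONBOARDING';

  if (data.vaultOffline || t.consecutiveFails >= 3 || t.recentFailRatePct >= 50) return 'CRITICAL';
  if (t.consecutiveFails >= 1 || (data.outcome !== 'OK' && t.recentFailRatePct >= 20)) return 'CONCERNED';
  if (t.recentFailRatePct > 0 && t.recentFailRatePct < 20) return 'WATCH';
  if (data.outcome === 'OK' && !t.recentFailRatePct) return 'OK';
  return 'UNKNOWN';                                          // insufficient data to classify
}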
10 // Risk Scoring

The risk engine runs in computeRisk(client) on every client in the list and returns a score from 0–100. Scores drive the NOC risk chips, the failure queue sort order, and the filter buttons. ONBOARDING clients are always forced to score 0.

Vault Offline
+35 pts
≥3 Consec. Fails
+40 pts
1–2 Consec. Fails
+20 pts
Fail Rate ≥50%
+25 pts
Fail Rate 20–49%
+12 pts
Drift ≥72h
+25 pts
Drift 36–72h
+15 pts
Drift 24–36h
+8 pts
Score Range · Classification · List badge · Filter match
0–29 · OK / Healthy · OK · Healthy filter
30–59 · Watch / Concerned · WATCH · Failures filter
60–100 · Critical · CRITICAL · Critical filter
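The point values above translate directly into an additive score. A minimal sketch of computeRisk(), assuming the score is capped at 100 and uses the trend fields from section 19:

// Sketch of the risk engine — point values taken from the chips and tables above.
function computeRisk(client) {
  if (client.lastConcern === 'ONBOARDING') return 0;     // onboarding always scores 0

  const t = client.trend || {};
  let score = 0;

  if (client.vaultOffline)            score += 35;
  if (t.consecutiveFails >= 3)        score += 40;
  else if (t.consecutiveFails >= 1)   score += 20;

  if (t.recentFailRatePct >= 50)      score += 25;
  else if (t.recentFailRatePct >= 20) score += 12;

  const h = t.hoursSinceSuccess;
  if (h == null || h >= 72)           score += 25;       // no data / drift ≥72h
  else if (h >= 36)                   score += 15;
  else if (h >= 24)                   score += 8;

  return Math.min(100, score);
}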
11 // Drift Detection

Drift is the elapsed time since the last successful backup. It is computed in computeDrift(client) using lastSuccessTime. A client can be technically "OK" (last job succeeded) while also drifting (if that last success was 40h ago and it's normally nightly).

Threshold · Level · Badge label example · Risk points added
< 24h · OK · 14h ago · 0
24–36h · WARN · 28h DRIFT · +8
36–72h · FAIL · 1.8d DRIFT · +15
≥ 72h · CRITICAL · 4.2d DRIFT · +25
No data · CRITICAL · No backup · +25
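A corresponding sketch of computeDrift(), using lastSuccessTime and the thresholds in the table above (the returned object shape is an assumption):

// Sketch only — thresholds and badge labels from the drift table above.
function computeDrift(client) {
  if (!client.lastSuccessTime) {
    return { hours: null, level: 'CRITICAL', label: 'No backup', points: 25 };
  }

  const hours = (Date.now() - new Date(client.lastSuccessTime)) / 3.6e6;
  const label = hours >= 36 ? `${(hours / 24).toFixed(1)}d DRIFT`
              : hours >= 24 ? `${Math.round(hours)}h DRIFT`
              : `${Math.round(hours)}h ago`;

  if (hours >= 72) return { hours, level: 'CRITICAL', label, points: 25 };
  if (hours >= 36) return { hours, level: 'FAIL',     label, points: 15 };
  if (hours >= 24) return { hours, level: 'WARN',     label, points: 8 };
  return { hours, level: 'OK', label, points: 0 };
}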
12 // Daily NOC Workflow

Recommended morning triage sequence for a standard NOC shift:

1
Open console and confirm tech name
Verify your name is in the Tech Name field and click Save. This stamps every ticket note generated in the session.
2
Click ◈ Scan All
This is the only button that populates all five gauges and the failure queue. It fetches accounts and full session history in parallel. Progress modal shows live count.
3
Check the Issues gauge and NOC risk chips
If Issues > 0, click the gauge to filter to only problem clients. The ⚠ Risks chips in the NOC row show the top 4 by score — click any to jump straight to that client.
4
Work through the ⚠ Queue tab
Switch to the Queue tab. Work top-to-bottom by risk score. Click each row to open its detail panel. Review the concern banner and backup statistics accordion for root cause.
5
Add notes and copy ticket
Scroll to Additional Notes in the detail panel. Add any context observed. Click 📋 Copy ticket note (or Ctrl+Enter) to copy the formatted note to clipboard. Paste into your PSA.
6
Enable Auto-scan for ongoing monitoring
Click ▶ 5 min in the NOC row to start the auto-scan timer. The console will re-scan every 5 minutes and refresh all data in the background while you work other tickets.
13 // Ticket Note Generation

Every client check auto-generates a structured ticket note via buildTicketNote(). The note is displayed in the detail panel and copied to clipboard with one click (or Ctrl+Enter). Additional notes appended in the text area appear as a separate "Additional notes to consider" section.

Client ID: ACME Corp — DC01
Check Performed: 2026-03-22 08:14
Outcome: NOT OK
Concern: CRITICAL (Repeated failures)
Consecutive fails: 3
7-day fail rate: 50% (overall: 50%)
Evidence:
- Last Successful Backup: 2026-03-19 01:00
- Last Failure: Failed: 2026-03-22 01:00
Next steps:
- Validate agent status and service health.
- Confirm credentials and snapshot storage capacity.

M. Krawczyk
Ticket note rebuilds when tech name is updated
If you update your name in the header and click Save while a client is already selected and a note is displayed, the note is rebuilt immediately with the new name — no need to re-run the check.
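As a rough sketch, buildTicketNote() can be thought of as a template over the client data model from section 19. Field access and wording below are illustrative; the shipped template may differ.

// Illustrative template only — not the exact wording produced by console.html.
function buildTicketNote(client, techName, extraNotes) {
  const t = client.trend || {};
  const lines = [
    `Client ID: ${client.name}`,
    `Check Performed: ${client.lastChecked}`,
    `Outcome: ${client.lastOutcome}`,
    `Concern: ${client.lastConcern}`,
    `Consecutive fails: ${t.consecutiveFails ?? 0}`,
    `7-day fail rate: ${t.recentFailRatePct ?? 0}% (overall: ${t.failureRatePct ?? 0}%)`,
    'Evidence:',
    `- Last Successful Backup: ${client.lastSuccess?.localTime ?? 'none'}`,
    `- Last Failure: ${client.lastFailure?.localTime ?? 'none'}`,
    'Next steps:',
    '- Validate agent status and service health.',
    '- Confirm credentials and snapshot storage capacity.'
  ];
  if (extraNotes) lines.push('', 'Additional notes to consider:', extraNotes);
  lines.push('', techName);
  return lines.join('\n');
}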
14 // Auto-Scan

Auto-scan runs a full queryAll() every 5 minutes, re-renders the client list, updates all gauges, and refreshes the NOC row and failure queue. In demo mode, it refreshes gauges and NOC data from cached demo clients instead of making portal calls.

▶ 5 min
Starts the timer and begins the 5:00 countdown. Button text changes to ⏹ Stop.
⏹ Stop
Cancels both the interval and countdown timer. Clears the countdown display.
Countdown display
Shows M:SS remaining until next scan. Updates every second. Resets to 5:00 after each scan fires.
Auto-scan does not update the open detail panel
The auto-scan refreshes the left panel list and gauges, but does not re-run the currently open client's detail check. To get fresh data for a specific client, click it again in the list or click the NOC risk chip.
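The timer itself is plain setInterval wiring. A hedged sketch, assuming a scanAll() routine and a countdown element id that may not match the real markup:

// Sketch of the auto-scan timer — the element id and scanAll() are assumptions.
async function scanAll() { /* placeholder for the real Scan All routine */ }

let scanTimer = null;
let countdown = 300;                                   // seconds until next scan (5:00)

function startAutoScan() {
  countdown = 300;
  scanTimer = setInterval(async () => {
    countdown--;
    const m = Math.floor(countdown / 60);
    const s = String(countdown % 60).padStart(2, '0');
    document.getElementById('autoScanCountdown').textContent = `${m}:${s}`;

    if (countdown <= 0) {
      await scanAll();                                 // full re-scan, then reset to 5:00
      countdown = 300;
    }
  }, 1000);
}

function stopAutoScan() {
  clearInterval(scanTimer);
  scanTimer = null;
  document.getElementById('autoScanCountdown').textContent = '';
}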
15 // Credentials & Auth

Credentials are managed through a first-run modal. The console uses the browser Credential Management API (PasswordCredential) where available, falling back to sessionStorage. Credentials clear on tab close when using the fallback.

1
First launch — credential modal appears
Enter Portal Host (default: backup.securewebportal.net), Username, and Password for the ShadowProtect web services account. Click 🔐 Connect & Save.
2
Auth test fires
A GET /auth request is sent to the local proxy with X-Target-Host, X-Auth-User, and X-Auth-Pass headers. The proxy validates against the portal. Success clears the modal and begins the startup scan.
3
Subsequent loads — auto-connect
Saved credentials are retrieved via loadCredentials() (browser credential manager first, then sessionStorage). If found, attemptConnect() fires automatically — no modal shown.
4
Update credentials
Click ⬡ API Key in the header at any time. The stored-badge shows existing credentials. Update fields and click 🔐 Update & Reconnect. Old session is cleared and a fresh auth test runs.
🔑 Credentials are not sent to any remote server
All auth happens through the local proxy on localhost:3000. The browser only ever contacts localhost. The proxy injects credentials as HTTP headers when forwarding to the portal. Nothing leaves your local machine.
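For reference, the store/load pattern described above roughly follows the standard Credential Management API with a sessionStorage fallback. The sketch below uses an assumed storage key (spCreds) and stores the host in the credential's name field; treat it as illustrative, not the exact console.html code.

// Sketch of the credential store — key name and use of cred.name for the host are assumptions.
async function saveCredentials(host, username, password) {
  if (window.PasswordCredential) {
    const cred = new PasswordCredential({ id: username, password, name: host });
    await navigator.credentials.store(cred);
  } else {
    sessionStorage.setItem('spCreds', JSON.stringify({ host, username, password }));
  }
}

async function loadCredentials() {
  if (window.PasswordCredential) {
    const cred = await navigator.credentials.get({ password: true, mediation: 'silent' });
    if (cred) return { host: cred.name, username: cred.id, password: cred.password };
  }
  const raw = sessionStorage.getItem('spCreds');
  return raw ? JSON.parse(raw) : null;                 // fallback clears on tab close
}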
16 // Proxy Setup

The console cannot contact the StorageCraft portal directly from a browser due to CORS restrictions and cookie requirements. The local Node.js proxy (main.js) acts as middleware: it accepts requests from the browser, injects the session cookies captured from the Basic Auth login, and forwards them to the portal.

Starting the proxy
# In the directory containing main.js
node main.js

# Expected output:
Proxy listening on http://localhost:3000
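For orientation, the sketch below shows the essential job of a proxy like main.js: read the target host and credentials from the request headers, forward the call over HTTPS, and relax CORS on the way back. It is a simplified stand-in, not the shipped main.js (which also manages the portal session cookies noted in section 02).

// Simplified stand-in for main.js — forwards /proxy?path=... to the portal host named
// in X-Target-Host, adding HTTP Basic Auth from X-Auth-User / X-Auth-Pass.
const http = require('http');
const https = require('https');

http.createServer((req, res) => {
  const url = new URL(req.url, 'http://localhost:3000');
  const path = url.searchParams.get('path') || '/';
  const host = req.headers['x-target-host'];
  const auth = Buffer.from(
    `${req.headers['x-auth-user']}:${req.headers['x-auth-pass']}`
  ).toString('base64');

  const upstream = https.request(
    { host, path, method: req.method, headers: { Authorization: `Basic ${auth}` } },
    portalRes => {
      // Relax CORS so the local console (file:// or localhost) can read the response.
      res.writeHead(portalRes.statusCode, {
        ...portalRes.headers,
        'access-control-allow-origin': '*',
      });
      portalRes.pipe(res);
    }
  );
  upstream.on('error', () => { res.writeHead(502); res.end('Proxy error'); });
  req.pipe(upstream);
}).listen(3000, () => console.log('Proxy listening on http://localhost:3000'));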
Proxy Mode · When to use
localhost:3000 Default. Use when opening console.html as a local file (file://) or from a local web server. The proxy must be running before connecting.
Direct (same-origin) Use when the HTML file is served from the same host as the portal (rare). All fetch calls route directly without the proxy prefix.
Proxy mode persists per connection
The Proxy dropdown in the connection bar sets _proxyPort. This value is read on every fetch — you can switch modes mid-session and the next request will use the new setting. Default is 3000.
17 // Demo Mode

Demo mode loads a full set of mock clients with pre-configured session histories. It activates automatically when the console is opened without saved credentials, and is the default state for new installs. No proxy or portal connection is required.

Client · Type · Vault · Status · Scenario
ACME Corp — DC01 · Windows Server · Vault-01 · CRITICAL · 3 consecutive fails
ACME Corp — SQL01 · SQL Server · Vault-01 · OK · All healthy
Globex Industries · File Server · Vault-01 · CONCERNED · Intermittent — 50% fail rate
Initech — Domain · Windows Server · Vault-02 · OK · All healthy
Contoso Web · Linux VM · Vault-02 · WATCH · 1 recent failure, low rate
Fabrikam Inc · Windows Server · Vault-02 · OK · All healthy
Northwind Traders · Windows Server · Vault-02 · ONBOARDING · No sessions — new client
Tailspin Toys · Business · Vault-03 · CRITICAL · Vault-03 OFFLINE
Alpine Ski House · Windows Server · Vault-03 · CRITICAL · Vault-03 OFFLINE
Coho Vineyard · File Server · Vault-01 · OK · All healthy
Litware Inc — SQL · SQL Server · Vault-02 · OK · All healthy
Adventure Works · Windows Server · Vault-01 · WATCH · Last backup 38h ago — drift
Vault-03 — Offline Scenario
Tailspin Toys and Alpine Ski House are on Vault-03, which is flagged offline in the demo data. Both clients show CRITICAL status regardless of session history, and the vault chip in the NOC row shows ⊘ OFFLINE in red.
Exiting Demo Mode
Click ✕ Connect Live in the demo banner (top bar) or click ⬡ API Key. This clears all demo data and opens the credential modal for live connection.
18 // Portal API Object

All portal interactions are routed through the api object. Each method calls portalFetch(path), which prepends the proxy base URL and adds the target host header. A 30-second abort signal is attached to every request.

Method · Portal endpoint(s) · Returns
api.clients() /admin-console/admin-accounts-list {ok, clients[]} — normalised account objects
api.onboardingClients() Both accounts-list and sessions-list in parallel Accounts with no sessions in last 30 days — flagged ONBOARDING
api.queryAll() Both accounts-list and sessions-list in parallel Full merged client array with outcome, trend, last success/failure, checked timestamp
api.check(id) /admin-console/admin-sessions-list Per-client check result with snapshots array, concern, trend data
api.statusReport() /admin-console/status-report Raw HTML of the portal status report
api.eventLog() /admin-console/manage/eventlog Parsed table rows from the event log page
api.auditLog() /admin-console/manage/auditlog Parsed table rows from the audit log page
Table parser is column-name tolerant
parseAccounts() and parseSessions() try multiple column name variants ('Account Name', 'Name', 'Client', etc.) before falling back to positional indexing. If the portal updates column names, check these parser functions first.
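A hedged sketch of the portalFetch(path) wrapper described above — the 30-second abort comes from this section and the header names from section 15, while the global creds object and _proxyPort handling are assumptions about the inline code:

// Sketch of portalFetch() — assumes globals `creds` (section 15) and `_proxyPort` (section 16).
async function portalFetch(path) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 30000);   // 30-second timeout

  try {
    const res = await fetch(
      `http://localhost:${_proxyPort}/proxy?path=${encodeURIComponent(path)}`,
      {
        headers: {
          'X-Target-Host': creds.host,
          'X-Auth-User': creds.username,
          'X-Auth-Pass': creds.password,
        },
        signal: controller.signal,
      }
    );
    if (!res.ok) throw new Error(`Portal returned ${res.status}`);
    return await res.text();              // raw HTML, handed to the table parsers
  } finally {
    clearTimeout(timer);
  }
}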
19 // Client Data Model

Every entry in allClients (also exposed as window.allClients) follows this shape:

{ id: "acme-dc01", // stable identifier (derived from name or href) name: "ACME Corp — DC01", // display name from accounts table type: "Windows Server", // machine type vault: "Vault-01", // vault assignment (if known) vaultOffline: false, // true = vault flagged offline outcome: "NOT OK", // "OK" | "NOT OK" | "UNKNOWN" concern: "CRITICAL", // primary concern field lastConcern: "CRITICAL", // mirrors concern; read by gauges + badges lastOutcome: "NOT OK", // set after individual check runs lastChecked: "2026-03-22T08:14:00Z", // ISO timestamp of last check lastSuccess: { localTime: "2026-03-19T01:00:00Z", status: "Success" }, lastSuccessTime:"2026-03-19T01:00:00Z", // shorthand for drift/gauge calculations lastFailure: { localTime: "2026-03-22T01:00:00Z", status: "Failed" }, trend: { consecutiveFails: 3, // runs from the top of sorted sessions recentFailRatePct: 50, // % of sessions in last 7 days that failed failureRatePct: 50, // overall failure rate hoursSinceSuccess: 72 // hours since last successful backup }, snapshots: [ // last 14 sessions, newest first { localTime, status, isSuccess, size, duration, type } ] }
20 // Keyboard Shortcuts
Ctrl + Enter
Copy ticket note to clipboard
Ctrl + /
Focus and select the search field
Ctrl + Shift + Z
Toggle Zen mode (hides header + gauges)
21 // Themes
Theme key · Name · Accent color · Best for
ice · KrawTech Default · Blue #2d8ff5 · Standard NOC use — default
red · Incident Red · Red #ff2d55 · Incident response / high-alert situations
matrix · Matrix Green · Green #00ff41 · After-hours monitoring, SOC aesthetic
purple · Deep Space · Purple #a78bfa · Low-light environments
amber · Amber Industrial · Amber #f59e0b · Warm lighting environments
cobalt · Electric Cobalt · Cobalt #2563eb · Alternate blue palette
mono · Graphite Mono · Gray #e2e8f0 · Presentations, printing screenshots
tesla-black · Tesla Black · Blue-gray #4888cc · Dark environments, minimal distraction

Theme selection is stored in localStorage under key spTheme and restored on every page load. To reset to default: open DevTools console and run localStorage.removeItem('spTheme'), then reload.
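The persistence mechanism is a single localStorage key. A minimal sketch, assuming a data-theme attribute on the root element (the attribute name and setTheme() are guesses, only the spTheme key is from this guide):

// Sketch of theme persistence — spTheme is the documented key; the rest is assumed.
function setTheme(key) {
  document.documentElement.setAttribute('data-theme', key);
  localStorage.setItem('spTheme', key);
}

// Restore on load, defaulting to the 'ice' theme.
setTheme(localStorage.getItem('spTheme') || 'ice');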

22 // Troubleshooting
Gauges show "Run Scan All" after Refresh
Expected behavior. The Backup Health and Issues gauges require session history data (lastConcern) which is only populated by Scan All or individual client checks. A plain Refresh only re-fetches the accounts list. Run ◈ Scan All once per session to fully populate all gauges.
Connection fails — "Authentication failed (502)"
The local proxy is not running. Start it with node main.js in the console directory. Verify localhost:3000 is selected in the Proxy dropdown. Check that no firewall is blocking port 3000.
Client list is empty after successful scan
The portal table parser may not recognize the column names in your portal version. Open DevTools → Network, inspect the /proxy?path=...admin-accounts-list response, and check the actual column headers. Update the fallback arrays in parseAccounts() to match.
All clients show ONBOARDING after scan
The sessions table parse may have failed silently (wrong column names, no rows matched). Check parseSessions() with the actual session HTML. If all sessions return with no isOk or isFail matches, every client will fall through to ONBOARDING.
Loading modal gets stuck open
If a scan throws an unhandled error, the modal has a 10-second safety timeout that force-closes it. If it stays open beyond that, check the DevTools console for uncaught exceptions in the fetch chain. The modal can also be closed manually by running document.getElementById('loadingModal').classList.add('hidden') in the DevTools console.
Tech name not persisting between sessions
Tech name is stored in localStorage under key spTechName. If localStorage is cleared or the browser is in private/incognito mode, it will not persist. Enter and Save the name at the start of each session in that case.
Request times out after 30 seconds
Each portalFetch() has a 30-second AbortSignal. Slow portal responses or network latency to backup.securewebportal.net can trigger this. If consistent, check portal uptime or increase the timeout value in the setTimeout call inside portalFetch().

SPX CONSOLE KB · KrawTech MSP Tooling · console.html · v1.0