Knowledge Base · ShadowProtect SPX · Portal Scrape Architecture
SPX Console Reference Guide
Complete operational and integration reference for the ShadowProtect SPX Production Console — including an honest assessment of which integrations are real portal scraping, which are stubs, and exactly what would need to be built to make every feature production-ready.
Data layer: HTML scrape via local Node.js proxy
No official vendor API is used
Boot: Demo mode (no credentials needed)
Action buttons: UI stubs — not wired
01 //What Is This Tool

The ShadowProtect SPX Console is a self-contained single-file HTML dashboard that gives MSP technicians a real-time view of backup health across all clients managed through the StorageCraft partner portal. It surfaces risk, generates ticket notes, and drives consistent daily triage — all from one screen with no install required.

Single File — No Install
All HTML, CSS, JS, and mock data in one .html file. Open in any modern browser. No server, no npm, no build step.
Portal Scraping — Not a Vendor API
Data comes from scraping the StorageCraft admin console HTML pages through a local Node.js reverse proxy. There is no official ShadowProtect JSON API being called. See Section 02 for the full breakdown.
Opens in Demo Mode by Default
Ships with 12 fully-configured mock clients across 3 vaults. Opens directly — no credentials or proxy needed. Click ⬡ API Key to connect live.
02 //API Honesty Assessment
Read this before going to production
This section documents exactly what is and isn't a real vendor API integration. Several features appear wired in the UI but are either HTML scrapes, speculative probes, or placeholder stubs with no backend code. This is not a criticism — it's what you need to know before deploying.
Confirmed Working — Portal HTML Scrape
These admin console HTML pages are fetched through the local proxy and their table data is parsed with parsePortalTable():

/admin-console/admin-accounts-list — client accounts table
/admin-console/admin-sessions-list — backup sessions table
/admin-console/status-report — raw status HTML
/admin-console/manage/eventlog — event log table
/admin-console/manage/auditlog — audit log table
/auth — Basic Auth validation (returns JSON {ok, reason})
/proxy?path=... — all portal traffic routes through localhost:3000
Scraping Limitation — Not a Stable API
The portal pages are parsed with fallback column name chains ('Account Name' || 'Name' || 'Client' || ...) because the actual column names may vary between portal versions or account types. If StorageCraft changes a column header, parsing silently fails — all clients fall through to ONBOARDING. This is an inherent risk of HTML scraping vs. a real versioned API.
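The fallback-chain lookup can be sketched as a small helper. This is illustrative only: the shipped parsers inline the `||` chains directly, and `pickField` is not a function in the console code.

```javascript
// Illustrative sketch of the fallback-chain lookup used by the parsers.
// Returns the first non-empty value among candidate column names, or
// undefined if none match — which is how a renamed portal column
// silently produces empty fields for every row.
function pickField(row, candidates) {
  for (const name of candidates) {
    const value = row[name];
    if (value !== undefined && value !== '') return value;
  }
  return undefined;
}

// Example: a portal that renamed 'Account Name' to 'Customer' defeats
// the chain and the field comes back undefined for all rows.
const row = { Customer: 'ACME Corp', Status: 'Success' };
pickField(row, ['Account Name', 'Name', 'Client']); // no match
pickField(row, ['Status', 'Result']);               // matches 'Status'
```

This is why a header rename fails silently rather than loudly: `undefined` simply flows downstream until every client looks session-less.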
Speculative — interopAPI/rest/ Endpoints
The probeInteropAPI() function scans 11 candidate paths under /interopAPI/rest/ — accounts, sessions, devices, status, clients, backups, vaults, events, audit, backup-status, storageusage. None of these are confirmed to exist or return useful data. They are discovery probes. The results appear in the detail panel only to help you evaluate what the portal might expose. Do not rely on any of these for production logic.
Stubs — Action Buttons Are Not Wired
These three buttons render in the detail panel but have no fetch calls behind them. They will not do anything in production:

▶ Run Backup Now — no API call, no PSA ticket, no action
⚠ Send Alert — no email, no webhook, no PSA ticket
⬡ Auto-Resolve — no action

Similarly, api.setKey() is a placeholder: its body simply returns {ok:true} and performs no action.
What Would Need to Be Built for Full Production
1. Node.js proxy (main.js) — must be running at localhost:3000. Not included in this file.
2. Action button backends — Run Backup / Send Alert / Auto-Resolve need real API endpoints (StorageCraft API, PSA webhook, or email relay).
3. Column name validation — your specific portal's column headers need to be verified against the fallback chains in parseAccounts() and parseSessions().
4. interopAPI probe results — run the probe against your live portal to discover which endpoints actually return JSON data.
03 //Architecture

The console is entirely browser-side. All portal data flows through a local Node.js reverse proxy that injects Basic Auth session cookies and handles CORS. No data is stored beyond the browser session.

Browser (stack-shadow-protect-spx.html)
│
└─ fetch('/proxy?path=...') + Header: X-Target-Host: backup.securewebportal.net
   │
   ▼
localhost:3000 (node main.js)  ← must be running separately
│  Injects: session cookies (secureEfolderingDotCom, EFSB)
│  Strips:  CORS headers
│
└─► https://backup.securewebportal.net (StorageCraft portal)
      /admin-console/admin-accounts-list  → HTML table → parseAccounts()
      /admin-console/admin-sessions-list  → HTML table → parseSessions()
      /auth                               → JSON ← Basic Auth check
      /admin-console/status-report        → raw HTML
      /admin-console/manage/eventlog      → HTML table
      /admin-console/manage/auditlog      → HTML table
Layer | Role
UI Shell | All HTML/CSS rendering, 5 gauges, panels, 8 themes — in the single .html file
parsePortalTable() | Uses DOMParser to extract rows from any portal HTML page's first table. Returns objects keyed by column header.
parseAccounts() | Normalises account rows to {id, name, type, status} with multi-name fallback chains
parseSessions() | Normalises session rows to {accountId, isOk, isFail, date, size, duration, backupType}
buildCheckResult() | Computes per-client outcome, consecutive fails, 7-day fail rate, last success/failure, and snapshots array from filtered sessions
api object | Methods wrapping portalFetch(): clients, check, queryAll, onboardingClients, refresh, statusReport, eventLog, auditLog
Risk Engine | computeRisk() + computeDrift() — 0–100 score from session trend data
Credential Manager | Browser PasswordCredential API with sessionStorage fallback. Key: storagecraft-shadowprotect-spx
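The table-extraction step can be illustrated with a Node-friendly stand-in. The real parsePortalTable() uses DOMParser in the browser; the regex version below is for illustration only (`parseTableSketch` is not the shipped function, and regex HTML parsing should not be used against production portal pages).

```javascript
// Node-friendly sketch of the parsePortalTable() idea: pull <th> headers
// and <td> cells from the first <table> and return one object per row,
// keyed by column header. Naive regex parsing — illustration only.
function parseTableSketch(html) {
  const table = (html.match(/<table[\s\S]*?<\/table>/i) || [''])[0];
  const cellText = (cell) => cell.replace(/<[^>]*>/g, '').trim();
  const headers = [...table.matchAll(/<th[^>]*>([\s\S]*?)<\/th>/gi)]
    .map((m) => cellText(m[1]));
  const rows = [];
  for (const tr of table.matchAll(/<tr[^>]*>([\s\S]*?)<\/tr>/gi)) {
    const cells = [...tr[1].matchAll(/<td[^>]*>([\s\S]*?)<\/td>/gi)]
      .map((m) => cellText(m[1]));
    if (cells.length === 0) continue; // header row has no <td> cells
    const row = {};
    headers.forEach((h, i) => { row[h] = cells[i] ?? ''; });
    rows.push(row);
  }
  return rows;
}
```

Keying rows by header text is what makes the fallback chains in Section 21 necessary: the object keys are whatever the portal happens to call its columns.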
04 //UI Layout
Zone | Description
KTC Demo Bar | Fixed 36px top bar. Glowing cyan HOME button. Yellow "DEMO VERSION" notice. Adds padding-top:36px to body.
Header | Sticky 40px. Shield logo, app title, Tech Name input, Theme swatches (8), Last Scan badge, Refresh / Scan All / API Key / Zen buttons.
Connection Bar | Below header. Host field, Username, Password, Proxy mode selector, Connect / Disconnect buttons, active user display.
Gauge Strip | 5 donut-chart gauges in a CSS grid. Clickable filters. Skeleton shimmer until Scan All populates them.
NOC Operations Row | Top-risk chips · Vault status chips · NOC filter buttons (All/Critical/Failures/Drift/Healthy) · Auto-scan ▶ 5 min.
Main Area | CSS grid 390px 1fr. Left = Client List panel (Clients / ⚠ Queue tabs). Right = Detail panel.
05 //Header & Controls
Tech Name
Persisted to localStorage['spTechName']. Stamped at the bottom of every generated ticket note. If a client is already open when you save, the note rebuilds immediately via rebuildTicketDisplay().
Theme Swatches
8 circular pickers. Persisted to localStorage['spTheme']. Falls back to ice if saved key doesn't match any swatch.
Last Scan Badge
Time since most recent check. Turns green (fresh class) under 30 minutes.
↺ Refresh
Calls loadClients() — fetches accounts list and onboarding check in parallel. Updates client list without a full session scan.
◈ Scan All
Calls startupScan() — fetches both accounts and full session history via api.queryAll(), scores every client, populates all 5 gauges. Shows loading modal with animated ring progress.
⬡ API Key
Opens credential modal. Pre-fills from stored credentials and shows "Credentials found" badge when saved creds exist.
⊡ Zen Mode
Hides gauges, NOC row, and header. Toggle with Ctrl+Shift+Z.
06 //Gauge Strip

Five donut-chart gauges. Each is a clickable filter — clicking one filters the client list to matching clients. The active gauge dims all others to 55% opacity.

Gauge | Formula | Color logic
Total Devices | Count of all allClients entries | Always blue — informational
Backup Health | okN / (okN + failN) — excludes ONBOARDING | Green ≥90%, Yellow ≥70%, Red below 70%
Backed Up <24h | Clients where lastSuccessTime is within 24h. Only real success timestamps — never falls back to lastChecked. | Green ≥90%, Yellow ≥70%, Red below 70%
Onboarding | Count with lastConcern === 'ONBOARDING' | Always purple
Issues | Count with CONCERNED, CRITICAL, or WATCH (never ONBOARDING) | Green = 0, Yellow = issues but no CRITICAL, Red = any CRITICAL
Health and Issues gauges require Scan All
A plain Refresh only fetches the accounts list — it does not populate session history or lastConcern. Backup Health and Issues gauges show "Run Scan All" until a full scan completes.
07 //NOC Operations Row
Section | What it shows / does
⚠ Risks | Top 4 highest-risk clients by computed score. Each chip is clickable and opens that client's detail. Populated after Scan All.
Vaults | One chip per vault. In demo mode uses DEMO_VAULTS. In live mode derives vault topology from client-to-vault groupings via buildTopology(). Green = healthy, Yellow = degraded/≥90% full, Red = offline.
Filter buttons | All · Critical (score ≥60) · Failures (score ≥30) · Drift (last success ≥24h) · Healthy (score <21, not onboarding)
Auto-scan ▶ | 5-minute countdown. Fires api.queryAll() in live mode. In demo mode refreshes gauges from cache only.
08 //Client List Panel
Search
Debounced 150ms, client-side filter on name. Ctrl+/ focuses it.
Sort options
Name A–Z · Status — Critical first · Last Checked · Risk Score (highest first)
Left border color
Blue = OK · Yellow = WATCH · Red/orange = CONCERNED · Red = CRITICAL · Purple = ONBOARDING · Gray = Unknown
Risk score badge
0–100. Green <30, Yellow 30–59, Red ≥60. Only shown when score >0.
Drift badge
Shows Xh DRIFT or X.Xd DRIFT when last success ≥24h ago and not ONBOARDING.
09 //Detail Panel

Loads when a client is selected. Calls runCheck() which in demo mode reads from the cached allClients entry directly (avoiding "Failed to fetch"). In live mode calls api.check(client.id).

Tab | Contents
◈ Overview | Concern banner (pulses on CRITICAL). Backup Statistics accordion (last success, last failure, consecutive fails, 7-day rate). Chart.js bar+line trend chart. Snapshot sparkbar (14 most recent). Action buttons (stubs — see Section 23). Ticket note block. Additional Notes textarea.
⬡ AI Insights | Rule-based (no LLM): predicted failure risk 0–100, estimated hours to next failure, action suggestion text, and tags. Fully deterministic — computed from consecutiveFails and recentFailRatePct.
⊞ Compare | Shift+click clients to add them to the compare grid, or use Load All Scanned to auto-populate. Shows name, type, status badge, and 7-day fail estimate per card.
10 //Failure Queue

The ⚠ Queue tab in the left panel shows every client with a non-zero risk score, sorted by risk score descending then by most recent failure time. The tab badge shows the count of CRITICAL clients (score ≥60). Clicking any queue row calls runCheck() for that client. OK clients with score <21 and no lastFailure are excluded to keep the queue signal-only.

11 //Concern Levels

Derived by deriveConcern(data, fromBulk). Checks for a direct concern field first, then derives from outcome, then from trend data. In bulk scan mode (fromBulk=true) it does not guess ONBOARDING from snapshot timestamps alone.

Level | Trigger conditions
OK | Outcome OK, 7-day fail rate = 0%, no consecutive fails
WATCH | 7-day fail rate >0% and <20%, consecutive fails = 0
CONCERNED | Consecutive fails ≥1, OR 7-day fail rate ≥20%
CRITICAL | Consecutive fails ≥3, OR 7-day fail rate ≥50%, OR vault offline
ONBOARDING | No session activity in last 30 days. Risk score forced to 0.
UNKNOWN | Bulk scan with no direct concern field and insufficient data
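The trigger table maps to a rule cascade along these lines. This is a sketch, not the shipped deriveConcern(): the field names hasRecentSessions and vaultOffline are illustrative stand-ins, and the real function also checks a direct concern field and the outcome first.

```javascript
// Sketch of the concern-level cascade from the table above, evaluated
// most-severe-first so CRITICAL wins over CONCERNED, etc.
function deriveConcernSketch(t) {
  if (!t.hasRecentSessions) return 'ONBOARDING'; // no sessions in 30 days
  if (t.consecutiveFails >= 3 || t.recentFailRatePct >= 50 || t.vaultOffline)
    return 'CRITICAL';
  if (t.consecutiveFails >= 1 || t.recentFailRatePct >= 20)
    return 'CONCERNED';
  if (t.recentFailRatePct > 0) return 'WATCH';
  return 'OK';
}
```

Ordering matters: a client with 3 consecutive fails also satisfies the CONCERNED condition, so severity must be tested from the top down.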
12 //Risk Scoring

computeRisk(client) returns a score 0–100. Drives NOC risk chips, failure queue sort, and filter buttons. ONBOARDING clients always score 0.

Factor | Points
Vault offline | +35
≥3 consecutive fails | +40
1–2 consecutive fails | +20
7-day fail rate ≥50% | +25
7-day fail rate 20–49% | +12
Drift ≥72h | +25
Drift 36–72h | +15
Drift 24–36h | +8

Score | Classification | NOC filter match
0–29 | Healthy | Healthy filter
30–59 | Concerned | Failures filter
60–100 | Critical | Critical filter
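The point table translates directly into an additive scorer. A sketch, not the shipped computeRisk(), though it is term-for-term consistent with the table; field names follow the client data model in Section 22, and the cap at 100 is stated there too.

```javascript
// Sketch of the additive risk score. ONBOARDING clients always score 0;
// the sum is capped at 100.
function computeRiskSketch(c) {
  if (c.lastConcern === 'ONBOARDING') return 0;
  let score = 0;
  if (c.vaultOffline) score += 35;
  const fails = c.trend.consecutiveFails;
  if (fails >= 3) score += 40;
  else if (fails >= 1) score += 20;
  const rate = c.trend.recentFailRatePct;
  if (rate >= 50) score += 25;
  else if (rate >= 20) score += 12;
  const drift = c.trend.hoursSinceSuccess;
  if (drift >= 72) score += 25;
  else if (drift >= 36) score += 15;
  else if (drift >= 24) score += 8;
  return Math.min(score, 100);
}
```

Worked example: the demo client ACME DC01 (3 consecutive fails, 50% 7-day rate, 72h since success, vault online) scores 40 + 25 + 25 = 90, which lands it in the Critical filter bucket.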
13 //Drift Detection

computeDrift(client) measures hours since lastSuccessTime. A client can be technically OK (last job succeeded) while drifting if that success was 40h ago.

Threshold | Level | Badge label (example) | Risk pts
<24h | OK | 14h ago | 0
24–36h | WARN | 28h DRIFT | +8
36–72h | FAIL | 1.8d DRIFT | +15
≥72h | CRITICAL | 4.2d DRIFT | +25
No data | CRITICAL | No backup | +25
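The threshold table maps to a simple banding function. A sketch under the stated thresholds, not the shipped computeDrift(); the return shape { hours, level, riskPts } is an illustrative choice.

```javascript
// Sketch of drift banding: hours since lastSuccessTime mapped to the
// threshold table above. Missing data is treated as CRITICAL.
function computeDriftSketch(lastSuccessTime, now = Date.now()) {
  if (!lastSuccessTime) return { hours: null, level: 'CRITICAL', riskPts: 25 };
  const hours = (now - new Date(lastSuccessTime).getTime()) / 3600000;
  if (hours < 24) return { hours, level: 'OK', riskPts: 0 };
  if (hours < 36) return { hours, level: 'WARN', riskPts: 8 };
  if (hours < 72) return { hours, level: 'FAIL', riskPts: 15 };
  return { hours, level: 'CRITICAL', riskPts: 25 };
}
```

The `now` parameter is injectable so the banding is testable without depending on the wall clock.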
14 //Daily NOC Workflow
1
Verify tech name
Confirm your name is in the Tech Name field and click Save. Every ticket note this session will be stamped with it.
2
Click ◈ Scan All
The only button that populates all 5 gauges and the failure queue. Fetches accounts and full session history in parallel. Progress modal shows live ring animation.
3
Check Issues gauge and NOC risk chips
If Issues >0, click the gauge to filter to problem clients. The ⚠ Risks chips show the top 4 by score — click any to jump to that client's detail.
4
Work through the ⚠ Queue tab
Switch to the Queue tab. Work top-to-bottom by risk score. Click each row to open its detail. Review the concern banner and Backup Statistics accordion for root cause.
5
Add notes and copy ticket
Add context in Additional Notes. Click 📋 Copy ticket note (or Ctrl+Enter). Notes are appended as "Additional notes to consider." Paste into your PSA.
6
Enable Auto-scan for ongoing monitoring
Click ▶ 5 min in the NOC row. Console re-scans every 5 minutes in the background. Click again to stop.
15 //Ticket Note Generation

buildTicketNote(client, data) generates a structured note on every client check. Displayed in a <pre> block and copied to clipboard by Ctrl+Enter or clicking the copy button. Additional Notes are appended on copy if present.

Client ID: ACME Corp — DC01
Check Performed: 2026-03-22 08:14
Outcome: NOT OK
Concern: CRITICAL (Repeated failures)
Consecutive fails: 3
7-day fail rate: 50%
Evidence:
- Last Successful Backup: 2026-03-19 01:00
- Last Failure: Failed: 2026-03-22 01:00
Next steps:
- Validate agent status and service health.
- Confirm credentials and snapshot storage capacity.
M. Krawczyk
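The note layout can be reproduced with a simple template sketch. Field names are borrowed from the client data model in Section 22; `buildTicketNoteSketch` and its data shape are illustrative, not the shipped builder.

```javascript
// Sketch of the ticket-note template: one line per fact, tech name
// stamped at the bottom, joined with newlines for a <pre> block.
function buildTicketNoteSketch(client, data, techName) {
  return [
    `Client ID: ${client.name}`,
    `Check Performed: ${data.checkedAt}`,
    `Outcome: ${data.outcome}`,
    `Concern: ${data.concern}`,
    `Consecutive fails: ${data.trend.consecutiveFails}`,
    `7-day fail rate: ${data.trend.recentFailRatePct}%`,
    'Evidence:',
    `- Last Successful Backup: ${data.lastSuccess}`,
    `- Last Failure: ${data.lastFailure}`,
    'Next steps:',
    '- Validate agent status and service health.',
    techName,
  ].join('\n');
}
```

Because the note is plain text, it pastes cleanly into any PSA ticket body without further formatting.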
16 //Auto-Scan

startAutoScan() runs a full api.queryAll() every 5 minutes in live mode, or refreshes gauges and NOC data from cached demo clients in demo mode. Each cycle's countdown is fixed at 300 seconds.

▶ 5 min
Starts interval + countdown. Button changes to "⏹ Stop". Countdown shows M:SS remaining.
Scope
Refreshes client list, all gauges, NOC row, failure queue. Does not re-run the currently open client's detail check.
Demo mode
Calls renderNocRow(), renderFailureQueue(), and updateGauges() from cache — no portal fetch.
17 //Credentials & Auth

Credentials use the browser Credential Management API (PasswordCredential) with sessionStorage fallback (key: storagecraft-shadowprotect-spx). Credentials clear on tab close when using the fallback.

1
First launch
Enter Portal Host (default: backup.securewebportal.net), Username, Password. Click 🔐 Connect & Save. Or click 👁 Demo Mode to skip credentials entirely. DOMContentLoaded calls activateDemoMode() directly — demo loads with no modal.
2
Auth test
GET /auth fires to the local proxy with headers X-Target-Host, X-Auth-User, X-Auth-Pass. Proxy validates against portal. On success, _apiToken = 'session-active' is set as a sentinel — the session cookie is the real auth mechanism.
3
Update or disconnect
Click ⬡ API Key at any time. To fully disconnect, click ✕ Disconnect — calls clearCredentials(), removes sessionStorage entry, calls navigator.credentials.preventSilentAccess().
🔑
Credentials never leave localhost
All auth happens through the local proxy at localhost:3000. The browser only ever contacts localhost. The proxy injects credentials as HTTP headers when forwarding to the portal.
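The storage fallback can be sketched with a Map standing in for sessionStorage so the logic runs anywhere. The function names here are illustrative; the real code additionally tries the PasswordCredential API before falling back, and calls navigator.credentials.preventSilentAccess() on disconnect.

```javascript
// Sketch of the sessionStorage-fallback credential store. A Map stands
// in for window.sessionStorage; the key matches the documented one.
const STORE_KEY = 'storagecraft-shadowprotect-spx';

function saveCredentials(store, host, user, pass) {
  store.set(STORE_KEY, JSON.stringify({ host, user, pass }));
}

function loadCredentials(store) {
  const raw = store.get(STORE_KEY);
  return raw ? JSON.parse(raw) : null;
}

function clearCredentials(store) {
  store.delete(STORE_KEY); // real code also calls preventSilentAccess()
}
```

With real sessionStorage the same shape applies, which is why fallback-stored credentials vanish when the tab closes.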
18 //Proxy Setup

The console cannot contact the StorageCraft portal directly due to CORS and cookie requirements. The local Node.js proxy (main.js) accepts requests from the browser, injects session cookies captured during Basic Auth, and forwards to the portal.

Starting the proxy
# In the directory containing main.js
node main.js
# Expected: Proxy listening on http://localhost:3000
Proxy mode | When to use
localhost:3000 | Default. Use when opening the HTML file locally (file://) or from a local web server. Proxy must be running before connecting.
Direct (same-origin) | Use when the HTML is served from the same host as the portal. Rare in practice.

Every portalFetch() call has a 30-second AbortController timeout. The proxy port variable _proxyPort is read on every request — switching the dropdown mid-session takes effect on the next call.
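A minimal sketch of that fetch pattern, assuming a fixed proxy URL: the real code reads _proxyPort on each request and uses its own helper names, so treat `buildProxyUrl` and `portalFetchSketch` as illustrative.

```javascript
// Sketch of the portalFetch() pattern: proxy URL prefix, X-Target-Host
// header, and an AbortController-based 30 s timeout.
const PROXY = 'http://localhost:3000';
const TARGET_HOST = 'backup.securewebportal.net';

function buildProxyUrl(path) {
  return `${PROXY}/proxy?path=${encodeURIComponent(path)}`;
}

async function portalFetchSketch(path) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 30000); // 30 s cap
  try {
    const res = await fetch(buildProxyUrl(path), {
      headers: { 'X-Target-Host': TARGET_HOST },
      signal: controller.signal,
    });
    return await res.text();
  } finally {
    clearTimeout(timer); // always clear so the timer can't fire late
  }
}
```

The `finally` block matters: without it, a fast response leaves a live timer that aborts nothing but keeps the event loop busy.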

19 //Demo Mode

Demo mode is the default boot state. DOMContentLoaded calls activateDemoMode() directly — no blocking modal, no credentials needed. The loading modal starts class="hidden".

Client | Vault | Status | Scenario
ACME Corp — DC01 | Vault-01 | CRITICAL | 3 consecutive fails
ACME Corp — SQL01 | Vault-01 | OK | All healthy
Globex Industries | Vault-01 | CONCERNED | Intermittent — 50% fail rate
Initech — Domain | Vault-02 | OK | All healthy
Contoso Web | Vault-02 | WATCH | 1 recent fail
Fabrikam Inc | Vault-02 | OK | All healthy
Northwind Traders | Vault-02 | ONBOARDING | No sessions
Tailspin Toys | Vault-03 | CRITICAL | Vault-03 OFFLINE
Alpine Ski House | Vault-03 | CRITICAL | Vault-03 OFFLINE
Coho Vineyard | Vault-01 | OK | All healthy
Litware Inc — SQL | Vault-02 | OK | All healthy
Adventure Works | Vault-01 | WATCH | Last backup 38h ago — drift
Demo runCheck() bypass
When DEMO_MODE === true, runCheck() reads from the cached allClients entry instead of calling api.check(). This prevents "Failed to fetch" errors while still populating the full detail panel.
20 //Portal API Object

All portal interactions route through the api object. Every method calls portalFetch(path) which prepends the proxy URL, adds X-Target-Host, and sets a 30s abort timeout.

Method | Endpoint(s) | Returns
api.clients() | /admin-console/admin-accounts-list | {ok, clients[]} — normalised accounts
api.onboardingClients() | Both lists in parallel | Accounts with no sessions in last 30 days
api.queryAll() | Both lists in parallel | Full merged client array with outcome, trend, timestamps
api.check(id) | /admin-console/admin-sessions-list | Per-client result with snapshots, trend data
api.statusReport() | /admin-console/status-report | Raw portal status HTML
api.eventLog() | /admin-console/manage/eventlog | Parsed table rows
api.auditLog() | /admin-console/manage/auditlog | Parsed table rows
api.setKey() | None | STUB — returns {ok:true}, no action
21 //HTML Parsers

parsePortalTable(html, tableIndex) uses DOMParser to extract rows from the first (or indexed) table in any portal HTML page. Column names are used as object keys. Two higher-level parsers normalise the raw rows:

Parser | Column name fallback chain
parseAccounts() | Name: 'Account Name' → 'Name' → 'Client' → 'Account' → 'Company' → 'Organization' → first value
parseAccounts() | ID: 'ID' → 'Account ID' → href ?id= param → name-slugified
parseSessions() | Account: 'Account' → 'Account Name' → 'Client' → 'Machine' → 'Computer' → 'Name'
parseSessions() | Status: 'Status' → 'Result' → 'State' → 'Outcome'
parseSessions() | Date: 'Date' → 'Time' → 'Start Time' → 'Start' → 'Completed' → 'Timestamp'
Column name changes break parsing silently
If none of the fallback names match your portal's actual column headers, that field returns empty for every row. Clients with no parseable sessions fall through to ONBOARDING. To diagnose: open DevTools → Network, inspect the raw HTML response from the proxy, and compare actual column headers against the fallback chains.
22 //Client Data Model

Every entry in allClients (also exposed as window.allClients) follows this shape. Fields in blue are required by gauge and rendering logic.

{
  id: "acme-dc01",
  name: "ACME Corp — DC01",
  type: "Windows Server",
  vault: "Vault-01",
  vaultOffline: false,
  concern: "CRITICAL",
  lastConcern: "CRITICAL",                 // ← read by gauges, badges, renderList
  lastOutcome: "NOT OK",
  lastChecked: "2026-03-22T08:14:00Z",
  lastSuccess: { localTime: "2026-03-19T01:00:00Z", status: "Success" },
  lastSuccessTime: "2026-03-19T01:00:00Z", // ← drift calc + Backed Up <24h gauge
  lastFailure: { localTime: "2026-03-22T01:00:00Z", status: "Failed" },
  trend: {
    consecutiveFails: 3,
    recentFailRatePct: 50,                 // 7-day window
    failureRatePct: 50,                    // overall
    hoursSinceSuccess: 72
  },
  snapshots: [                             // last 14, newest first
    { localTime, status, isSuccess, size, duration, type }
  ]
}
23 //Stub Action Buttons
⊗ These buttons render in the detail panel but have no backend integration — they will not do anything in production.
Button | Current state | What would need to be built
▶ Run Backup Now | STUB — renders in actions-bar; no onclick handler beyond rendering. | A real StorageCraft API call to trigger an on-demand backup job, or an RMM automation script via your PSA/RMM API.
⚠ Send Alert | STUB — renders in actions-bar; no onclick handler. | A POST to a webhook (PSA, Teams, Slack, PagerDuty) or an email relay endpoint. The ticket note text is already available in lastTicketNote.
⬡ Auto-Resolve | STUB — disabled when the client is OK, enabled otherwise, but no action on click. | Depends on what "resolve" means in your workflow — close a PSA ticket, acknowledge an alert in your monitoring platform, or trigger a remediation script.
The ticket note text is already available when the button fires
When a client is open, lastTicketNote is always populated. Any button handler you wire can use it directly — e.g. POST it as the body of a PSA ticket creation webhook without any additional formatting work.
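As one example of wiring the ⚠ Send Alert stub, a hypothetical webhook handler might look like this. Everything here is an assumption to adapt, not existing console code: WEBHOOK_URL is a placeholder, and the payload shape and function names are invented for illustration.

```javascript
// Hypothetical Send Alert wiring: POST the already-built ticket note to
// a webhook endpoint (PSA, Teams, Slack, PagerDuty — whatever you run).
const WEBHOOK_URL = 'https://example.invalid/alerts'; // placeholder URL

function buildAlertPayload(client, ticketNote) {
  return {
    client: client.name,
    concern: client.lastConcern,
    note: ticketNote,       // the text from lastTicketNote, verbatim
    source: 'spx-console',
  };
}

async function sendAlert(client, ticketNote) {
  const res = await fetch(WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildAlertPayload(client, ticketNote)),
  });
  return res.ok;
}
```

Keeping the payload builder separate from the network call makes the interesting part testable without a live endpoint.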
24 //interopAPI/rest/ Probe

probeInteropAPI() fires 11 parallel requests to candidate paths under /interopAPI/rest/. Results appear in the detail panel. This is a discovery tool, not a production data source. No current logic in the dashboard reads from these endpoints for rendering or scoring.

Each endpoint's status is shown as one of: JSON ✓ (200 + JSON body), HTML 200 (200 but HTML, not JSON), a numeric HTTP status, or ERR (timeout/network error). A 4s timeout applies per request. Probed paths:

/interopAPI/rest/accounts
/interopAPI/rest/sessions
/interopAPI/rest/devices
/interopAPI/rest/status
/interopAPI/rest/clients
/interopAPI/rest/backups
/interopAPI/rest/vaults
/interopAPI/rest/events
/interopAPI/rest/audit
/interopAPI/rest/backup-status
/interopAPI/rest/storageusage
Do not build production logic against these endpoints yet
If any of these return JSON ✓ against your live portal, that is genuinely useful information — it means a real API surface may exist. But the shape of that JSON is unknown and not validated by this tool. Use the probe results as a starting point for manual exploration, not as a confirmed API contract.
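The per-endpoint classification step can be sketched as a pure function. Illustrative only: `classifyProbe` is not the shipped function name, and `status === null` here stands in for a timeout or network error.

```javascript
// Sketch of how each probe response maps to the status labels above.
function classifyProbe(status, body) {
  if (status === null) return 'ERR';       // timeout / network error
  if (status === 200) {
    try {
      JSON.parse(body);
      return 'JSON ✓';                     // 200 with a valid JSON body
    } catch {
      return 'HTML 200';                   // 200 but not JSON
    }
  }
  return String(status);                   // any other HTTP status, e.g. '404'
}
```

Note that classification says nothing about the JSON's shape; a JSON ✓ result only tells you the path exists and speaks JSON, which is exactly why the section above warns against treating it as an API contract.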
25 //Themes
Theme | Key
KrawTech Default | key: ice · default
Incident Red | key: red
Matrix Green | key: matrix
Deep Space Purple | key: purple
Amber Industrial | key: amber
Electric Cobalt | key: cobalt
Graphite Mono | key: mono

Selection persisted to localStorage['spTheme']. Falls back to ice if saved key doesn't match any swatch in the DOM. To reset: localStorage.removeItem('spTheme') then reload.

26 //Keyboard Shortcuts
Ctrl + Enter
Copy ticket note to clipboard
Ctrl + /
Focus and select the client search field
Ctrl+Shift+Z
Toggle Zen mode (hides header + gauges)
27 //Troubleshooting
Connection fails — "Authentication failed (502)"
The local proxy is not running. Start it with node main.js. Verify localhost:3000 is selected in the Proxy dropdown. Check nothing blocks port 3000.
All clients show ONBOARDING after scan
The sessions table parse failed silently — column names in your portal don't match any of the fallback chains in parseSessions(). Open DevTools → Network, find the proxy request for admin-sessions-list, and compare actual column headers against the fallback arrays in the code.
Client list empty after successful scan
Same root cause but in parseAccounts(). Check the admin-accounts-list response HTML for actual column names and add them to the fallback chains.
Gauges show "Run Scan All" after Refresh
Expected. A plain Refresh only fetches the accounts list — no session history. Backup Health and Issues gauges need session data. Run ◈ Scan All once per session.
Action buttons do nothing in production
Run Backup Now, Send Alert, and Auto-Resolve are UI stubs with no backend wiring. See Section 23 for what would need to be built behind each one. This is expected behavior, not a bug.
Loading modal stuck open
A scan error triggers a 10-second safety timeout that force-closes the modal. If it persists beyond that, check DevTools console for uncaught exceptions. Manual override: document.getElementById('loadingModal').classList.add('hidden').
Tech name not persisting between sessions
Stored in localStorage['spTechName']. Private/incognito mode clears localStorage on close. Enter and Save the name at the start of each session in that environment.
interopAPI probe shows ERR for all endpoints
Expected if those endpoints don't exist on your portal. This is a discovery scan — ERR on all is a valid result meaning the portal doesn't expose these paths. It does not affect any dashboard functionality.

KB · ShadowProtect SPX Console · stack-shadow-protect-spx.html · v1.0