// Knowledge Base · ShadowProtect SPX
SPX Console Reference Guide
Complete operational and integration reference for the ShadowProtect SPX Production Console — a single-file HTML dashboard for MSP technicians to monitor backup health, triage failures, generate ticket notes, and drive the StorageCraft partner portal from one screen.
Format: Self-contained .html
Auth: Basic Auth via local proxy
Data source: StorageCraft admin console HTML pages
Default boot: Demo mode (no credentials needed)
01 //What Is This Tool

The ShadowProtect SPX Console is a self-contained HTML file that gives MSP technicians a real-time unified view of backup status across every client managed through the StorageCraft partner portal. It replaces manual portal browsing with a purpose-built NOC dashboard that surfaces risk, generates ticket notes, and drives consistent daily triage.

Single File — No Install
The entire application — UI, logic, themes, and data layer — lives in one .html file. Open it in any modern browser. No server, no npm, no build step required.
Portal-Scraping Data Layer
Reads from the StorageCraft admin console HTML pages — /admin-console/admin-accounts-list and /admin-console/admin-sessions-list — via a local Node.js reverse proxy. No official JSON API required.
Opens in Demo Mode by Default
Ships with 12 fully-configured mock clients across 3 vaults covering critical failures, vault-offline scenarios, onboarding, and drift. Opens directly into demo data — no credentials or proxy needed. Clicking ⬡ API Key switches to live.
What it does well
Instant bulk status across all clients with one scan
Auto-generated, copy-ready ticket notes per client
Risk score + drift detection — no configuration
8-theme UI including Zen mode for NOC screens
Vault topology awareness — detects upstream failures
Limitations to know
Requires local proxy (node main.js) for live data
Data parsed from HTML tables — column name changes may break parsers
No persistent storage — session state only, clears on tab close
Credentials use sessionStorage fallback when browser Credential API unavailable
02 //Architecture

The console is entirely browser-side. It fetches data from the StorageCraft portal through a local Node.js reverse proxy (main.js) that injects Basic Auth session cookies and strips CORS. No data is stored beyond the session.

Request flow
Browser (stack-shadow-protect-spx.html)
│
└─ fetch('/proxy?path=...') + header X-Target-Host: backup.securewebportal.net
   │
   ▼
   localhost:3000 (node main.js)
   │  Injects: session cookies (secureEfolderingDotCom, EFSB)
   │  Strips:  CORS headers
   │
   └─► https://backup.securewebportal.net
          /admin-console/admin-accounts-list  → HTML table → parseAccounts()
          /admin-console/admin-sessions-list  → HTML table → parseSessions()
          /admin-console/status-report        → raw HTML
          /auth                               → Basic Auth validation
Layer | Role | Location
UI Shell | All HTML/CSS rendering, panels, gauges, 8 themes | stack-shadow-protect-spx.html
HTML Parser | parsePortalTable() extracts rows from portal HTML; parseAccounts() and parseSessions() normalise column names | Inline JS
api object | Wraps all portal fetches: clients(), queryAll(), check(id), statusReport(), eventLog(), auditLog() | Inline JS
Risk Engine | computeRisk() and computeDrift() produce 0–100 scores from session trends | Inline JS
Credential Manager | Uses browser PasswordCredential API with sessionStorage fallback. Key: storagecraft-shadowprotect-spx | Browser storage
Proxy | Reverse-proxies requests, injects cookies, handles CORS. Auth validated via GET /auth | main.js (Node.js)
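The browser side of this flow can be sketched in a few lines. portalFetch, the /proxy?path= shape, the X-Target-Host header, and the 30-second timeout all come from this guide; buildProxyRequest and the exact implementation details are illustrative assumptions, not the shipped code:

```javascript
// Sketch of the browser-side request path. buildProxyRequest is an assumed
// helper; portalFetch, X-Target-Host and the proxy URL shape are documented.
const TARGET_HOST = 'backup.securewebportal.net';
const _proxyPort = '3000';

// Build the descriptor for a proxied portal request (pure, testable).
function buildProxyRequest(path, proxyPort = _proxyPort) {
  return {
    url: `http://localhost:${proxyPort}/proxy?path=${encodeURIComponent(path)}`,
    headers: { 'X-Target-Host': TARGET_HOST },
  };
}

// portalFetch would hand this to fetch() with a 30-second abort timeout.
async function portalFetch(path) {
  const { url, headers } = buildProxyRequest(path);
  const ctl = new AbortController();
  const timer = setTimeout(() => ctl.abort(), 30_000); // documented 30 s limit
  try {
    const res = await fetch(url, { headers, signal: ctl.signal });
    return await res.text(); // portal pages are HTML, not JSON
  } finally {
    clearTimeout(timer);
  }
}
```

The proxy receives only the path and the target host header; credentials are injected server-side by main.js.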
03 //UI Layout

The console is a single-page app with a fixed chrome layer at the top and a two-column main area. From top to bottom and left to right:

Zone | Description
KTC Demo Bar | Fixed top bar (36px, #07111f background). Glowing cyan HOME button links back to the command hub. Yellow "DEMO VERSION" notice centered. Adds padding-top:36px to body.
Header | Sticky, 40px. Shield logo, app title, Tech Name input + Save, theme swatches (8 options), Last Scan badge, Refresh / Scan All / API Key / Zen buttons.
Gauge Strip | 5 donut-chart gauges in a CSS grid. Each is a clickable filter. Populated by Scan All. Skeleton shimmer until data loads.
Gauge Refresh Row | Active filter chip (clears on click) and Refresh Gauges button. Recalculates from cached data without an API call.
NOC Operations Row | ⚠ Top-risk chips · Vault status chips · NOC filter buttons (All/Critical/Failures/Drift/Healthy) · Auto-scan (▶ 5 min).
Main Area | CSS grid: 390px 1fr. Left = Client List panel (with Clients / ⚠ Queue tabs). Right = Detail panel. Sidebar collapse button between them.
Footer | Dim monospace text: "StorageCraft · ShadowProtect SPX Console".
04 //Header & Controls
Tech Name
Free-text field persisted to localStorage under spTechName. Stamped at the bottom of every generated ticket note. If a ticket is already open when you save, it rebuilds immediately with the new name.
Theme Swatches
8 circular color pickers. Selection persisted under spTheme. Falls back to ice if the saved key doesn't match any swatch in the DOM.
Last Scan Badge
Shows time since the most recent check. Turns green (fresh class) when under 30 minutes. Font: JetBrains Mono, 9px, 1px letter-spacing.
↺ Refresh
Calls loadClients() — fetches accounts list and onboarding check in parallel. Updates client list without a full session scan. Use at session start or after client changes.
◈ Scan All
Calls startupScan() — fetches both accounts and full session history simultaneously via api.queryAll(), scores every client, populates all 5 gauges. Shows the loading modal with animated ring progress.
⬡ API Key
Opens the credential modal to enter or update Host / Username / Password. Pre-fills from stored credentials. Shows "Credentials found" badge when creds exist.
⊡ Zen
Hides gauges, NOC row, and header to expand the two panels to full screen. Toggle with Ctrl+Shift+Z. "← Exit Zen" button appears in the detail panel bar.
‹ Collapse
Between the two main panels. Collapses the client list sidebar to give the detail panel full width. CSS transition on grid-template-columns.
05 //Gauge Strip

Five donut-chart gauges run across the top. Each is a clickable filter — clicking a gauge filters the client list to only matching clients. The active gauge dims all others to 55% opacity. Click again to clear.

Gauge | Formula | Color logic
Total Devices | Count of all allClients entries. | Always blue — informational only.
Backup Health | okN / (okN + failN) — excludes ONBOARDING clients. Requires Scan All to populate. | Green ≥90%, Yellow ≥70%, Red below 70%.
Backed Up <24h | Count of non-onboarding clients where lastSuccessTime is within 24 hours. Only real success timestamps count — never falls back to lastChecked. | Green ≥90%, Yellow ≥70%, Red below 70%.
Onboarding | Count with lastConcern === 'ONBOARDING'. | Always purple — no alarm threshold.
Issues | Count with CONCERNED, CRITICAL, or WATCH status (never ONBOARDING). Requires Scan All. | Green = 0, Yellow = issues but no CRITICAL, Red = any CRITICAL.
Health and Issues gauges show "Run Scan All" after a plain Refresh
A plain Refresh only fetches the accounts list — it does not populate lastConcern. Backup Health and Issues need session history data from Scan All or an individual client check before they show real numbers.
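The gauge math above is simple enough to sketch directly. computeGauges is an assumed name; the lastConcern and lastSuccessTime fields follow the client data model documented later in this guide:

```javascript
// Hedged sketch of the five gauge formulas. computeGauges is an assumed
// name; field names mirror the documented client data model.
function computeGauges(clients, now = Date.now()) {
  const active = clients.filter(c => c.lastConcern !== 'ONBOARDING');
  const okN = active.filter(c => c.lastConcern === 'OK').length;
  const failN = active.filter(c =>
    ['WATCH', 'CONCERNED', 'CRITICAL'].includes(c.lastConcern)).length;
  return {
    totalDevices: clients.length,                 // informational only
    backupHealthPct: okN + failN                  // okN / (okN + failN)
      ? Math.round((okN / (okN + failN)) * 100)
      : 0,                                        // no session data yet
    backedUp24h: active.filter(c => c.lastSuccessTime &&   // real successes only
      now - Date.parse(c.lastSuccessTime) < 24 * 3600 * 1000).length,
    onboarding: clients.length - active.length,
    issues: failN,
  };
}
```

Note how a client with no lastConcern (never scanned) counts toward neither okN nor failN, which is why Backup Health stays blank until Scan All runs.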
06 //NOC Operations Row
Section | What it shows / does
⚠ Risks | Top 4 highest-risk clients by computed score. Each chip is clickable — selecting one opens that client's detail panel. Populated after Scan All via renderNocRow().
Vaults | One chip per vault. Green = healthy, Yellow = degraded replication or capacity ≥90%, Red = offline. In demo mode uses DEMO_VAULTS — Vault-03 is forced offline. In live mode derives topology from client-to-vault groupings via buildTopology().
Filter | 5 quick-filter buttons overriding the gauge filter: All, Critical (score ≥60), Failures (score ≥30), Drift (last success ≥24h), Healthy (score <21, not onboarding).
Auto-Scan | Starts a 5-minute countdown. Fires triggerAutoScan() on expiry, which calls api.queryAll() in live mode or refreshes gauges from cache in demo mode. Countdown displays as M:SS and resets to 5:00 after each fire.
07 //Client List Panel

The left panel shows all clients as rows with colored left borders indicating status. Clicking a row runs a live check (or reads from demo cache) and opens the detail panel. Tabs at the top toggle between the client list and the Failure Queue.

Search
Debounced 150ms, client-side filter on client name. Ctrl+/ focuses it.
Sort options
Name A–Z (default), Status — Critical first (uses CONCERN_ORDER map), Last Checked, Risk Score (highest first).
Left border color
Blue = OK · Yellow = WATCH · Orange/red = CONCERNED · Red = CRITICAL · Purple = ONBOARDING · Gray = Unknown. Active row highlights border to full opacity.
Risk score badge
Numeric 0–100. Green <30, Yellow 30–59, Red ≥60. Only shown when score >0.
Drift badge
Shows Xh DRIFT or X.Xd DRIFT when last success ≥24h ago and client is not ONBOARDING.
Skeleton shimmer
Gauge cards show a shimmer animation while data loads (class loading). Removed by updateGauges() on first render.
08 //Detail Panel

The right panel loads when a client is selected. It calls runCheck() which in live mode calls api.check(), and in demo mode reads directly from the cached allClients entry to avoid a "Failed to fetch" error. Three tabs render different views.

Tab | Contents
◈ Overview | Concern banner (color-coded, pulses on CRITICAL). Backup Statistics accordion (last success, last failure, consecutive fails, 7-day rate, overall rate). Snapshot sparkbar (14 most recent, color-coded). Backup trend chart (Chart.js, 30-day bar + success-rate line). Action buttons. Ticket note block. Additional Notes textarea.
⬡ AI Insights | Rule-based analysis: predicted failure risk score 0–100, estimated hours to next failure, suggested action text, and tags (Consecutive Fails, High Fail Rate, Stale Backup, Healthy, No Activity). Fully deterministic — no LLM call.
⊞ Compare | Shift+click clients to add to the compare grid, or use Load All Scanned to auto-populate with all scanned clients. Shows name, type, status badge, 7-day fail estimate per card.
Concern banner styles — read from source
OK
No failures detected. Backups running successfully. No action required.
WATCH
Some failures observed but not consistent. Monitor for 24–48h.
CONCERNED
Recurring failures trending upward. Verify agent, credentials, storage.
CRITICAL
Multiple consecutive failures. Immediate attention required. Banner pulses via critical-pulse keyframe animation.
ONBOARDING
No backup activity in 30 days. Verify agent installation and policy assignment.
09 //Failure Queue

The ⚠ Queue tab in the left panel surfaces every client with a non-zero risk score, sorted by risk score descending then by most recent failure time. It is a prioritized work queue for the NOC session.

Queue badge
Red number on the tab = count of CRITICAL clients (score ≥60). Hidden when no critical items exist.
Row colors
Red border-left = CRITICAL score, Yellow = WARN score, no border = informational. Score chip on right: green/yellow/red by threshold.
Clickable rows
Clicking any queue row calls runCheck() for that client — identical to clicking in the client list.
Exclusions
Clients with score = 0 and no ONBOARDING status are excluded. OK clients with score <21 and no lastFailure are also excluded to keep the queue signal-only.
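The queue rules above can be sketched as a filter-and-sort pipeline. buildFailureQueue is an assumed name, and riskScore stands in for wherever the computed risk score is stored on each client:

```javascript
// Sketch of the failure-queue builder under the documented rules.
// buildFailureQueue and riskScore are illustrative names.
function buildFailureQueue(clients) {
  const failAt = c => c.lastFailure ? Date.parse(c.lastFailure.localTime) : 0;
  return clients
    .filter(c => c.riskScore > 0)                 // zero-score clients excluded
    .filter(c => !(c.lastConcern === 'OK' &&      // keep the queue signal-only
                   c.riskScore < 21 && !c.lastFailure))
    .sort((a, b) =>
      (b.riskScore - a.riskScore) ||              // risk score descending…
      (failAt(b) - failAt(a)));                   // …then most recent failure
}
```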
10 //Walkthrough Tour

The 10-step guided tour (WT array) highlights key UI elements with a spotlight overlay and a floating card. It is triggered by first-time detection via localStorage.getItem('spTourDone'). Steps can be navigated with Next/Back, and the tour can be skipped at any step.

#TitleTarget element
01Welcome to ShadowProtect ConsoleNone (centered card)
02Step 1 — Set Your API Key#btnSetKey
03Step 2 — Enter Your Tech Name#techBar
04Step 3 — Refresh Client List#btnRefresh
05Step 4 — Check a Client in Real Time#list
06Step 5 — Review Status & Ticket Note#detailBody
07Step 6 — Add Your Own Notes#btnCopyTicket
08Step 7 — Scan All Clients at Once#btnQueryAll
09Step 8 — Filter by Gauge#gaugeStrip
10Step 9 — Refresh Clients Anytime#btnRefresh
11 //Concern Levels

Concern levels are derived by deriveConcern(data, fromBulk). The function checks for a direct concern field first, then derives from outcome, then from trend data. In bulk scan mode (fromBulk=true) it does not guess ONBOARDING from snapshot timestamps — those aren't available from the sessions list alone.

Level | Trigger conditions | Priority
OK | Outcome is OK, 7-day fail rate = 0%, no consecutive fails | Lowest
WATCH | 7-day fail rate >0% and <20%, consecutive fails = 0 | Low
CONCERNED | Consecutive fails ≥1, OR 7-day fail rate ≥20%; outcome is NOT OK | Medium
CRITICAL | Consecutive fails ≥3, OR 7-day fail rate ≥50%, OR vault offline | Highest
ONBOARDING | No session activity in the last 30 days. Overrides all scoring — risk score forced to 0. | Separate
UNKNOWN | Bulk scan with no direct concern field and insufficient data to classify | N/A
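The trend-based part of this precedence can be reconstructed as a cascade. This is a sketch only: the real deriveConcern also honours a direct concern field and the raw outcome first, and noActivity30d is a hypothetical flag standing in for the 30-day activity check:

```javascript
// Illustrative reconstruction of the precedence table. The shipped
// deriveConcern(data, fromBulk) checks a direct concern field and the
// outcome before falling back to trend data; noActivity30d is assumed.
function deriveConcernFromTrend(c) {
  const t = c.trend || {};
  if (c.noActivity30d) return 'ONBOARDING';       // overrides all scoring
  if (c.vaultOffline || t.consecutiveFails >= 3 ||
      t.recentFailRatePct >= 50) return 'CRITICAL';
  if (t.consecutiveFails >= 1 || t.recentFailRatePct >= 20) return 'CONCERNED';
  if (t.recentFailRatePct > 0) return 'WATCH';
  return 'OK';
}
```

Evaluating highest severity first is what makes the table's overlapping conditions unambiguous.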
12 //Risk Scoring

computeRisk(client) runs on every client and returns a score 0–100. It drives the NOC risk chips, the failure queue sort, and the filter buttons. ONBOARDING clients are always forced to score 0 and their tags are cleared.

Factor | Points
Vault offline | +35 pts
≥3 consecutive fails | +40 pts
1–2 consecutive fails | +20 pts
7-day fail rate ≥50% | +25 pts
7-day fail rate 20–49% | +12 pts
Drift ≥72h | +25 pts
Drift 36–72h | +15 pts
Drift 24–36h | +8 pts
Score | Classification | Badge color | NOC filter match
0–29 | Healthy | OK | Healthy filter
30–59 | Watch / Concerned | CONCERNED | Failures filter
60–100 | Critical | CRITICAL | Critical filter
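Because the factors are additive, the scorer is easy to sketch. The point values come straight from the tables above; the mutually exclusive else-if tiers and the clamp to 100 are assumptions about how the shipped computeRisk combines them:

```javascript
// Sketch of computeRisk using the documented point values. The tiering
// (else-if within each factor group) and the 100-point clamp are assumed.
function computeRiskSketch(c) {
  if (c.lastConcern === 'ONBOARDING') return 0;   // onboarding always scores 0
  const t = c.trend || {};
  let score = 0;
  if (c.vaultOffline) score += 35;
  if (t.consecutiveFails >= 3) score += 40;       // ≥3 consecutive fails
  else if (t.consecutiveFails >= 1) score += 20;  // 1–2 consecutive fails
  if (t.recentFailRatePct >= 50) score += 25;
  else if (t.recentFailRatePct >= 20) score += 12;
  const h = t.hoursSinceSuccess ?? 0;
  if (h >= 72) score += 25;                       // drift tiers
  else if (h >= 36) score += 15;
  else if (h >= 24) score += 8;
  return Math.min(100, score);                    // scores are 0–100
}
```

Under these assumptions the demo client ACME Corp — DC01 (3 consecutive fails, 50% 7-day rate, 72h since success) lands at 40 + 25 + 25 = 90, matching the demo table.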
13 //Drift Detection

computeDrift(client) measures elapsed time since the last successful backup using lastSuccessTime. A client can be technically OK (last job succeeded) while drifting if that success was 40h ago. Drift contributes to risk score independently of the concern level.

Threshold | Level | Badge label | Risk pts
<24h | OK | 14h ago | 0
24–36h | WARN | 28h DRIFT | +8
36–72h | FAIL | 1.8d DRIFT | +15
≥72h | CRITICAL | 4.2d DRIFT | +25
No data | CRITICAL | No backup | +25
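The threshold ladder above can be sketched as a single function. computeDriftSketch is an assumed name, and the switch from hours ("28h DRIFT") to days ("1.8d DRIFT") at the 36h boundary is inferred from the badge examples, not confirmed:

```javascript
// Sketch of computeDrift's thresholds. The hours-vs-days label cutoff at
// 36h is an assumption inferred from the example badges in the table.
function computeDriftSketch(hoursSinceSuccess) {
  if (hoursSinceSuccess == null)
    return { level: 'CRITICAL', label: 'No backup', riskPts: 25 };
  const h = hoursSinceSuccess;
  const label = h < 36 ? `${Math.round(h)}h` : `${(h / 24).toFixed(1)}d`;
  if (h < 24) return { level: 'OK', label: `${Math.round(h)}h ago`, riskPts: 0 };
  if (h < 36) return { level: 'WARN', label: `${label} DRIFT`, riskPts: 8 };
  if (h < 72) return { level: 'FAIL', label: `${label} DRIFT`, riskPts: 15 };
  return { level: 'CRITICAL', label: `${label} DRIFT`, riskPts: 25 };
}
```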
14 //Daily NOC Workflow
1
Verify tech name
Confirm your name is in the Tech Name field and click Save. Every ticket note generated in this session will be stamped with it.
2
Click ◈ Scan All
The only button that populates all 5 gauges and the failure queue. Fetches accounts and full session history in parallel. Progress modal shows live ring animation with OK/Issues/Onboarding counts as data comes in.
3
Check the Issues gauge and NOC risk chips
If Issues > 0, click the gauge to filter to problem clients. The ⚠ Risks chips in the NOC row show the top 4 by score — click any to jump straight to that client's detail panel.
4
Work through the ⚠ Queue tab
Switch to the Queue tab in the left panel. Work top-to-bottom by risk score. Click each row to open its detail. Review the concern banner and Backup Statistics accordion for root cause context.
5
Add notes and copy ticket
Scroll to Additional Notes in the detail panel. Add context observed. Click 📋 Copy ticket note (or Ctrl+Enter) — the note includes your additional notes appended as "Additional notes to consider." Paste into your PSA.
6
Enable Auto-scan for ongoing monitoring
Click ▶ 5 min in the NOC row. The console re-scans every 5 minutes and refreshes all panels in the background while you work other tickets. Click again to stop.
15 //Ticket Note Generation

buildTicketNote(client, data) generates a structured note on every client check. It is displayed in a <pre> block and copied to clipboard by btnCopyTicket or Ctrl+Enter. If Additional Notes are present they are appended as a separate section.

Client ID: ACME Corp — DC01
Check Performed: 2026-03-22 08:14
Outcome: NOT OK
Concern: CRITICAL (Repeated failures)
Consecutive fails: 3
7-day fail rate: 50% (overall: 50%)
Evidence:
- Last Successful Backup: 2026-03-19 01:00
- Last Failure: Failed: 2026-03-22 01:00
Next steps:
- Validate agent status and service health.
- Confirm credentials and snapshot storage capacity.

M. Krawczyk
Tech name rebuilds live
If you update the tech name while a client is already open, clicking Save calls rebuildTicketDisplay() — the note updates immediately without re-running the check.
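A minimal reconstruction of the note layout looks like the following. This is a sketch of the template shown above, not the shipped buildTicketNote — field names follow the client data model, and the exact section ordering inside the real function may differ:

```javascript
// Illustrative sketch of the ticket-note template. The real
// buildTicketNote(client, data) lives in the HTML file; this mirrors
// the sample note above using the documented data-model fields.
function buildTicketNoteSketch(client, data, techName, extraNotes = '') {
  const t = data.trend;
  const lines = [
    `Client ID: ${client.name}`,
    `Check Performed: ${data.lastChecked}`,
    `Outcome: ${data.lastOutcome}`,
    `Concern: ${data.lastConcern}`,
    `Consecutive fails: ${t.consecutiveFails}`,
    `7-day fail rate: ${t.recentFailRatePct}% (overall: ${t.failureRatePct}%)`,
    'Evidence:',
    `- Last Successful Backup: ${data.lastSuccessTime}`,
    `- Last Failure: ${data.lastFailure.status}: ${data.lastFailure.localTime}`,
  ];
  if (extraNotes) lines.push('Additional notes to consider:', extraNotes);
  lines.push(techName);                  // tech-name stamp, last line
  return lines.join('\n');
}
```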
16 //Auto-Scan

startAutoScan() runs a full api.queryAll() every 5 minutes in live mode, or refreshes gauges and NOC data from cached demo clients in demo mode. The countdown timer updates every second.

▶ 5 min
Starts interval + countdown. Button changes to "⏹ Stop". Countdown shows M:SS remaining.
⏹ Stop
Clears both _autoScanInterval and _autoScanCountdownTimer. Resets button text and clears countdown display.
Auto-scan scope
Refreshes: client list, all gauges, NOC row, failure queue. Does not re-run the currently open client's detail check — click the client again or click its NOC chip to refresh its detail.
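The M:SS display is the only nontrivial formatting in the countdown; a sketch (formatCountdown is an assumed name):

```javascript
// Minimal sketch of the countdown display used by Auto-scan.
// formatCountdown is an assumed name; the M:SS format is documented.
function formatCountdown(secondsLeft) {
  const m = Math.floor(secondsLeft / 60);
  const s = secondsLeft % 60;
  return `${m}:${String(s).padStart(2, '0')}`; // e.g. 300 → "5:00"
}
```

The timer would tick this down from 300 each second and reset to 5:00 after each scan fires.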
17 //Credentials & Auth

Credentials are managed through a first-run modal. The console uses the browser Credential Management API (PasswordCredential) where available, falling back to sessionStorage with key storagecraft-shadowprotect-spx. Credentials clear on tab close when using the fallback.

1
First launch — credential modal
Enter Portal Host (default: backup.securewebportal.net), Username, and Password. Click 🔐 Connect & Save. Or click 👁 Demo Mode to skip credentials entirely.
2
Auth test fires via proxy
A GET /auth fires to the local proxy with headers X-Target-Host, X-Auth-User, X-Auth-Pass. The proxy validates against the portal. On success, _apiToken = 'session-active' is set as a sentinel and the loading modal appears for startup scan.
3
Subsequent loads — auto-connect
On load the DOMContentLoaded handler now calls activateDemoMode() directly — demo data loads immediately with no modal. To use live credentials, click ⬡ API Key and connect. If you previously saved credentials the modal pre-fills them and shows the "Credentials found" badge.
4
Update or disconnect
Click ⬡ API Key in the header at any time. To fully disconnect, click ✕ Disconnect in the connection bar — this calls clearCredentials(), removes sessionStorage entry, and calls navigator.credentials.preventSilentAccess() to require re-authentication next time.
🔑
Credentials never leave localhost
All auth happens through the local proxy at localhost:3000. The browser only ever contacts localhost. The proxy injects credentials as HTTP headers when forwarding to the portal. Nothing is sent to any remote server.
18 //Proxy Setup

The console cannot contact the StorageCraft portal directly from a browser due to CORS and cookie requirements. The local Node.js proxy (main.js) accepts requests from the browser, injects session cookies captured during Basic Auth, and forwards to the portal.

Starting the proxy
# In the directory containing main.js
node main.js

# Expected output:
Proxy listening on http://localhost:3000
Proxy mode | When to use
localhost:3000 | Default. Use when opening the HTML file locally (file://) or from a local web server. Proxy must be running before connecting.
Direct (same-origin) | Use when the HTML file is served from the same host as the portal. All fetches route without the proxy prefix. Rare in practice.

The _proxyPort variable (default '3000') is read on every portalFetch() call. Switching the dropdown mid-session takes effect on the next request. Every portalFetch() has a 30-second AbortController timeout.

19 //Demo Mode

Demo mode loads instantly with no credentials or proxy. It is the default boot state: DOMContentLoaded calls activateDemoMode() directly, bypassing initCredentials() entirely. The loading modal is hidden at page start (class="hidden"), so there is no blocking splash screen.

Client | Vault | Status | Scenario
ACME Corp — DC01 | Vault-01 | CRITICAL | 3 consecutive fails — score 90
ACME Corp — SQL01 | Vault-01 | OK | All healthy
Globex Industries | Vault-01 | CONCERNED | Intermittent — 50% fail rate
Initech — Domain | Vault-02 | OK | All healthy
Contoso Web | Vault-02 | WATCH | 1 recent fail, low rate
Fabrikam Inc | Vault-02 | OK | All healthy
Northwind Traders | Vault-02 | ONBOARDING | No sessions — new client
Tailspin Toys | Vault-03 | CRITICAL | Vault-03 OFFLINE — score 100
Alpine Ski House | Vault-03 | CRITICAL | Vault-03 OFFLINE — score 100
Coho Vineyard | Vault-01 | OK | All healthy
Litware Inc — SQL | Vault-02 | OK | All healthy
Adventure Works | Vault-01 | WATCH | Last backup 38h ago — drift
Vault-03 Offline Scenario
Tailspin Toys and Alpine Ski House are on Vault-03, flagged offline:true in DEMO_VAULTS. Both show CRITICAL regardless of session history. The vault chip in the NOC row shows ⊘ OFFLINE in red.
Demo runCheck() bypass
When DEMO_MODE === true, runCheck() reads from the cached allClients entry instead of calling api.check(). This prevents "Failed to fetch" errors while still populating the full detail panel with realistic data.
Exiting demo mode
Click ✕ Connect Live in the demo banner, or click ⬡ API Key. This sets DEMO_MODE = false, removes body.demo-mode class, clears all client data, and re-opens the credential modal.
20 //Portal API Object

All portal interactions are routed through the api object. Each method calls portalFetch(path), which prepends the proxy URL, adds X-Target-Host, and sets a 30-second abort timeout.

Method | Endpoint(s) | Returns
api.clients() | /admin-console/admin-accounts-list | {ok, clients[]} — normalised accounts
api.onboardingClients() | Both lists in parallel | Accounts with no sessions in last 30 days — flagged ONBOARDING
api.queryAll() | Both lists in parallel | Full merged client array with outcome, trend, last success/failure, checked timestamp
api.check(id) | /admin-console/admin-sessions-list | Per-client result with snapshots array, trend data — used by runCheck()
api.statusReport() | /admin-console/status-report | Raw portal status HTML
api.eventLog() | /admin-console/manage/eventlog | Parsed table rows
api.auditLog() | /admin-console/manage/auditlog | Parsed table rows
21 //HTML Parsers

parsePortalTable(html, tableIndex) uses DOMParser to extract every row from a portal page's first (or indexed) <table>. Column names are used as object keys. Rows with all-empty cells are filtered out. Two higher-level parsers normalise the raw rows:

Parser | Column name fallback chain
parseAccounts() | Name: Account Name → Name → Client → Account → Company → Organization → first value
 | ID: ID → Account ID → href ?id= param → name-slugified
parseSessions() | Account: Account → Account Name → Client → Machine → Computer Name
 | Status: Status → Result → State → Outcome
 | Date: Date → Time → Start Time → Start → Completed → Timestamp
Column name changes break parsing silently
If the portal updates a column header and none of the fallback names match, that field returns empty for every row. Clients with no parseable sessions will all fall through to ONBOARDING. To diagnose: open DevTools → Network, inspect the raw HTML response from the proxy, and compare actual column headers against the fallback arrays in parseAccounts() and parseSessions().
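The fallback-chain lookup itself is a one-liner worth seeing, because it explains the silent-failure mode above. pickColumn is an assumed helper name; the candidate arrays mirror the documented chains:

```javascript
// Sketch of the fallback-chain lookup behind parseAccounts()/parseSessions().
// pickColumn is an assumed name; a miss returns '' — the silent failure
// mode described above, with no error thrown.
function pickColumn(row, candidates) {
  for (const key of candidates) {
    if (row[key] != null && row[key] !== '') return row[key];
  }
  return ''; // every candidate missed: field is empty for this row
}

// Candidate list mirroring the documented account-name chain.
const ACCOUNT_NAME_KEYS = ['Account Name', 'Name', 'Client', 'Account',
                           'Company', 'Organization'];
```

A renamed portal column simply falls off the end of the chain, which is why the only reliable diagnostic is comparing the raw HTML headers against these arrays.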
22 //Client Data Model

Every entry in allClients (also exposed as window.allClients) follows this shape. Fields marked with ← are required by the gauge and rendering logic.

{
  id: "acme-dc01",                          // stable identifier
  name: "ACME Corp — DC01",                 // display name
  type: "Windows Server",
  vault: "Vault-01",
  vaultOffline: false,                      // true = vault flagged offline
  concern: "CRITICAL",                      // primary concern field
  lastConcern: "CRITICAL",                  // ← read by gauges, badges, renderList
  lastOutcome: "NOT OK",                    // set after individual check
  lastChecked: "2026-03-22T08:14:00Z",      // ISO timestamp
  lastSuccess: { localTime: "2026-03-19T01:00:00Z", status: "Success" },
  lastSuccessTime: "2026-03-19T01:00:00Z",  // ← used by drift calc and Backed Up <24h gauge
  lastFailure: { localTime: "2026-03-22T01:00:00Z", status: "Failed" },
  trend: {
    consecutiveFails: 3,        // from top of sorted sessions
    recentFailRatePct: 50,      // % failed in last 7 days
    failureRatePct: 50,         // overall failure rate
    hoursSinceSuccess: 72       // hours since last success
  },
  snapshots: [                  // last 14, newest first
    { localTime, status, isSuccess, size, duration, type }
  ]
}
23 //Themes
KrawTech Default
key: ice · default
Incident Red
key: red
Matrix Green
key: matrix
Deep Space Purple
key: purple
Amber Industrial
key: amber
Electric Cobalt
key: cobalt
Graphite Mono
key: mono
Tesla Black
key: tesla-black (CSS-only, no swatch)

Selection is stored in localStorage under spTheme and restored on load. tesla-black has CSS theme variables defined but no swatch in the header — it can be set manually: document.body.setAttribute('data-theme','tesla-black'). To reset to default: localStorage.removeItem('spTheme') then reload.
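The restore-on-load fallback described above is a two-line check. resolveTheme is an assumed name; the 'ice' default and the swatch-key matching are documented:

```javascript
// Sketch of the theme restore logic: a saved key that matches no swatch
// in the DOM falls back to 'ice'. resolveTheme is an assumed name.
function resolveTheme(savedKey, availableKeys) {
  return availableKeys.includes(savedKey) ? savedKey : 'ice';
}
```

In the app, availableKeys would come from the swatch elements in the header, which is why tesla-black (no swatch) never survives this check and must be set manually.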

24 //Keyboard Shortcuts
Ctrl + Enter
Copy ticket note to clipboard
Ctrl + /
Focus and select the client search field
Ctrl+Shift+Z
Toggle Zen mode (hides header + gauges)
25 //Troubleshooting
Gauges show "Run Scan All" after Refresh
Expected behavior. Backup Health and Issues need session history data. A plain Refresh only re-fetches the accounts list. Run ◈ Scan All once per session to fully populate all gauges.
Connection fails — "Authentication failed (502)"
The local proxy is not running. Start it with node main.js. Verify localhost:3000 is selected in the Proxy dropdown in the connection bar. Check that nothing blocks port 3000.
Client list empty after successful scan
The accounts table parser didn't match column names. Open DevTools → Network, find the proxy request for admin-accounts-list, and check the actual column headers in the HTML response against the fallback chain in parseAccounts().
All clients show ONBOARDING after scan
The sessions table parse failed silently — possibly wrong column names or zero matched rows. Check parseSessions() against the actual sessions HTML. If no sessions match for any client, every client falls through to ONBOARDING.
Loading modal stuck open
A scan error triggers a 10-second safety timeout that force-closes the modal via setTimeout(...classList.add('hidden'), 10000). If it persists beyond that, check DevTools console for uncaught exceptions. Manual override: document.getElementById('loadingModal').classList.add('hidden').
Demo mode shows "TypeError: Failed to fetch" on client click
This is a known (since fixed) bug. Confirm the DEMO_MODE check exists at the top of runCheck() — it must short-circuit to the cached data path before reaching api.check(). If the check is missing, restore the if (DEMO_MODE) { ... } else { data = await api.check(...) } branch.
Tech name not persisting between sessions
Stored in localStorage under spTechName. Private/incognito mode clears localStorage on close. Enter and Save the name at the start of each session in that environment.
Request times out
Each portalFetch() has a 30-second AbortController timeout. Slow portal responses or high network latency to backup.securewebportal.net can trigger this. Increase the setTimeout value on the timer inside portalFetch() if persistent.

KB · ShadowProtect SPX Console · stack-shadow-protect-spx.html · v1.0