The Huntress Operations Console replaces the need for MSP engineers to monitor the Huntress web portal directly. It aggregates agent health, active incidents, foothold detections, ransomware events, and process anomalies into a single always-on interface built for speed. Engineers filter by detection type, severity, or client. Clicking any row opens a full investigation panel with MITRE ATT&CK mappings, host context, process trees, and context-aware one-click actions.
The console is the Huntress-specific layer of the MSP · CMD stack. It is designed as a companion to the RMM and PSA dashboards: alerts are identified here, while tickets and scripts are dispatched through the PSA/RMM layer.
| Filename | stack-huntress.html |
| KB Filename | kb-dashboard-stack-huntress.html |
| Suite | MSP · CMD Design Suite — Stack Intelligence Layer |
| Dashboard Design System | Exo 2 / JetBrains Mono / Rajdhani (suite ground truth) |
| Mode at Delivery | Demo — H.DEMO flag not present; mock branch inside hFetch() |
| Live Activation | Uncomment return fetch(...) block inside hFetch() |
| Refresh Interval | 30 seconds — hStartRefreshTimer() |
| Vendor | Huntress Labs |
| API Version | Huntress REST API v1 — api.huntress.io/apidocs |
| Auth Method | HTTP Basic Auth — base64(apiKey:apiPassword) — proxy-injected |
All data is generated client-side by four mock generators (mockSummary(), mockAlerts(), mockDevices(), mockIncidents())
until the backend proxy is deployed. The UI is fully interactive with no API keys or server required.
To go live, uncomment the three-line fetch() block inside hFetch() and
comment out the setTimeout mock block below it.

| Data Source | Status | Proxy Route | Upstream Huntress API |
|---|---|---|---|
| Summary / KPI counts | Demo | /api/huntress/summary | Proxy aggregates GET /v1/agents + GET /v1/incidents |
| Alert / Incident Feed | Demo | /api/huntress/alerts | GET /v1/incidents |
| Agent / Device List | Demo | /api/huntress/devices | GET /v1/agents |
| Typed Incident Filter | Demo | /api/huntress/incidents | GET /v1/incidents?incident_type=ransomware_canary,footholds |
| Host Isolation | Proxy Needed | — | PUT /v1/agents/{id}/isolate — activates when proxy is live |
| Incident Resolve | Proxy Needed | — | PATCH /v1/incidents/{id} — activates when proxy is live |
| SOC Report Download | No API | — | No Huntress REST endpoint — portal/email only |
| Remote Script Execution | No API | — | Not in Huntress API — must use RMM platform |
| PSA Ticket Creation | No API | — | No Huntress outbound PSA API — must call PSA directly |
All runtime state lives in the global H object at the top of the script block.
The mock/live branch is controlled by the commented production block inside hFetch()
rather than a single DEMO flag.
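The mock/live branch described above can be sketched as follows. This is a reconstruction of the pattern, not the verbatim shipped code; the trivial stand-in generators are included here only so the snippet runs on its own — the real mock*() functions return much richer data.

```javascript
// Stand-in generators (assumption: the real ones return full data shapes).
function mockSummary()   { return { critAlerts: 2, highAlerts: 5 }; }
function mockAlerts()    { return []; }
function mockDevices()   { return []; }
function mockIncidents() { return []; }

function hFetch(endpoint) {
  // LIVE — uncomment this block and comment out the mock block below:
  // return fetch(endpoint, { credentials: 'include',
  //   headers: { 'Accept': 'application/json' } })
  //   .then(function (r) { if (!r.ok) throw new Error('HTTP ' + r.status); return r.json(); });

  // DEMO — resolve the matching mock generator after a short simulated latency:
  var mocks = {
    '/api/huntress/summary':   mockSummary,
    '/api/huntress/alerts':    mockAlerts,
    '/api/huntress/devices':   mockDevices,
    '/api/huntress/incidents': mockIncidents
  };
  return new Promise(function (resolve) {
    setTimeout(function () { resolve(mocks[endpoint]()); }, 200);
  });
}
```

Because both branches return a promise of parsed JSON, the callers in hLoadAll() do not change when the console goes live.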
| Function | Purpose |
|---|---|
| hLoadAll() | Fires all 4 hFetch() calls via Promise.allSettled(), stores results in H.data, calls hRenderAll(). Entry point for every data cycle. |
| hFetch(endpoint) | Single fetch wrapper. Production block (commented out) calls fetch() with credentials. Mock block calls the appropriate mock*() generator via setTimeout. |
| hRenderAll() | Master render dispatcher. Calls hRenderRail, hRenderKpis, hRenderApiHealth, hRenderPlatCards, hRenderStream, hRenderTable, hRenderTicker in sequence. |
| hStartRefreshTimer() | Resets the 30s countdown bar animation and sets H.countTimer to decrement H.countdown every second. At zero, re-fires hLoadAll() and restarts. |
| hRefresh() | Manual refresh. Clears timers, calls hLoadAll() and hStartRefreshTimer(). Bound to Refresh button and Ctrl+R. |
| hSelectRow(id) | Finds row by ID across alerts + incidents, sets H.selected, re-renders table highlight, renders investigation panel. |
| hAct(type) | Looks up ACTION_STEPS[type] and calls hRunModal(). |
| an(el, to) | Animated counter tween. Cubic ease-out, 500ms. Used by hRenderRail() for every rail counter. |
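The an() tween in the table above can be sketched like this. The cubic ease-out curve and 500ms duration come from this document; the requestAnimationFrame loop is an assumption about how the animation is driven.

```javascript
// Cubic ease-out: fast start, gentle landing on the target value.
function easeOutCubic(p) { return 1 - Math.pow(1 - p, 3); }

// Sketch of an(): tween el.textContent from its current value to `to`.
function an(el, to) {
  var from = parseFloat(el.textContent) || 0;
  var t0 = Date.now();
  (function tick() {
    var p = Math.min(1, (Date.now() - t0) / 500);   // 500ms duration
    el.textContent = String(Math.round(from + (to - from) * easeOutCubic(p)));
    if (p < 1) requestAnimationFrame(tick);
  })();
}
```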
The fixed top rail contains the brand badge, six animated stat counters, a live clock, and
a manual Refresh button. All counters animate from their previous value to the new value using
the an() cubic ease-out tween on every refresh cycle. A countdown progress bar below
the rail fills from 100% to 0% over 30 seconds.
| Counter | Element ID | Source Field | Color Class |
|---|---|---|---|
| Critical | rb-crit | summary.critAlerts | rv-red |
| High | rb-high | summary.highAlerts | rv-orange |
| Medium | rb-med | summary.medAlerts | rv-blue |
| Open Inc. | rb-open | summary.openIncidents | rv-accent |
| Agents | rb-agents | summary.totalAgents — from X-Total-Count header on GET /v1/agents | rv-green |
| At Risk | rb-risk | summary.atRiskHosts | rv-orange |
#rfbar-fill uses a CSS width transition from 100% to 0% over 30s linear. hStartRefreshTimer() resets it by toggling transition:none briefly to snap back to 100%, then re-applies the 30s transition.

A 22px marquee strip below the rail. Shows the eight most recent alert titles, hostnames, and
timestamps scrolling left at 60s/cycle. The inner HTML is duplicated so the loop is seamless.
Scroll pauses on hover. Rendered by hRenderTicker(alerts).
| Source | Top 8 items from H.data.alerts |
| Element | #ticker → .tscroll |
| Label | .tlbl — "LIVE THREATS" text in red |
| Scroll animation | tscrl — 60s linear, pause on hover via animation-play-state |
| Critical color | .tc — var(--red) |
| High/Med color | .tw — var(--orange) |
| Low/OK color | .tok — var(--green) |
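The seamless-loop trick can be sketched as a pure HTML builder: emit the item markup twice so the 60s marquee wraps without a visible gap. Severity classes and the alert field names (sev, time, title, host) follow this document; the exact markup is an assumption.

```javascript
// Build the ticker HTML from the top 8 alerts, then duplicate it so the
// CSS scroll animation loops seamlessly.
function buildTickerHtml(alerts) {
  var html = alerts.slice(0, 8).map(function (a) {
    var cls = a.sev === 'crit' ? 'tc'
            : (a.sev === 'high' || a.sev === 'med') ? 'tw' : 'tok';
    return '<span class="' + cls + '">' + a.time + ' ' + a.title + ' @ ' + a.host + '</span>';
  }).join(' ');
  return html + html;   // duplicated content = no gap when the loop restarts
}
// hRenderTicker(alerts) would assign the result to #ticker .tscroll innerHTML.
```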
Six equal-width KPI cards in a horizontal grid below the ticker. Each has a count, a label,
a contextual trend string, and a 2px top border in the card's accent color. Threshold logic
determines the trend text. Rendered by hRenderKpis(s).
| KPI Label | Source Field | Threshold | Trend Text | CSS Class |
|---|---|---|---|---|
| Critical Alerts | critAlerts | > 0 | ▲ IMMEDIATE / — CLEAR | kpi-r |
| High Alerts | highAlerts | > 4 | ▲ HIGH / — OK | kpi-o |
| Footholds | footholds | > 0 | ▲ ACTIVE / — NONE | kpi-r |
| Ransomware Det. | ransomwareDetections | > 0 | ▲ CRITICAL / — CLEAR | kpi-r |
| Open Incidents | openIncidents | > 5 | ▲ HIGH / — OK | kpi-o |
| At Risk Hosts | atRiskHosts | > 10 / > 0 | ▲ HIGH / ▲ ACT / — OK | kpi-o |
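The threshold logic in the table above can be factored as a pure helper; a sketch, since the real hRenderKpis() also builds the DOM cards:

```javascript
// Map a summary object to the trend strings from the KPI table.
function kpiTrends(s) {
  return {
    critical:   s.critAlerts > 0           ? '▲ IMMEDIATE' : '— CLEAR',
    high:       s.highAlerts > 4           ? '▲ HIGH'      : '— OK',
    footholds:  s.footholds > 0            ? '▲ ACTIVE'    : '— NONE',
    ransomware: s.ransomwareDetections > 0 ? '▲ CRITICAL'  : '— CLEAR',
    openInc:    s.openIncidents > 5        ? '▲ HIGH'      : '— OK',
    // At Risk has two tiers: > 10 is HIGH, > 0 is ACT, else OK.
    atRisk:     s.atRiskHosts > 10 ? '▲ HIGH'
              : s.atRiskHosts > 0  ? '▲ ACT' : '— OK'
  };
}
```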
The 290px left column (#lcol) contains three stacked sub-panels inside a flex column:
API health indicators, queue filter buttons, and the incident stream scroll list.
#api-health (Demo)

Five service health rows with colored status dots and latency values. Rendered by
hRenderApiHealth(s). In live mode each row should reflect a real liveness probe
to the associated endpoint.
| Row | Live Check | Validity |
|---|---|---|
| Huntress Agent API | HEAD /v1/agents | ✓ Valid endpoint |
| Portal API | GET /v1/organizations (liveness proxy) | ⚠ No SOC report API — see Limitations |
| Host Mgmt API | GET /v1/agents/{id} | ✓ Valid endpoint |
| Autorun Analysis | GET /v1/autorun_entries | ✓ Valid endpoint |
| Process Insights | GET /v1/process_groups | ✓ Valid endpoint |
Note: the Portal API row has no true health endpoint; its liveness probe is GET /v1/organizations.
SOC reports are portal and email only.

Two rows of filter buttons. Applied client-side against H.data.alerts and
H.data.incidents — no re-fetch on filter change.
| Row | Values | State Key |
|---|---|---|
| Detection Type | All / Incidents / Footholds / Ransomware / Processes | H.filter |
| Severity | CRIT / HIGH / MED / LOW / ALL | H.sev |
#inc-stream

Scrollable list of up to 20 combined alerts + incidents, newest at top. Each row shows
timestamp, title, host/client, and severity badge. Clicking any row calls
hSelectRow(id) which loads the right investigation panel. Rendered by
hRenderStream(alerts). Live count shown in #stream-ct.
The flex-1 center column (#ccol) contains: the platform status card row at top,
the sortable alert grid table, and the six-button action bar at the bottom.
#prow

Eight flex cards across the top of the center column. Each card has a 2px top border that
pulses red (animation: cp) when status is critical. Rendered by
hRenderPlatCards(s, devices, incidents).
| Card | Source Field | CRIT Threshold | WARN Threshold |
|---|---|---|---|
| Incidents | openIncidents | > 3 | > 0 |
| Footholds | footholds | > 0 | — |
| Ransomware | ransomwareDetections | > 0 | — |
| Agents | totalAgents | — | — |
| At Risk | atRiskHosts | > 5 | > 0 |
| Isolated | Count of devices where isolated === true | — | — |
| Auto-Resolved | autoResolvedToday | — | — |
| Processes | blockedProcesses | — | > 10 |
#alert-table

Seven-column grid. Alerts + incidents merged, sorted by severity then age ascending.
Left-edge 2px stripe colored by severity. Clicking a row highlights it and loads the
investigation panel. Rendered by hRenderTable(); rows sourced from
hGetRows() which applies H.filter and H.sev.
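The merge/filter/sort pipeline can be sketched as below. The real hGetRows() presumably reads the global H; here it takes H as a parameter so it can run headless, and the type and ageMin field names are assumptions based on this document's tables.

```javascript
// Lower rank = more severe; used for primary sort.
var SEV_RANK = { crit: 0, high: 1, med: 2, low: 3 };

// Merge alerts + incidents, apply the detection-type and severity filters,
// then sort by severity rank, breaking ties by age ascending (youngest first).
function hGetRows(H) {
  var rows = H.data.alerts.concat(H.data.incidents);
  if (H.filter !== 'all') rows = rows.filter(function (r) { return r.type === H.filter; });
  if (H.sev !== 'all')    rows = rows.filter(function (r) { return r.sev === H.sev; });
  return rows.sort(function (a, b) {
    return (SEV_RANK[a.sev] - SEV_RANK[b.sev]) || (a.ageMin - b.ageMin);
  });
}
```

Filtering stays client-side, which is why changing a filter button never triggers a re-fetch.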
| Column | Grid Width | Source Field | Notes |
|---|---|---|---|
| ID | 70px | r.id | HTR-XXXX (alerts) / INC-XXXX (incidents) |
| SEV | 48px | r.sev | Badge: crit / high / med / low |
| TITLE / HOST | 1fr | r.title + r.host | Icon prefix by type: 💀 🫖 🚨 |
| SOURCE | 88px | r.source | "Huntress Agent" / "Huntress SOC" |
| STATUS | 80px | r.status | OPEN / INVEST. / RESOLVED / CLOSED |
| AGE | 58px | r.age | Formatted: Xm or XhYYm |
| ASSIGNED | 84px | r.assigned | From H.engineers pool |
The 310px right column (#rcol) is populated on row click. Provides full forensic
context for the selected detection. Rendered by hRenderInvestigation(r).
| Section | Fields | Condition |
|---|---|---|
| Header | Severity badge, title, ID, source, timestamp, type/tactic/MITRE badges | Always shown |
| Detection Detail | r.detail — full narrative | If r.detail exists |
| Host & Process | Hostname, OS, PID, parent PID, command line, SHA256 | If r.host exists |
| Incident Scope | Affected host count, client name | If r.hosts exists (incidents only) |
| MITRE ATT&CK | Technique ID, technique name, tactic | If r.mitreId exists |
| Assignment | Engineer, client, age, status | Always shown |
| Event Timeline | 4–5 timestamped events from trigger to assignment | Always shown |
| Recommended Actions | Context-aware buttons by severity/type | Always shown |
| Condition | Action Added | Style |
|---|---|---|
| sev === 'crit' | Isolate Host from Network | Red |
| type === 'foothold' or 'ransomware' | Quarantine & Contain | Red |
| Always | Create PSA Ticket | Green |
| Always | Run Remediation Script | Green |
| sev === 'crit' or 'high' | Escalate to Tier 2 | Blue |
| Always | Mark Resolved | Green |
The action modal (#modal-ov) overlays the full screen. Steps animate through
waiting → running → done states with a progress bar. Closed by the Complete button
or Escape. The slide pane (#dpane) slides in 460px from the right with
a dim overlay; also closed by Escape or clicking the dim overlay.
- Host isolation: PUT /v1/agents/{id}/isolate is a real network-impact action. Test only
  on non-production devices. Consider adding a confirmation dialog before proxy activation.
- API docs: api.huntress.io/apidocs. Base URL: https://api.huntress.io.
  Auth: Authorization: Basic base64(apiKey:apiPassword).
- Alerts vs. incidents: /api/huntress/alerts and /api/huntress/incidents both map to
  GET /v1/incidents. The proxy differentiates them by applying status or incident_type
  query params; the incident_type query param is documented and supported.
- Agent totals: read from the X-Total-Count response header, not the body array length.
  GET /v1/agents also provides device isolation status and risk context for the platform cards.
- Blocked processes: summary.blockedProcesses feeds the platform card.
- Incident resolve: PATCH /v1/incidents/{id} with body {"status": "resolved"}. This is the
  only confirmed Huntress API endpoint for workflow state transitions on incidents.
  Activates when the proxy is live.
- SOC reports: no GET /v1/reports or equivalent endpoint exists. The "Portal API" health
  row uses GET /v1/organizations as its liveness proxy.
- Remote script execution (NinjaRMM): POST /v2/device/{id}/script/runScript
- PSA ticket creation (ConnectWise Manage): POST /v4_6_release/apis/3.0/service/tickets
The Huntress REST API uses HTTP Basic Authentication. Credentials are an apiKey
and apiPassword (not a username) base64-encoded in the Authorization header.
Credentials must never appear in the frontend HTML file.
| Auth type | HTTP Basic Auth |
| Credentials format | apiKey:apiPassword — base64-encoded |
| Injection point | Backend proxy only — never the browser |
| API base URL | https://api.huntress.io |
| Rate limits | Not publicly documented — monitor for HTTP 429 |
| Pagination | ?page=1&limit=50 — total count in X-Total-Count header |
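A minimal proxy-side sketch of the auth header and a paginated upstream call. The header really is apiKey:apiPassword (not username:password); the env var names follow the deployment checklist below, and Node 18+ (global fetch, Buffer) is assumed.

```javascript
// Build the Basic auth header from the two Huntress credentials.
function huntressAuthHeader(apiKey, apiPassword) {
  return 'Basic ' + Buffer.from(apiKey + ':' + apiPassword).toString('base64');
}

// One authenticated, paginated page of agents from the Huntress API.
async function listAgents(page) {
  var r = await fetch('https://api.huntress.io/v1/agents?page=' + page + '&limit=50', {
    headers: {
      'Authorization': huntressAuthHeader(process.env.HUNTRESS_API_KEY,
                                          process.env.HUNTRESS_API_PASSWORD),
      'Accept': 'application/json'
    }
  });
  if (!r.ok) throw new Error('Huntress API ' + r.status);
  // Total agent count comes from the X-Total-Count header, not body length.
  return { total: Number(r.headers.get('X-Total-Count')), agents: await r.json() };
}
```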
| Proxy Route | Upstream Call(s) | Proxy Responsibility |
|---|---|---|
| /api/huntress/summary | GET /v1/agents + GET /v1/incidents | Aggregate: count incidents by severity, read X-Total-Count for total agents, compute at-risk host count |
| /api/huntress/alerts | GET /v1/incidents?status=open,investigating&limit=50 | Pass through — normalize field names to match front-end data model |
| /api/huntress/devices | GET /v1/agents?limit=50 | Pass through — map isolated flag from agent status |
| /api/huntress/incidents | GET /v1/incidents?incident_type=ransomware_canary,footholds | Pass through |
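The summary aggregation can be sketched as a pure function; the upstream incident field names (severity, status) are assumptions about the Huntress response shape, while the output field names match what the front end expects.

```javascript
// Aggregate upstream data into the summary object the console consumes.
function buildSummary(incidents, agentTotalHeader, atRiskHosts) {
  var sev = { critical: 0, high: 0, medium: 0, low: 0 };
  incidents.forEach(function (i) { if (i.severity in sev) sev[i.severity]++; });
  return {
    critAlerts: sev.critical,
    highAlerts: sev.high,
    medAlerts: sev.medium,
    openIncidents: incidents.filter(function (i) { return i.status !== 'resolved'; }).length,
    totalAgents: Number(agentTotalHeader),   // from X-Total-Count, not body length
    atRiskHosts: atRiskHosts
  };
}
```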
No report API: Huntress exposes no GET /v1/reports
or equivalent. Reports are delivered via email and the Huntress portal only. The "Portal API"
health row uses GET /v1/organizations as its closest available liveness signal.

No summary API: Huntress exposes no /summary,
/dashboard, or /stats endpoint. All KPI values must be computed by the
proxy by aggregating GET /v1/incidents and GET /v1/agents. The summary
proxy route makes at least two upstream calls per refresh cycle.

Pagination: GET /v1/agents
and GET /v1/incidents are paginated with ?page&limit params. For MSPs
managing 500+ agents the proxy should use X-Total-Count for display counts but may
need multiple pages to build complete device lists. The frontend assumes the proxy returns a flat
normalized array.

Rate limits: expect 429 Too Many Requests in production. The 30-second refresh
interval is intentionally conservative; increase to 60 seconds if 429s appear.

Unused state (H.data.events): the state object
includes an events array that is never populated by any fetch call or mock generator.
If event log data is needed in the future, the correct endpoint is
GET /v1/activity_logs (confirmed valid in Huntress API docs).

Going live requires uncommenting the fetch() block inside hFetch(). Features documented
as "activates when proxy is live" are conditional on this checklist:

- Provision a server-side proxy (Node.js/Express, Cloudflare Worker, or equivalent)
- Store HUNTRESS_API_KEY and HUNTRESS_API_PASSWORD as env vars — never in the HTML file
- Implement GET /api/huntress/alerts → GET /v1/incidents?status=open,investigating&limit=50
- Implement GET /api/huntress/devices → GET /v1/agents?limit=50
- Implement GET /api/huntress/incidents → GET /v1/incidents?incident_type=ransomware_canary,footholds
- Implement aggregation route GET /api/huntress/summary: fire both /v1/agents + /v1/incidents, compute severity breakdown, read X-Total-Count for agent total
- Implement write route PUT /api/huntress/agents/:id/isolate → PUT /v1/agents/{id}/isolate
- Implement write route PATCH /api/huntress/incidents/:id → PATCH /v1/incidents/{id}
- Set CORS headers for the console's serving origin
- Return consistent JSON error shapes for 4xx/5xx so Promise.allSettled branches correctly
- Test all four GET routes against a live Huntress account — verify response shapes match what the mock generators return (same field names the render functions expect)
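For the devices route, the page-through loop mentioned in the limitations can be sketched as below. fetchPage is a hypothetical injected helper (one authenticated upstream call per page), which keeps the loop itself testable.

```javascript
// Accumulate every page of GET /v1/agents into one flat array.
// A short page (fewer than `limit` items) signals the last page.
async function fetchAllAgents(fetchPage) {
  var page = 1, limit = 50, all = [];
  for (;;) {
    var batch = await fetchPage(page, limit);   // one page's agent array
    all = all.concat(batch);
    if (batch.length < limit) break;
    page++;
  }
  return all;
}
```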
- Open stack-huntress.html and locate hFetch(endpoint) in the script block
- Uncomment the three-line production block: return fetch(endpoint, {credentials:'include', headers:{'Accept':'application/json'}})
- Comment out (or remove) the return new Promise(function(resolve){ setTimeout(... mock block below it
- Reload the console — the init toast should read "Live" not "Demo"
- Verify rail counters populate with real values within 5 seconds
- Verify alert table shows real Huntress incident IDs (not HTR-3000, INC-1100)
- Test isolation on a non-production test device — confirm modal completes and isolation is reflected in the Huntress portal
- Test resolve on a test incident — confirm status changes in the Huntress portal
| Feature | Change |
|---|---|
| All counter values | Switch from randomized mock to real Huntress incident/agent counts |
| Alert table | Actual detection titles, real host names, real MITRE technique IDs |
| API health dots | Real upstream latency from liveness probes |
| Ticker | Real alert titles and Huntress-assigned timestamps scrolling live |
| Isolate Host action | Calls PUT /v1/agents/{id}/isolate — real network-level isolation |
| Resolve action | Calls PATCH /v1/incidents/{id} — changes status in Huntress portal |
| Investigation panel | Real host names, OS builds, actual process telemetry from Huntress agent |
| Auto-refresh | Pulls live data every 30 seconds instead of regenerating random mock values |
| Field / Location | Default | Purpose |
|---|---|---|
| Mock/live branch | Mock setTimeout active | Swap by commenting/uncommenting the fetch() block inside hFetch() |
| Refresh interval | 30s | Controlled by hStartRefreshTimer() — hardcoded 30 in countdown logic. Increase if rate limiting appears. |
| H.engineers | 6 names | Assignment pool for mock data. Replace with real names or pull from PSA API. |
| KPI — footholds | > 0 = CRIT | In hRenderKpis(). Adjust if some footholds are an acceptable baseline. |
| KPI — open incidents | > 5 = HIGH | In hRenderKpis(). Tune to client SLA expectations. |
| Plat card — at risk | > 5 CRIT / > 0 WARN | In hRenderPlatCards(). Tune to environment baseline. |
| Ticker alert count | Top 8 | hRenderTicker() — change slice(0,8) |
| Stream max items | 20 | hRenderStream() — change slice(0,20) |
| Counter tween duration | 500ms | In an() — Math.min(1,(now-t0)/500) |
| Toast duration | 4000ms | In hToast() — setTimeout(..., 4000) |
| Layout grid | 290px 1fr 310px | CSS #body grid-template-columns. Adjust column widths here. |
| Ticker scroll speed | 60s | CSS .tscroll animation duration |
Six action buttons appear in the bottom bar of the center column. All call hAct(type)
which looks up a step definition in ACTION_STEPS and runs hRunModal().
In demo mode all steps are simulated with setTimeout. In live mode each step should
fire a real API call and resolve on completion.
Locates the agent via GET /v1/agents, then calls
PUT /v1/agents/{id}/isolate. Real network-impact action. Test only on non-production
devices before activating with proxy. No confirmation dialog currently — consider adding one.
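A hedged sketch of what that confirmation guard could look like. hIsolateHost is a hypothetical name, not a shipped function; confirmFn and fetchFn are injected (window.confirm and fetch in the browser) so the guard can be exercised headless.

```javascript
// Guarded isolate: require an explicit confirmation, then PUT through the
// proxy route from the deployment checklist.
function hIsolateHost(agentId, hostname, confirmFn, fetchFn) {
  if (!confirmFn('Isolate ' + hostname + ' from the network? This is a real Huntress action.')) {
    return Promise.resolve({ cancelled: true });
  }
  return fetchFn('/api/huntress/agents/' + agentId + '/isolate', {
    method: 'PUT', credentials: 'include', headers: { 'Accept': 'application/json' }
  }).then(function (r) {
    if (!r.ok) throw new Error('Isolation failed: HTTP ' + r.status);
    return r.json();
  });
}
```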
Bulk isolation of multiple affected hosts. Same underlying call:
PUT /v1/agents/{id}/isolate per host. Activates when proxy is live.
Pulls context from GET /v1/incidents/{id}, then calls the PSA API
directly. ConnectWise Manage: POST /v4_6_release/apis/3.0/service/tickets.
Huntress has no outbound ticket API.
No Huntress escalation endpoint exists. In production: evaluate priority tier, page on-call analyst via PSA or notification service (PagerDuty/OpsGenie), attach investigation context as a ticket note.
Huntress has no script execution API. Route to NinjaRMM
POST /v2/device/{id}/script/runScript or CW Automate equivalent. The original
action step label "Deploy via Huntress agent" was corrected in the dashboard to
"Deploy via RMM (NinjaRMM / CW Automate)".
Calls PATCH /v1/incidents/{id} with body
{"status":"resolved"}. Only confirmed Huntress REST write endpoint for workflow
state. Change is reflected in the Huntress portal immediately. Requires write-scoped API key.
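The live resolve step can be sketched as below; the proxy route name comes from the deployment checklist, and fetchFn is injectable for testing (the browser would pass nothing and use fetch).

```javascript
// Mark an incident resolved via the proxy's PATCH route.
function hResolveIncident(incidentId, fetchFn) {
  return (fetchFn || fetch)('/api/huntress/incidents/' + incidentId, {
    method: 'PATCH',
    credentials: 'include',
    headers: { 'Content-Type': 'application/json', 'Accept': 'application/json' },
    body: JSON.stringify({ status: 'resolved' })   // body shape per this document
  }).then(function (r) {
    if (!r.ok) throw new Error('Resolve failed: HTTP ' + r.status);
    return r.json();
  });
}
```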
Cause: hLoadAll() resolved but hRenderRail() received an empty
summary object, or the promise chain threw before reaching hRenderAll().
Fix (demo): Check browser console for JS exceptions inside a mock generator function. In demo mode this should never occur.
Fix (live): Confirm the proxy is running and returning valid JSON from
/api/huntress/summary with all expected fields: critAlerts, highAlerts,
medAlerts, openIncidents, totalAgents, atRiskHosts.
Cause: One or more hFetch() calls returned a rejected promise
(Promise.allSettled catches it and triggers the warning toast).
Fix (demo): Should never appear — a mock generator threw a runtime error. Check
browser console for exceptions inside mockSummary(), mockAlerts(), etc.
Fix (live): The proxy returned a non-2xx status for at least one route. Check proxy logs, verify Huntress credentials, confirm the API key has read access to incidents and agents.
Cause: H.data.alerts and H.data.incidents are both empty
arrays, or hGetRows() is filtering everything out.
Fix: Click "ALL" severity and "All" type to confirm filters are reset. Log
H.data.alerts and H.data.incidents in the console — if both are
empty, the fetch cycle did not complete successfully. In live mode verify the proxy returns
non-empty arrays from both routes.
Cause: hSelectRow(id) could not find the clicked ID in the combined
alerts + incidents array.
Fix: Check browser console for errors after clicking a row. Confirm the row ID format
matches what is in H.data.alerts (demo: HTR-3000 style; live: Huntress UUIDs).
Ensure hRenderTable() and hRenderStream() are using the same ID field as
hSelectRow().
Cause: hStartRefreshTimer() was not called, or both
H.refreshTimer and H.countTimer were cleared without being restarted.
Fix: Confirm hLoadAll() and hStartRefreshTimer() are both
called in the DOMContentLoaded listener. Verify H.countTimer is not null
via the console. Manually trigger hRefresh() with the Refresh button to confirm the
data cycle works independently of the auto-timer.
Cause: The next() inner function inside hRunModal() threw
before firing, or ACTION_STEPS[type] returned undefined.
Fix: Check browser console for errors after triggering an action. Confirm the
type string passed to hAct() is one of: isolate,
contain, ticket, escalate, script,
report, resolve.
Cause: API credentials are invalid, expired, or not being injected by the proxy.
Fix: Verify HUNTRESS_API_KEY and HUNTRESS_API_PASSWORD are
set in the proxy environment. Confirm the proxy constructs
Authorization: Basic base64(key:password) correctly — it is key and password,
not a username. Test the credential directly:
curl -u "yourKey:yourPassword" https://api.huntress.io/v1/organizations
Cause: Write operations require an API key with write permissions. A read-only key
returns 403 Forbidden.
Fix: In the Huntress portal under Account Settings, verify the API key has write scope. Generate a new key with write access if needed. Do not use write-access keys in shared or unauthenticated environments.