The Technician Burnout Console is a real-time operations dashboard that surfaces technician health risk before it becomes a retention problem. It calculates a composite burnout score for every technician by combining billable utilization, overtime hours, after-hours sessions, and ticket velocity data pulled from ConnectWise PSA and NinjaRMM.
The dashboard gives operations managers a single screen to identify who is at critical risk, who has available capacity to absorb redistribution, what skills gaps are creating concentration risk, and what specific actions will reduce burnout before a technician quits. It is designed for weekly manager review and monthly leadership reporting.
Before relying on any burnout score, utilization figure, or overtime count in this console, the underlying time entry discipline across the team must be verified. Two specific conditions will silently corrupt every metric the console produces.
| SCENARIO | WHAT THE CONSOLE SHOWS | WHAT IS ACTUALLY TRUE | RISK |
|---|---|---|---|
| Tech batches time at end of day, logs everything under one ticket | Low utilization, low overtime, Healthy score | Unknown – actual load invisible | Silent burnout – invisible until resignation |
| Admin time included in utilization calculation | 88% utilization, High or Critical score | 73% billable utilization – within target band | False alarm – unnecessary management intervention |
| Time logged accurately against tickets, admin excluded | Accurate burnout score | Matches actual workload | Console is trustworthy |
Each technician's burnout score is a composite of four signals. Higher scores indicate higher burnout risk. The score is not a direct sum – it is weighted to reflect that after-hours work and overtime are stronger leading indicators than raw utilization alone.
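As an illustration of how such a weighted composite can work, the sketch below normalizes each of the four signals to a 0–1 stress value and combines them. The specific weights and normalization divisors here are assumptions for demonstration, not the values used by the console's actual calcBurnout():

```javascript
// Hypothetical weighted burnout composite -- weights and divisors are
// illustrative only; the real calcBurnout() lives in technician-burnout.html.
function calcBurnout(t) {
  // Normalize each signal to a 0-1 stress value before weighting.
  const utilStress  = Math.min(Math.max((t.util - 65) / 35, 0), 1); // 65% = 0, 100% = 1
  const otStress    = Math.min(t.ot / 10, 1);                       // 10h+/wk saturates
  const ahStress    = Math.min(t.ah / 8, 1);                        // 8+ sessions saturates
  const queueStress = Math.min(t.queue / (t.tickets || 1) / 3, 1);  // queue vs weekly closes

  // Overtime and after-hours weighted heavier than raw utilization.
  const score = 100 * (0.20 * utilStress +
                       0.30 * otStress +
                       0.30 * ahStress +
                       0.20 * queueStress);
  return Math.round(score);
}
```

A fully loaded technician (100% util, 12h OT, 10 after-hours sessions, deep queue) scores 100 under these weights, while a technician at the bottom of the target band with no OT scores in the single digits.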
| SIGNAL | SOURCE | WEIGHT DIRECTION | THRESHOLD NOTES |
|---|---|---|---|
| Billable Utilization % | CW Time Entries – billable only, admin excluded | ↑ util → ↑ score | Target band 65–80%. Admin time entries must be filtered out before calculation – see C2. Above 88% is red. Above 94% is critical. |
| Overtime Hours/Wk | CW Time Entries (filter=overtime) – admin excluded | ↑ OT → ↑ score | 0–5h normal. 5–10h elevated. 10h+ critical indicator. Admin overtime (e.g. late internal meetings) should also be excluded. |
| After-Hours Sessions | CW Tickets outside 08:00–18:00 – client tickets only | ↑ AH → ↑ score | 0–3 manageable. 3–8 elevated. 8+ strong burnout signal. Internal ticket types excluded. |
| Ticket Velocity Stress | CW Service Tickets / queue depth – client tickets only | ↑ queue → ↑ score | Open ticket queue vs tickets closed/week ratio. Internal/admin ticket types excluded from queue count. |
Each technician card shows a trend arrow next to their name based on the direction of their 8-week utilization history. ↑ Up means utilization is rising – the most urgent signal even if the current score is only High. ↓ Down means utilization is falling – watch for disengagement in Low-risk technicians trending down. → Stable means the past 3 weeks are within 2% of each other.
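The stable rule above (past 3 weeks within 2%) can be sketched directly against the 8-week history array. This is an illustrative classifier, not necessarily the console's own trend logic:

```javascript
// Illustrative trend classifier over the 8-week history array
// ([{week:'W1', util:72}, ...]); the console's actual logic may differ.
function calcTrend(history) {
  const last3 = history.slice(-3).map(h => h.util);   // most recent 3 weeks
  const spread = Math.max(...last3) - Math.min(...last3);
  if (spread <= 2) return 'stable';                   // within 2% of each other
  return last3[2] > last3[0] ? 'up' : 'down';         // direction of the move
}
```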
Seven KPI tiles run across the top of the console below the API banner. They calculate across the full technician dataset and update when the page refreshes. Color rules match the burnout scale – red above critical thresholds, orange elevated, green healthy.
The API banner runs between the hero section and the KPI strip. It shows six data source nodes, each with a live/mock status dot, the data value currently loaded, and the exact API endpoint being called. This lets any engineer verify at a glance which data is live vs simulated.
| NODE | ENDPOINT | STATUS IN DEMO | WHAT IT PROVIDES |
|---|---|---|---|
| CW Proxy | /api/burnout/cw-psa | CONNECTED | ConnectWise PSA proxy – routes all CW REST calls |
| Time Entries | GET /v4_6_release/apis/3.0/time/entries | MOCK | 5,841 time entries over 90 days for utilization and OT calculation |
| Tickets | GET /service/tickets | MOCK | 4,312 service tickets – queue depth, velocity, after-hours flags |
| Members | GET /system/members | LIVE | 14 technician member records including roles and team assignments |
| NinjaRMM | GET /v2/devices · org counts | CONNECTED | Device counts per org – used for workload context per technician |
| Overtime API | /time/entries?filter=overtime | ACTIVE | Filtered time entries flagged as overtime for OT calculation |
The technician grid is the primary view โ a card for every technician in the dataset. Cards are ordered by burnout score descending by default. Each card shows the risk badge, burnout score with a gradient fill bar, six stat tiles, and the first three certifications. Clicking any card opens the Deep Dive panel for that technician.
Clicking any technician card expands a full-width Deep Dive panel below the grid. The panel shows four sub-sections for the selected technician. Clicking the same card again or the × button collapses it.
The recent tickets list flags each entry as ok (normal hours), ot (overtime logged), or ah (after-hours). After-hours entries show exact times (e.g. "Mon 06:55") – these are the human signal behind the score.

The heatmap section shows an 8×N grid – 8 weeks of data (columns) for every technician (rows). Each cell is color-coded by utilization percentage. Cells with a red border indicate weeks where that technician logged overtime.
| COLOR | RANGE | MEANING |
|---|---|---|
| Red | >90% | Over-utilized – immediate attention. Sustained red weeks are the strongest burnout predictor. |
| Orange | 75–90% | High utilization – above target band. Acceptable short-term, unsustainable long-term. |
| Yellow | 60–75% | Target zone – healthy productive utilization. |
| Green | <60% | Healthy / available capacity. These technicians can absorb redistribution. |
| Red border | OT flag | Overtime was logged in this week – regardless of utilization color. |
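The color bands in the table above map cleanly to a small helper. This is a sketch of the rule as documented; the console's own cellColor() inside renderHeatmap() is where the real values live:

```javascript
// Heatmap cell color rule, transcribed from the color table above.
function cellColor(util) {
  if (util > 90)  return 'red';    // over-utilized -- immediate attention
  if (util >= 75) return 'orange'; // high -- above target band
  if (util >= 60) return 'yellow'; // target zone
  return 'green';                  // healthy / available capacity
}
```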
Reading the heatmap across a row shows a single technician's trajectory over 8 weeks. A row that transitions from yellow to orange to red over 8 weeks (like Marcus J. in the demo) shows exactly when over-utilization began and how quickly it escalated. Reading down a column shows the team's overall load in a specific week โ useful for identifying whether overload is systemic (whole column red) or isolated to specific individuals.
Five aggregate charts appear in the Analytics section, giving a team-wide view that complements the individual technician data in the grid and heatmap.
The Skills Matrix section is a table of every technology stack in the environment vs the certifications and technicians covering it. It flags skill gaps, SPOF (single point of failure) situations, and demand vs coverage mismatches.
Three filter controls sit between the KPI strip and the technician grid. All filters apply simultaneously and update the grid in real time without a page refresh.
The left sidebar navigation scrolls to each console section: Capacity & Risk Status, Utilization Heatmap, Team Trends, and Skills Matrix. The top navigation bar provides the same section links plus a back link to the MSP Command Center index.
| SCORE | RISK LEVEL | RECOMMENDED ACTION | URGENCY |
|---|---|---|---|
| 80–100 | CRITICAL | No new T3 assignments. Immediate ticket redistribution. 1-on-1 this sprint. Manager escalation. | This week |
| 65–79 + ↑ | HIGH TRENDING UP | Treat as Critical. Trend is the key indicator – if utilization has risen 5+ consecutive weeks, act now. | This week |
| 65–79 + → stable | HIGH STABLE | Automation and runbook interventions. Investigate ticket type mix. Monitor weekly. | This sprint |
| 45–64 | MEDIUM | Address root cause before escalation. Training, tooling, or load balancing appropriate here. | This month |
| 0–44 + →/↑ | HEALTHY | Maintain. Consider for overflow redistribution from Critical/High technicians. | Quarterly review |
| 0–44 + ↓ trending down | WATCH | Low burnout with falling utilization may indicate disengagement. Investigate separately. | This month |
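The score bands above reduce to a simple threshold mapping. The sketch below shows the base four-level mapping that the risk field uses; the trend-based modifiers (High trending up treated as Critical, low score with falling utilization flagged for watch) are applied on top of this at review time. The console's real calcRisk() may be structured differently:

```javascript
// Base score-to-risk mapping from the action table above. Trend modifiers
// (treat High + rising as Critical; watch Low + falling) are layered on top.
function calcRisk(burnout) {
  if (burnout >= 80) return 'critical';
  if (burnout >= 65) return 'high';
  if (burnout >= 45) return 'medium';
  return 'low';
}
```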
| ENDPOINT | METHOD | DATA PROVIDED | PARAMETERS |
|---|---|---|---|
| /v4_6_release/apis/3.0/time/entries | GET | All time entries – base for utilization and overtime calculation | dateEnteredOnOrAfter (90 days back), memberIdentifier |
| /time/entries?filter=overtime | GET | Overtime-flagged entries only | Same params + overtime filter |
| /service/tickets | GET | Service ticket records – queue depth, timestamps for AH detection | status=open, owner.identifier |
| /system/members | GET | Technician member profiles – name, role, team, certifications | None required for full member list |
| /v2/devices | GET (NinjaRMM) | Managed device counts per organization – workload context | Organization filter optional |
After-hours sessions are detected by reading the dateEntered timestamp on each ticket entry and checking whether it falls outside 08:00–18:00 in the technician's local time zone. When wiring live data, ensure the timestamp comparison uses the correct time zone for your team – UTC timestamps from the API must be converted before the window check.
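A minimal sketch of that window check, assuming the API returns ISO-8601 UTC timestamps. The 'America/Chicago' default is a placeholder – substitute your team's IANA time zone:

```javascript
// Returns true when a UTC timestamp falls outside the 08:00-18:00 local
// window. Assumes ISO-8601 input; 'America/Chicago' is a placeholder zone.
function isAfterHours(dateEnteredUtc, timeZone = 'America/Chicago') {
  const hour = Number(new Intl.DateTimeFormat('en-US', {
    timeZone,                 // convert to the technician's local zone
    hour: 'numeric',
    hourCycle: 'h23',         // 0-23 so the comparison below is direct
  }).format(new Date(dateEnteredUtc)));
  return hour < 8 || hour >= 18;
}
```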
Each technician object in the techs array uses this schema. Understanding the schema is required before wiring live API data, as the mapCWMember() function (C2) must return objects that match this shape exactly.
| FIELD | TYPE | SOURCE | DESCRIPTION |
|---|---|---|---|
| id | number | CW member ID | Unique identifier. Used by toggleDive() and DOM element IDs. |
| name | string | CW member | Display name shown on card and deep dive. |
| role | string | CW member | Job title string. Shown on card below name. |
| team | string | CW member | Team name. Must match filter dropdown values exactly: Tier 1, Tier 2, Tier 3, NOC. |
| burnout | number 0–100 | Calculated | Composite burnout score. Calculated from util, ot, ah, and queue fields. |
| util | number % | CW time entries | Current week billable utilization percentage. |
| ot | number | CW time entries | Overtime hours logged this week. |
| ah | number | CW tickets | After-hours sessions in past 7 days. |
| tickets | number | CW tickets | Tickets closed this week. |
| queue | number | CW tickets | Current open ticket queue assigned to this technician. |
| risk | string | Calculated | Risk level: critical, high, medium, low. Drives card border color and filter. |
| trend | string | Calculated | Utilization trend: up, down, stable. Based on last 3 weeks of history. |
| certs | string[] | CW member / manual | Active certification names. Shown on card and deep dive. |
| expiredCerts | string[] | CW member / manual | Expired cert names shown in red with strikethrough. |
| history | object[] | CW time entries | 8-week utilization history: [{week:'W1', util:72}, ...]. Powers the deep dive trend chart. |
| recentTickets | object[] | CW tickets | Last 3–4 recent tickets: {time, title, flag:'ok'\|'ot'\|'ah', meta}. |
| recommendations | object[] | Manual / rules | Action recommendations: {icon, title, detail, impact}. Can be generated by rule logic or curated manually. |
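A mapper producing this shape might look like the sketch below. The CW member field names used here (firstName, lastName, title, defaultDepartmentName) are assumptions – verify them against your actual /system/members payload. The metrics argument stands in for the per-technician figures you derive from time entries and tickets:

```javascript
// Sketch of mapCWMember() returning the schema documented above.
// Member field names are assumptions -- check your /system/members payload.
function mapCWMember(member, metrics) {
  return {
    id: member.id,
    name: `${member.firstName} ${member.lastName}`,
    role: member.title || 'Technician',
    team: member.defaultDepartmentName || 'Tier 1', // must match filter dropdown values
    util: metrics.util,              // current-week billable %
    ot: metrics.ot,                  // overtime hours this week
    ah: metrics.ah,                  // after-hours sessions, past 7 days
    tickets: metrics.tickets,
    queue: metrics.queue,
    certs: metrics.certs || [],
    expiredCerts: metrics.expiredCerts || [],
    history: metrics.history,        // [{week:'W1', util:72}, ...]
    recentTickets: metrics.recentTickets || [],
    recommendations: [],
    // burnout, risk, and trend are computed afterwards by
    // calcBurnout(), calcRisk(), and calcTrend().
    burnout: 0,
    risk: 'low',
    trend: 'stable',
  };
}
```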
The console runs fully in demo mode with no prerequisites. For live data, the following are required before wiring begins.
ConnectWise provides the core data for the burnout score: time entries (utilization, overtime), service tickets (queue depth, velocity, timestamps for after-hours detection), and member records (names, roles, teams).
In Azure Key Vault, add three secrets: cw-api-url (your CW instance URL, e.g. https://na.myconnectwise.net), cw-client-id, and cw-api-key (the public+private key pair encoded as publicKey+privateKey in Base64 for Basic auth).
In fn-proxy/src/functions/proxy.js, the existing proxy already forwards any path. The CW routes below will work without code changes – just ensure the Key Vault secrets are named correctly and the Function App's API_BASE_URL setting points to your CW instance.
Exclude any time entry whose workType/name is classified as admin, internal, or non-client-facing in your ConnectWise instance. The exact charge code or work type name depends on how your PSA is configured – common values are "Admin", "Internal", "Meeting", "Training", "Company". Check your CW work types under Setup Tables → Work Types and identify which ones should be excluded. Add them to the conditions filter on every time entry query. If admin time is not excluded, utilization will be overstated and burnout scores will produce false positives.
If unsure which work types your instance uses, inspect the workType.name field in the API response before setting the filter.
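A sketch of building that exclusion as a CW conditions string is below. The work type names are the example values above – substitute the ones you identified in Setup Tables – and the exact conditions syntax (operators, date bracket format) should be verified against the ConnectWise REST API documentation for your version:

```javascript
// Sketch: build a CW 'conditions' string that excludes admin work types
// and limits entries to the last N days. Syntax is approximate -- verify
// against your CW REST API version before use.
const ADMIN_WORK_TYPES = ['Admin', 'Internal', 'Meeting', 'Training', 'Company'];

function timeEntryConditions(daysBack = 90) {
  const since = new Date(Date.now() - daysBack * 86400e3)
    .toISOString().slice(0, 10);                       // YYYY-MM-DD
  const excludes = ADMIN_WORK_TYPES
    .map(w => `workType/name != "${w}"`)
    .join(' AND ');
  return `dateEntered >= [${since}] AND ${excludes}`;
}
```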
At the top of the script section in technician-burnout.html, find the techs array and the comment // ── DATA ──. Replace the static array with an async loadTechs() function that calls the proxy endpoints above and maps the results using mapCWMember() and calcBurnout().
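A minimal sketch of that loadTechs() replacement is below. It assumes the proxy forwards CW paths under /api/burnout/cw-psa; the trivial mapCWMember() and calcMetrics() placeholders exist only so the sketch runs standalone – substitute your real mapper and metrics calculation, and fill in the conditions query strings (elided as `...` here):

```javascript
// Trivial placeholders so this sketch is self-contained -- replace with
// your real mapper (C2) and per-technician metrics calculation.
const mapCWMember = (member, metrics) => ({ id: member.id, ...metrics });
const calcMetrics = (member, entries, tickets) => ({ util: 0, ot: 0, ah: 0 });
const techs = []; // the existing static array remains the fallback

const PROXY = '/api/burnout/cw-psa';

async function loadTechs() {
  try {
    // Fetch members, time entries, and tickets in parallel via the proxy.
    const [members, entries, tickets] = await Promise.all([
      fetch(`${PROXY}/system/members`).then(r => r.json()),
      fetch(`${PROXY}/time/entries?conditions=...`).then(r => r.json()),
      fetch(`${PROXY}/service/tickets?conditions=...`).then(r => r.json()),
    ]);
    return members.map(m => mapCWMember(m, calcMetrics(m, entries, tickets)));
  } catch (err) {
    console.error('Live fetch failed, keeping mock data', err);
    return techs; // fall back to the static demo array
  }
}
```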
NinjaRMM provides device count data per organization – used for workload context in the API banner. It is secondary to the ConnectWise data and can be left in mock/connected state while CW data is wired first.
GET /v2/devices – returns all managed devices. Group by organization ID to get per-org device counts. Add your NinjaRMM base URL and OAuth token to Key Vault as ninja-api-url and ninja-api-token.

Before wiring live APIs, update the static techs array to reflect your actual team. This ensures the demo runs with realistic data for your environment and the filters (team names, role labels) match your org structure.
1. Replace technician names and roles. Edit the techs array at the top of the script block. Each object needs name, role, team, and initial values for burnout, util, ot, ah, tickets, queue.
2. Confirm team names match the filter dropdown. The team filter expects exact string matches: Tier 1, Tier 2, Tier 3, NOC. If your org uses different names (e.g. "L1 Help Desk"), either update the team values in the data or update the dropdown option values in the HTML.
3. Update certifications. The certs array and expiredCerts array are strings. These also feed the Skills Matrix – the skillAreas array references cert names by string match. If you rename a cert, update it in both places.
4. Update the skillAreas array. The Skills Matrix is driven by the skillAreas array below the techs array. Update each entry's techs list (technician names), coverage (cert names), openTickets, and gap flag to match your environment.
The burnout score thresholds and color rules are hardcoded in four places in the script. When wiring live data, you can tune these to match your organization's norms – an MSP with a higher baseline utilization rate may want to shift the "high" threshold from 80% to 85%.
| THRESHOLD | CURRENT VALUE | WHERE TO CHANGE |
|---|---|---|
| Critical burnout score | 80+ | renderTechGrid() → bColor variable · renderKPIs() → risk counts · calcRisk() |
| High burnout score | 65+ | Same three locations as above |
| Critical utilization (card color) | 88%+ | renderTechGrid() → tc-stat-v color on utilization tile |
| High utilization (card color) | 75%+ | Same location |
| Heatmap: over-utilized | 90%+ | renderHeatmap() → cellColor() function |
| Heatmap: high utilization | 75–90% | Same cellColor() function |
| Target utilization band | 65–80% | renderCharts() → chartUtil dataset label and threshold line |
| After-hours window | 08:00–18:00 | calcAH() function → adjust start/end hour constants |
Run through this checklist after wiring live APIs. Demo mode requires no verification – the static data renders immediately on page load.
- ✓ API banner nodes show correct dot colors. CW Proxy and NinjaRMM nodes should show green dots. Time Entries and Tickets nodes should show green once live – orange means those endpoints are still using mock data.
- ✓ KPI strip populates with real numbers. If KPI tiles show 0 or – across the board, the techs array is empty – the live data fetch failed. Check the browser console (F12) for the fetch error.
- ✓ Technician cards render in the grid. At least one card should appear. If the grid is empty but no JS error appears, check that the team names in your live data match the filter dropdown values exactly.
- ✓ Clicking a card opens the Deep Dive. The utilization trend chart should render. If the chart area is blank, the history array in the technician object is empty – verify the 8-week history calculation in your mapper function.
- ✓ Heatmap renders with correct technician rows. If the heatmap is empty, the techs array has no items with populated history arrays.
- ✓ All five analytics charts render. If any chart canvas is blank after switching to the Analytics section, the chart was destroyed and not re-initialized. Check that renderCharts() is called on page load and not before the canvas elements exist in the DOM.
- ✓ Skills matrix gaps are accurate. If no GAP flags appear, check that cert names in the techs[].certs arrays match the strings in skillAreas[].coverage exactly – including spacing and capitalization.
- ✓ Filters correctly narrow the grid. Select "Critical Burnout" – only technicians with burnout ≥ 80 should appear. If all cards still show, the risk field in the technician data objects is not being set by calcRisk().
| SYMPTOM | LIKELY CAUSE | FIX |
|---|---|---|
| Grid empty, no console errors | Team name mismatch in filter | Check that team field values match filter dropdown option values exactly. |
| Burnout scores all show 0 | calcBurnout() not receiving correct inputs | Log util, ot, ah, queue for one tech and verify each is a number, not undefined. |
| Deep dive chart blank | history array empty or malformed | Each history entry must be {week:'W1', util:72} โ check mapper output. |
| After-hours always 0 | Timezone not converted before window check | Convert CW UTC timestamps to local timezone before comparing against 08:00–18:00. |
| Skills matrix no GAP flags | Cert name string mismatch | Compare techs[0].certs values against skillAreas[0].coverage values in browser console. |
| API banner all orange dots | Function proxy not reachable or returning errors | Check Function App health endpoint. Verify Key Vault references resolved (App Settings should not show raw @Microsoft.KeyVault string). |
| CORS error on API calls | Function App ALLOWED_ORIGIN not set to SharePoint URL | Update ALLOWED_ORIGIN in Function App configuration to match the SharePoint site URL. |