KNOWLEDGE BASE // OPERATIONS // TECHNICIAN BURNOUT CONSOLE
Technician Burnout Console
Complete reference for using, configuring, and integrating the MSP Technician Burnout Console. Covers the burnout scoring model, all five dashboard sections, ConnectWise PSA and NinjaRMM API integration, and the full build guide in a single document.
OPERATIONS · BURNOUT MONITORING · CONNECTWISE PSA · NINJARMM · RETENTION TOOL
FILE
technician-burnout.html
PRIMARY SOURCE
ConnectWise PSA · NinjaRMM
KEY METRIC
Burnout Score 0–100 composite
SECTIONS
5 — KPI · Grid · Heatmap · Analytics · Skills
DATA RANGE
90 days of time entries · 8-week utilization
01
WHAT THIS TOOL DOES

The Technician Burnout Console is a real-time operations dashboard that surfaces technician health risk before it becomes a retention problem. It calculates a composite burnout score for every technician by combining billable utilization, overtime hours, after-hours sessions, and ticket velocity data pulled from ConnectWise PSA and NinjaRMM.

The dashboard gives operations managers a single screen to identify who is at critical risk, who has available capacity to absorb redistribution, what skills gaps are creating concentration risk, and what specific actions will reduce burnout before a technician quits. It is designed for weekly manager review and monthly leadership reporting.

WHY THIS MATTERS The two most common causes of MSP technician burnout are invisible until they become resignations: chronic over-utilization on a small number of senior engineers while junior capacity goes underused, and skill concentration that forces specific individuals to own every ticket in their domain. This console makes both visible in real time.
DATA QUALITY PREREQUISITES — THIS CONSOLE IS ONLY AS ACCURATE AS THE TIME ENTRIES BEHIND IT

Before relying on any burnout score, utilization figure, or overtime count in this console, the underlying time entry discipline across the team must be verified. Two specific conditions will silently corrupt every metric the console produces.

PREREQUISITE 1 — TECHS MUST LOG TIME AGAINST TICKETS THROUGHOUT THE DAY The burnout score is calculated entirely from ConnectWise time entries. If a technician batches their time at the end of the day, guesses at hours, or logs everything under a single ticket instead of the actual tickets worked, the utilization calculation is meaningless. A tech who worked 11 hours across 9 tickets but logged it all as "miscellaneous support" will appear to have zero overtime and low velocity stress. The console cannot detect this — it will silently report a Healthy score for someone who is burning out. Time entry accuracy is a non-negotiable prerequisite for this tool to be trustworthy. It should be established as a team expectation and verified before the console is used for any management decisions.
PREREQUISITE 2 — ADMIN TIME MUST BE EXCLUDED FROM ALL CALCULATIONS ConnectWise time entries include non-client-facing work: internal meetings, training sessions, HR tasks, vendor calls, company admin, and other overhead logged by technicians throughout the week. If admin time is included in the utilization calculation, the burnout score is artificially inflated. A technician at 88% utilization who has 15% of their time logged as admin is actually at 73% billable — the difference between a High risk score and a healthy one. All API queries pulling time entries for this console must filter out entries where the work type or charge code is flagged as admin or internal. The specific filter depends on how your ConnectWise instance classifies admin time — see C2 for the API filter pattern.
SCENARIO | WHAT THE CONSOLE SHOWS | WHAT IS ACTUALLY TRUE | RISK
Tech batches time at end of day, logs everything under one ticket | Low utilization, low overtime, Healthy score | Unknown — actual load invisible | Silent burnout — invisible until resignation
Admin time included in utilization calculation | 88% utilization, High or Critical score | 73% billable utilization — within target band | False alarm — unnecessary management intervention
Time logged accurately against tickets, admin excluded | Accurate burnout score | Matches actual workload | Console is trustworthy
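As an illustration of the admin-exclusion rule, here is a minimal utilization sketch in the console's JavaScript idiom. The ADMIN_TYPES list and the 40-hour weekly capacity are assumptions to adapt to your instance; field names (billableOption, workType.name, actualHours) follow the ConnectWise REST time-entry shape used elsewhere in this guide, and the function name mirrors the calcUtil() helper referenced in C2.

```javascript
// Hypothetical admin work types — replace with the values from your
// ConnectWise Setup Tables → Work Types list (see C2).
const ADMIN_TYPES = ['Admin', 'Internal', 'Meeting', 'Training'];

// Billable utilization % for one technician's time entries, with admin
// and internal work types excluded before the calculation.
function calcUtil(entries, capacityHours = 40) {
  const billable = entries.filter(e =>
    e.billableOption === 'Billable' &&
    !ADMIN_TYPES.includes(e.workType?.name));
  const hours = billable.reduce((sum, e) => sum + (e.actualHours || 0), 0);
  return Math.round(100 * hours / capacityHours);
}
```

With this filter in place, 10 hours logged under "Admin" contribute nothing to utilization, which is exactly the false-positive scenario the table above describes.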
02
DATA MODEL & SCORING
BURNOUT SCORE — COMPOSITE 0–100

Each technician's burnout score is a composite of four signals. Higher scores indicate higher burnout risk. The score is not a direct sum — it is weighted to reflect that after-hours work and overtime are stronger leading indicators than raw utilization alone.

0–44 · HEALTHY
45–64 · MEDIUM RISK
65–79 · HIGH RISK
80–100 · CRITICAL
SIGNAL | SOURCE | WEIGHT DIRECTION | THRESHOLD NOTES
Billable Utilization % | CW Time Entries — billable only, admin excluded | ↑ util → ↑ score | Target band 65–80%. Admin time entries must be filtered out before calculation — see C2. Above 88% is red. Above 94% is critical.
Overtime Hours/Wk | CW Time Entries (filter=overtime) — admin excluded | ↑ OT → ↑ score | 0–5h normal. 5–10h elevated. 10h+ critical indicator. Admin overtime (e.g. late internal meetings) should also be excluded.
After-Hours Sessions | CW Tickets outside 08:00–18:00 — client tickets only | ↑ AH → ↑ score | 0–3 manageable. 3–8 elevated. 8+ strong burnout signal. Internal ticket types excluded.
Ticket Velocity Stress | CW Service Tickets / queue depth — client tickets only | ↑ queue → ↑ score | Open ticket queue vs tickets closed/week ratio. Internal/admin ticket types excluded from queue count.
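One way the weighting could be implemented is sketched below. The specific weights (0.2/0.3/0.3/0.2) and normalization ceilings are illustrative assumptions, not the console's actual calcBurnout() coefficients — they simply encode the rule stated above that overtime and after-hours carry more weight than raw utilization, using the thresholds from the signal table.

```javascript
// Illustrative weighted composite. Each signal normalizes to 0–1 against
// its critical threshold from the table, then weights sum to 1.0 with
// OT and after-hours (the leading indicators) weighted heaviest.
function calcBurnout(util, ot, ah, queue) {
  const clamp = x => Math.max(0, Math.min(1, x));
  const uScore  = clamp((util - 65) / (94 - 65)); // 65% floor, 94% critical
  const otScore = clamp(ot / 10);                 // 10h/wk = critical
  const ahScore = clamp(ah / 8);                  // 8 sessions = strong signal
  const qScore  = clamp(queue / 10);              // 10 open tickets = red
  return Math.round(100 *
    (0.2 * uScore + 0.3 * otScore + 0.3 * ahScore + 0.2 * qScore));
}
```

A technician at every critical threshold simultaneously scores 100; one inside the target band with no overtime, no after-hours work, and an empty queue scores 0.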
RISK LEVELS
CRITICAL 80–100
Immediate manager action required. No new assignments. 1-on-1 check-in this sprint. Begin ticket redistribution immediately. Turnover risk is high.
HIGH 65–79
Close monitoring required. Trend direction matters — a trending-up High is more urgent than a stable High. Automation and runbook interventions appropriate here.
MEDIUM 45–64
Watch and address underlying drivers. Check if utilization trend is upward. Automation opportunities can prevent escalation to High.
HEALTHY 0–44
Low burnout risk. Check for disengagement if utilization is trending down significantly. These technicians often have capacity to absorb redistribution from Critical/High.
TREND INDICATOR

Each technician card shows a trend arrow next to their name based on the direction of their 8-week utilization history. ↑ Up means utilization is rising — the most urgent signal even if the current score is only High. ↓ Down means utilization is falling — watch for disengagement in Low-risk technicians trending down. → Stable means the past 3 weeks are within 2% of each other.
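The stability rule above translates directly into code. This sketch assumes the history shape from section 13 ([{week:'W1', util:72}, ...]); the function name classifyTrend is illustrative, not necessarily what the console's script calls it.

```javascript
// Classify the trend arrow from an 8-week utilization history.
// "stable" when the last 3 weeks are within 2 percentage points of each
// other, otherwise up/down by direction across those 3 weeks.
function classifyTrend(history) {
  const last3 = history.slice(-3).map(h => h.util);
  const spread = Math.max(...last3) - Math.min(...last3);
  if (spread <= 2) return 'stable';
  return last3[2] > last3[0] ? 'up' : 'down';
}
```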

03
KPI STRIP

Seven KPI tiles run across the top of the console below the API banner. They calculate across the full technician dataset and update when the page refreshes. Color rules match the burnout scale — red above critical thresholds, orange elevated, green healthy.

Avg Burnout Score
Team average burnout score out of 100. Above 70 shows the warning "⚠ Team at risk." This is the headline number for leadership reporting — if this is above 70 the team as a whole needs intervention, not just individual engineers.
Critical Risk
Count of technicians in the Critical (80+) band. Always shown red. Any value above 0 should trigger a manager review this week.
High Risk
Count of technicians in the High (65–79) band. Orange. Monitor weekly. Priority action if any are also trending up.
Team Utilization
Average billable utilization % across all technicians. Target band is 65–80%. Above 85% is over-target and shown orange. Above 88% is critical red. Below 65% may indicate underutilization or data gaps.
Overtime Hrs/Wk
Total overtime hours logged across the full team in the current week from the ConnectWise time entries endpoint filtered by overtime flag.
After-Hours
Total after-hours ticket sessions across the team in the past 7 days. Tickets opened or worked outside 08:00–18:00 based on CW ticket timestamps.
Open Ticket Queue
Sum of each technician's current open ticket queue. This is not total open tickets in the system — it is the subset assigned to and owned by the technicians in the console's dataset.
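The seven tiles reduce to a single pass over the techs array. This is a sketch under the section 13 schema; the aggregation function name and return shape are illustrative, and the real renderKPIs() also applies the color rules described above.

```javascript
// Aggregate the seven KPI values from the techs array (schema: section 13).
function calcKPIs(techs) {
  const n = techs.length || 1;
  const avg = arr => arr.reduce((a, b) => a + b, 0) / n;
  const sum = key => techs.reduce((a, t) => a + t[key], 0);
  return {
    avgBurnout:    Math.round(avg(techs.map(t => t.burnout))),
    criticalCount: techs.filter(t => t.burnout >= 80).length,
    highCount:     techs.filter(t => t.burnout >= 65 && t.burnout < 80).length,
    teamUtil:      Math.round(avg(techs.map(t => t.util))),
    overtimeHrs:   sum('ot'),    // team total this week
    afterHours:    sum('ah'),    // team total, past 7 days
    openQueue:     sum('queue'), // tickets owned by techs in the dataset
  };
}
```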
04
API STATUS BANNER

The API banner runs between the hero section and the KPI strip. It shows six data source nodes, each with a live/mock status dot, the data value currently loaded, and the exact API endpoint being called. This lets any engineer verify at a glance which data is live vs simulated.

NODE | ENDPOINT | STATUS IN DEMO | WHAT IT PROVIDES
CW Proxy | /api/burnout/cw-psa | CONNECTED | ConnectWise PSA proxy — routes all CW REST calls
Time Entries | GET /v4_6_release/apis/3.0/time/entries | MOCK | 5,841 time entries over 90 days for utilization and OT calculation
Tickets | GET /service/tickets | MOCK | 4,312 service tickets — queue depth, velocity, after-hours flags
Members | GET /system/members | LIVE | 14 technician member records including roles and team assignments
NinjaRMM | GET /v2/devices · org counts | CONNECTED | Device counts per org — used for workload context per technician
Overtime API | /time/entries?filter=overtime | ACTIVE | Filtered time entries flagged as overtime for OT calculation
DOT COLORS Green dot = live connection returning real data. Orange dot = mock/simulated data. Yellow dot = active with partial data. Cyan dot = connected and authenticated. In production all dots should be green or cyan — any orange dot means that data source is not yet wired and the console is using seed data for that metric.
05
TECHNICIAN GRID

The technician grid is the primary view — a card for every technician in the dataset. Cards are ordered by burnout score descending by default. Each card shows the risk badge, burnout score with a gradient fill bar, six stat tiles, and the first three certifications. Clicking any card opens the Deep Dive panel for that technician.

CARD ELEMENTS
Risk badge
CRITICAL HIGH MEDIUM LOW — top-right of every card. Card border color matches: red for critical, orange for high, yellow for medium, green for low.
Burnout score bar
Gradient fill bar from 0–100. Red gradient for critical, orange for high, yellow-green for medium, solid green for healthy. The bar color gives an immediate visual read without needing to read the number.
Utilization %
Current week billable utilization. Above 88% shown red. 75–88% orange. Below 75% green.
Overtime
Overtime hours logged this week. Above 10h red. 5–10h orange. Below 5h neutral.
After-Hours
Count of after-hours ticket sessions in the past 7 days. Above 8 red. 3–8 orange. Below 3 neutral.
Open Tickets
Current open ticket queue assigned to this technician. Above 10 red. 6–10 orange. Below 6 accent (normal).
Closed/Wk
Tickets closed this week. Shows velocity — a high queue with low closed/week indicates backlog accumulation.
Certs
Count of active certifications. Expired certs shown with strikethrough styling in cyan. If more than 3 certs, a +N badge appears.
06
DEEP DIVE PANEL

Clicking any technician card expands a full-width Deep Dive panel below the grid. The panel shows four sub-sections for the selected technician. Clicking the same card again or the × button collapses it.

Utilization Trend Chart
An 8-week line chart of billable utilization % with a red 80% threshold line. This is the most important single view for early warning — a technician whose score is only "High" but whose chart shows 8 consecutive weeks of increase is more urgent than a stable Critical engineer.
Recent Activity Log
The last 3–4 recent tickets with timestamps, title, client, and a flag: ok (normal hours), ot (overtime logged), ah (after-hours). After-hours entries show exact times (e.g. "Mon 06:55") — these are the human signal behind the score.
Certifications
Full list of active and expired certifications. Expired certs are shown in red — these create immediate skills gap risk on the tickets they cover. The Deep Dive shows all certs, not just the first three shown on the card.
Recommendations
2–3 system-generated action recommendations with icon, title, detail, and estimated impact. These are pre-defined in the technician data object. Examples: "Redistribute 4 tickets to Tier 2 — saves ~6h/week", "Automate repetitive backup alerts — 22% of tickets are backup-status noise", "Promote to T3 rotation — career growth + retention".
USING RECOMMENDATIONS IN 1-ON-1S The recommendations in the Deep Dive are designed to be copy-pasteable into a manager's sprint notes or 1-on-1 agenda. Each one has a concrete action, the supporting data detail, and the estimated impact. A manager reviewing a Critical technician's Deep Dive should leave with a list of 2–3 specific actions for the next sprint — not just a score.
07
UTILIZATION HEATMAP

The heatmap section shows an 8×N grid — 8 weeks of data (columns) for every technician (rows). Each cell is color-coded by utilization percentage. Cells with a red border indicate weeks where that technician logged overtime.

COLOR | RANGE | MEANING
Red | >90% | Over-utilized — immediate attention. Sustained red weeks are the strongest burnout predictor.
Orange | 75–90% | High utilization — above target band. Acceptable short-term, unsustainable long-term.
Yellow | 60–75% | Target zone — healthy productive utilization.
Green | <60% | Healthy / available capacity. These technicians can absorb redistribution.
Red border | OT flag | Overtime was logged in this week — regardless of utilization color.

Reading the heatmap across a row shows a single technician's trajectory over 8 weeks. A row that transitions from yellow to orange to red over 8 weeks (like Marcus J. in the demo) shows exactly when over-utilization began and how quickly it escalated. Reading down a column shows the team's overall load in a specific week โ€” useful for identifying whether overload is systemic (whole column red) or isolated to specific individuals.
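The color bands reduce to a simple threshold ladder. This sketch mirrors the cellColor() function referenced in C5; the returned color names stand in for whatever CSS classes or hex values the console actually uses.

```javascript
// Map a weekly utilization % to a heatmap cell color, per the band
// table above: >90 red, 75–90 orange, 60–75 yellow, <60 green.
function cellColor(util) {
  if (util > 90) return 'red';      // over-utilized
  if (util >= 75) return 'orange';  // above target band
  if (util >= 60) return 'yellow';  // target zone
  return 'green';                   // available capacity
}
```

The overtime red border is applied separately from this color, since the table notes it appears regardless of the utilization band.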

08
ANALYTICS CHARTS

Five aggregate charts appear in the Analytics section, giving a team-wide view that complements the individual technician data in the grid and heatmap.

Burnout Score Distribution
Bar chart showing how many technicians fall in each burnout score bucket (0–20, 20–40, etc). Use this to answer "is our problem concentrated in one or two people, or is it a systemic team issue?"
Overtime Hours · 12 Weeks
Line chart of total team overtime hours per week over the rolling 12 weeks. A consistent upward trend is the most actionable aggregate signal — it means the team's workload is structurally exceeding capacity, not just temporarily.
Billable Utilization by Tier
Grouped bar chart showing average and maximum utilization per team tier (T1, T2, T3, NOC). The 80% target line is shown. This reveals whether overload is concentrated in a specific tier — most MSPs find T3 engineers chronically over-utilized while T1 has spare capacity.
After-Hours Sessions · 8 Weeks
Stacked area chart of total after-hours sessions per week. Useful for correlation with business events — a spike in after-hours often follows a major deployment, new client onboarding, or security incident.
Ticket Velocity vs Headcount · 90 Days
Dual-line chart showing open ticket volume and available technician-hours over 90 days. When the ticket line rises faster than the headcount line, stress increases. This is the leading indicator for burnout at the team level — it shows the structural mismatch before individual scores spike.
09
SKILLS MATRIX

The Skills Matrix section is a table of every technology stack in the environment vs the certifications and technicians covering it. It flags skill gaps, SPOF (single point of failure) situations, and demand vs coverage mismatches.

Demand level
HIGH / MED — derived from open ticket count in that technology area. High demand with thin coverage is the most urgent combination.
Certifications
List of certs that cover this skill area, including expired ones shown in red. An expired cert is treated as zero coverage for gap calculations.
Certified Techs
Names of technicians with active certifications covering this area. A single name here is a SPOF — if that technician is sick, on vacation, or resigns, the skill is uncovered.
Open Tickets
Current open ticket count in this technology area. Combined with certified tech count, this shows the per-tech ticket burden in each domain.
Gap flag
Red ⚠ GAP badge appears when: only one technician is certified (SPOF), a critical cert has expired, or demand is HIGH with fewer than 3 certified technicians. The gap note explains the specific risk.
SPOF RISK In the demo data, SentinelOne EDR and Veeam/Backup both show SPOF — only 1–2 analysts certified against 11–14 open tickets in those domains. If either of those technicians burns out and resigns, those ticket categories become uncoverable. The Skills Matrix is how you see this risk before it happens.
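The three GAP conditions can be sketched as a single predicate. The skill-area object shape here (certifiedTechs, certs with expired/critical booleans, demand) is an assumed shape for illustration — the console's actual skillAreas entries carry a pre-set gap flag per C4.

```javascript
// GAP-flag rule from the Skills Matrix: SPOF (one certified tech),
// an expired critical cert, or HIGH demand with fewer than 3 techs.
function hasGap(area) {
  const spof = area.certifiedTechs.length === 1;
  const expiredCritical = area.certs.some(c => c.expired && c.critical);
  const thinHighDemand = area.demand === 'HIGH' && area.certifiedTechs.length < 3;
  return spof || expiredCritical || thinHighDemand;
}
```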
10
FILTERS & NAVIGATION

Three filter controls sit between the KPI strip and the technician grid. All filters apply simultaneously and update the grid in real time without a page refresh.

Risk filter
Dropdown: All Risk Levels, Critical Burnout, High Risk, Medium, Healthy. Use "Critical Burnout" at the start of every manager review to focus on immediate action items only.
Team filter
Dropdown: All Teams, Tier 1 Help Desk, Tier 2 Support, Tier 3 Engineering, NOC. Use this for tier-specific reviews — a T3 lead reviewing their team, or a NOC manager reviewing overnight staff.
Search
Free-text search by technician name. Filters the grid to matching entries. Useful for jumping directly to a specific technician in a large team.
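Since all three filters apply simultaneously, the grid filter is a single conjunctive pass over the techs array. This sketch assumes the section 13 field names; the function name and the 'all' sentinel values are illustrative.

```javascript
// Apply risk, team, and name-search filters together, as the console
// does before re-rendering the grid.
function applyFilters(techs, { risk = 'all', team = 'all', search = '' } = {}) {
  const q = search.trim().toLowerCase();
  return techs.filter(t =>
    (risk === 'all' || t.risk === risk) &&
    (team === 'all' || t.team === team) &&
    (!q || t.name.toLowerCase().includes(q)));
}
```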

The left sidebar navigation scrolls to each console section: Capacity & Risk Status, Utilization Heatmap, Team Trends, and Skills Matrix. The top navigation bar provides the same section links plus a back link to the MSP Command Center index.

11
BURNOUT SCORE REFERENCE
SCORERISK LEVELRECOMMENDED ACTIONURGENCY
80โ€“100CRITICALNo new T3 assignments. Immediate ticket redistribution. 1-on-1 this sprint. Manager escalation.This week
65โ€“79 + โ†‘HIGH TRENDING UPTreat as Critical. Trend is the key indicator โ€” if utilization has risen 5+ consecutive weeks, act now.This week
65โ€“79 โ†’ stableHIGH STABLEAutomation and runbook interventions. Investigate ticket type mix. Monitor weekly.This sprint
45โ€“64MEDIUMAddress root cause before escalation. Training, tooling, or load balancing appropriate here.This month
0โ€“44 โ†’ stable/upHEALTHYMaintain. Consider for overflow redistribution from Critical/High technicians.Quarterly review
0โ€“44 + โ†“ trending downWATCHLow burnout with falling utilization may indicate disengagement. Investigate separately.This month
12
API ENDPOINTS REFERENCE
ENDPOINT | METHOD | DATA PROVIDED | PARAMETERS
/v4_6_release/apis/3.0/time/entries | GET | All time entries — base for utilization and overtime calculation | dateEnteredOnOrAfter (90 days back), memberIdentifier
/time/entries?filter=overtime | GET | Overtime-flagged entries only | Same params + overtime filter
/service/tickets | GET | Service ticket records — queue depth, timestamps for AH detection | status=open, owner.identifier
/system/members | GET | Technician member profiles — name, role, team, certifications | None required for full member list
/v2/devices | GET (NinjaRMM) | Managed device counts per organization — workload context | Organization filter optional
AFTER-HOURS DETECTION After-hours sessions are not a native ConnectWise flag. They are calculated client-side by reading the dateEntered timestamp on each ticket entry and checking whether it falls outside 08:00–18:00 in the technician's local time zone. When wiring live data, ensure the timestamp comparison uses the correct time zone for your team — UTC timestamps from the API must be converted before the window check.
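The timezone conversion noted above can be handled with Intl.DateTimeFormat rather than manual offset math, which also covers DST. This is a sketch; the 'America/Chicago' default zone is an assumption to replace with your team's IANA zone, and the window bounds should come from wherever calcAH() keeps its start/end constants (see C5).

```javascript
// True when a CW UTC timestamp falls outside the 08:00–18:00 window in
// the technician's local time zone. Intl handles DST transitions.
function isAfterHours(dateEnteredUtc, timeZone = 'America/Chicago') {
  const hour = Number(new Intl.DateTimeFormat('en-US', {
    timeZone, hour: 'numeric', hour12: false,
  }).format(new Date(dateEnteredUtc)));
  return hour < 8 || hour >= 18;
}
```

Comparing the raw UTC hour instead would misclassify most entries for any team more than a couple of hours off UTC — the "After-hours always 0" symptom in the troubleshooting table.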
13
TECHNICIAN DATA FIELDS

Each technician object in the techs array uses this schema. Understanding the schema is required before wiring live API data, as the mapCWMember() function (C2) must return objects that match this shape exactly.

FIELD | TYPE | SOURCE | DESCRIPTION
id | number | CW member ID | Unique identifier. Used by toggleDive() and DOM element IDs.
name | string | CW member | Display name shown on card and deep dive.
role | string | CW member | Job title string. Shown on card below name.
team | string | CW member | Team name. Must match filter dropdown values exactly: Tier 1, Tier 2, Tier 3, NOC.
burnout | number 0–100 | Calculated | Composite burnout score. Calculated from util, ot, ah, and queue fields.
util | number % | CW time entries | Current week billable utilization percentage.
ot | number | CW time entries | Overtime hours logged this week.
ah | number | CW tickets | After-hours sessions in past 7 days.
tickets | number | CW tickets | Tickets closed this week.
queue | number | CW tickets | Current open ticket queue assigned to this technician.
risk | string | Calculated | Risk level: critical, high, medium, low. Drives card border color and filter.
trend | string | Calculated | Utilization trend: up, down, stable. Based on last 3 weeks of history.
certs | string[] | CW member / manual | Active certification names. Shown on card and deep dive.
expiredCerts | string[] | CW member / manual | Expired cert names shown in red with strikethrough.
history | object[] | CW time entries | 8-week utilization history: [{week:'W1', util:72}, ...]. Powers the deep dive trend chart.
recentTickets | object[] | CW tickets | Last 3–4 recent tickets: {time, title, flag:'ok'|'ot'|'ah', meta}.
recommendations | object[] | Manual / rules | Action recommendations: {icon, title, detail, impact}. Can be generated by rule logic or curated manually.
CONFIGURATION GUIDE
C1
PREREQUISITES

The console runs fully in demo mode with no prerequisites. For live data, the following are required before wiring begins.

ConnectWise PSA Access
API credentials for the ConnectWise REST API. You need a CW API member with a public/private key pair. The member requires read access to: Time Entries, Service Tickets, and System Members. Navigate to CW Admin → API Members → Add Member to create one.
NinjaRMM API Token
A NinjaRMM API client with OAuth token. Required for the device count data that provides workload context per technician. NinjaRMM Admin → Apps → API → Add Application.
Azure Function Proxy
Both the CW API and NinjaRMM API require credentials. Those credentials must not be exposed in browser-side code. Route all API calls through the Azure Function proxy (fn-proxy) built during Stage 1. The Function App reads credentials from Key Vault and forwards requests server-side.
SharePoint or Web Host
The HTML file must be served over HTTPS for API calls to work. SharePoint document library is the standard deployment target for this platform.
C2
CONNECTWISE PSA WIRING

ConnectWise provides the core data for the burnout score: time entries (utilization, overtime), service tickets (queue depth, velocity, timestamps for after-hours detection), and member records (names, roles, teams).

STEP 1 — ADD YOUR API BASE URL AND CREDENTIALS TO KEY VAULT

In Azure Key Vault, add three secrets: cw-api-url (your CW instance URL, e.g. https://na.myconnectwise.net), cw-client-id, and cw-api-key (the public+private key pair encoded as publicKey+privateKey in Base64 for Basic auth).
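For reference, a minimal sketch of how the proxy could assemble the Basic auth header from those secrets, in Node. Note that ConnectWise Basic auth is commonly built from companyId+publicKey:privateKey rather than the keys alone — verify the exact format against your instance before storing the encoded value. All credential strings below are placeholders.

```javascript
// Hypothetical values — in the real proxy these come from Key Vault
// (cw-client-id, cw-api-key), never from browser-side code.
const companyId  = 'mycompany';
const publicKey  = 'pubKeyPLACEHOLDER';
const privateKey = 'privKeyPLACEHOLDER';

// CW REST Basic auth: Base64("companyId+publicKey:privateKey")
const token = Buffer.from(`${companyId}+${publicKey}:${privateKey}`).toString('base64');

const headers = {
  Authorization: `Basic ${token}`,
  clientId: 'your-cw-client-id-guid', // placeholder — CW requires this header
  Accept: 'application/json',
};
```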

STEP 2 — ADD CW ROUTES TO THE FUNCTION PROXY

In fn-proxy/src/functions/proxy.js, the existing proxy already forwards any path. The CW routes below will work without code changes — just ensure the Key Vault secrets are named correctly and the Function App's API_BASE_URL setting points to your CW instance.

ADMIN TIME EXCLUSION — REQUIRED BEFORE ANY BURNOUT CALCULATION All time entry queries must exclude entries where workType/name is classified as admin, internal, or non-client-facing in your ConnectWise instance. The exact charge code or work type name depends on how your PSA is configured — common values are "Admin", "Internal", "Meeting", "Training", "Company". Check your CW work types under Setup Tables → Work Types and identify which ones should be excluded. Add them to the conditions filter on every time entry query. If admin time is not excluded, utilization will be overstated and burnout scores will produce false positives.
CALLS YOUR DASHBOARDS WILL MAKE TO THE PROXY — WITH ADMIN EXCLUSION
// Billable time entries only — 90-day window — admin work types excluded
GET /api/proxy/v4_6_release/apis/3.0/time/entries
    ?dateEnteredOnOrAfter=2025-12-20
    &conditions=billableOption="Billable" AND workType/name NOT IN ("Admin","Internal","Meeting","Training")
    &pageSize=1000

// Overtime-only entries — admin excluded
GET /api/proxy/v4_6_release/apis/3.0/time/entries
    ?conditions=billableOption="DoNotBill" AND overtimeHours > 0 AND workType/name NOT IN ("Admin","Internal","Meeting","Training")
    &pageSize=1000

// Open service tickets — client tickets only, internal ticket types excluded
GET /api/proxy/v4_6_release/apis/3.0/service/tickets
    ?conditions=status/name="Open" AND board/name NOT IN ("Internal","Admin Board")
    &pageSize=1000

// Member roster
GET /api/proxy/v4_6_release/apis/3.0/system/members
    ?pageSize=200
FINDING YOUR ADMIN WORK TYPE NAMES In ConnectWise: System → Setup Tables → Work Types. Export the list and identify every work type that is internal or non-client-facing. These are the values to add to the NOT IN filter. The names must match exactly as they appear in the PSA. When in doubt, pull a sample of time entries for one technician and inspect the workType.name field in the API response before setting the filter.
STEP 2b — VERIFY TIME ENTRY DISCIPLINE BEFORE TRUSTING THE DATA
TIME ENTRY DISCIPLINE VERIFICATION — DO THIS BEFORE GOING LIVE Before using this console for any management decision, audit a sample of recent time entries for 2–3 technicians directly in ConnectWise. For each technician, check: Are entries logged against specific ticket numbers or against generic catch-all buckets? Are entries spread throughout the day or all logged as a single block at the end? Are hours estimates or actuals? If any technician is logging time in bulk batches against generic codes, the burnout score for that person is not reliable. Address time entry discipline as a team expectation before the console data is treated as authoritative.
STEP 3 — REPLACE buildPayload() WITH LIVE FETCH

At the top of the script section in technician-burnout.html, find the techs array and the comment // ══ DATA ══. Replace the static array with an async loadTechs() function that calls the proxy endpoints above and maps the results using mapCWMember() and calcBurnout().

MAPPER PATTERN — ADAPT FIELD NAMES TO YOUR CW INSTANCE
async function loadTechs() {
  const [members, entries, tickets] = await Promise.all([
    fetch('/api/proxy/v4_6_release/apis/3.0/system/members').then(r => r.json()),
    fetch('/api/proxy/v4_6_release/apis/3.0/time/entries?pageSize=1000').then(r => r.json()),
    fetch('/api/proxy/v4_6_release/apis/3.0/service/tickets?conditions=status/name="Open"').then(r => r.json()),
  ]);
  return members.map(m => {
    const myEntries = entries.filter(e => e.member?.identifier === m.identifier);
    const myTickets = tickets.filter(t => t.owner?.identifier === m.identifier);
    const util = calcUtil(myEntries);
    const ot = calcOT(myEntries);
    const ah = calcAH(myTickets); // timestamps outside 08:00–18:00
    const burnout = calcBurnout(util, ot, ah, myTickets.length);
    return {
      id: m.id,
      name: m.firstName + ' ' + m.lastName[0] + '.',
      role: m.title,
      team: mapTeam(m.workRole?.name), // map to Tier 1/2/3/NOC
      burnout, util, ot, ah,
      tickets: myEntries.filter(e => e.billableOption === 'Billable').length,
      queue: myTickets.length,
      risk: calcRisk(burnout),
      trend: 'stable',   // calc from 8-week history
      certs: [],         // manual or from CW custom fields
      expiredCerts: [],
      history: buildHistory(m.identifier, entries),
      recentTickets: myTickets.slice(0, 4).map(mapTicket),
      recommendations: genRecs(burnout, util, ot, ah),
    };
  });
}
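The mapper assumes several helpers exist. Two of the simpler ones can be sketched as follows — these are hedged illustrations, not the console's actual implementations. calcOT sums the overtimeHours field from the CW time-entry shape, and this calcAH variant compares raw UTC hours for brevity; production code should convert to the technician's local time zone first, as section 12 warns.

```javascript
// Sum overtime hours across a technician's time entries.
function calcOT(entries) {
  return entries.reduce((sum, e) => sum + (e.overtimeHours || 0), 0);
}

// Count tickets entered outside 08:00–18:00. Simplified to UTC hours —
// convert timestamps to local time before this check in production.
function calcAH(tickets) {
  return tickets.filter(t => {
    const h = new Date(t.dateEntered).getUTCHours();
    return h < 8 || h >= 18;
  }).length;
}
```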
C3
NINJARMM WIRING

NinjaRMM provides device count data per organization — used for workload context in the API banner. It is secondary to the ConnectWise data and can be left in mock/connected state while CW data is wired first.

Endpoint
GET /v2/devices — returns all managed devices. Group by organization ID to get per-org device counts. Add your NinjaRMM base URL and OAuth token to Key Vault as ninja-api-url and ninja-api-token.
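The group-by step is a one-pass reduction over the device list. The organizationId field name follows the NinjaRMM v2 device schema; the function name is illustrative.

```javascript
// Reduce the /v2/devices response to { orgId: deviceCount } for the
// API banner and optional per-technician workload weighting.
function deviceCountsByOrg(devices) {
  const counts = {};
  for (const d of devices) {
    counts[d.organizationId] = (counts[d.organizationId] || 0) + 1;
  }
  return counts;
}
```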
Usage in console
Device counts appear in the API banner node labeled "NinjaRMM" and can optionally be used to weight the workload per technician โ€” technicians managing high-device-count clients may have elevated background alert volume that contributes to burnout independently of ticket counts.
PRIORITY ORDER Wire ConnectWise first. CW time entries and tickets provide 90% of the burnout signal. NinjaRMM device data is context, not the core metric. A fully wired CW integration with a mocked NinjaRMM node is production-ready for the burnout scoring function.
C4
CONFIGURE TECHNICIAN DATA

Before wiring live APIs, update the static techs array to reflect your actual team. This ensures the demo runs with realistic data for your environment and the filters (team names, role labels) match your org structure.

1. Replace technician names and roles. Edit the techs array at the top of the script block. Each object needs name, role, team, and initial values for burnout, util, ot, ah, tickets, queue.
2. Confirm team names match the filter dropdown. The team filter expects exact string matches: Tier 1, Tier 2, Tier 3, NOC. If your org uses different names (e.g. "L1 Help Desk"), either update the team values in the data or update the dropdown option values in the HTML.
3. Update certifications. The certs array and expiredCerts array are strings. These also feed the Skills Matrix — the skillAreas array references cert names by string match. If you rename a cert, update it in both places.
4. Update the skillAreas array. The Skills Matrix is driven by the skillAreas array below the techs array. Update each entry's techs list (technician names), coverage (cert names), openTickets, and gap flag to match your environment.
C5
CUSTOMIZE BURNOUT SCORING

The burnout score thresholds and color rules are hardcoded in four places in the script. When wiring live data, you can tune these to match your organization's norms — an MSP with a higher baseline utilization rate may want to shift the "high" threshold from 80% to 85%.

THRESHOLD | CURRENT VALUE | WHERE TO CHANGE
Critical burnout score | 80+ | renderTechGrid() → bColor variable · renderKPIs() → risk counts · calcRisk()
High burnout score | 65+ | Same three locations as above
Critical utilization (card color) | 88%+ | renderTechGrid() → tc-stat-v color on utilization tile
High utilization (card color) | 75%+ | Same location
Heatmap: over-utilized | 90%+ | renderHeatmap() → cellColor() function
Heatmap: high utilization | 75–90% | Same cellColor() function
Target utilization band | 65–80% | renderCharts() → chartUtil dataset label and threshold line
After-hours window | 08:00–18:00 | calcAH() function — adjust start/end hour constants
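Because the thresholds live in four separate places, one tuning change means four edits. A small refactor worth considering — sketched here as an assumption, not something the shipped file does — is pulling them into a single config object that calcRisk() and the render functions all read:

```javascript
// Centralized threshold config — tune once, every consumer follows.
const THRESHOLDS = {
  riskCritical: 80,  // burnout score bands
  riskHigh: 65,
  riskMedium: 45,
  utilCritical: 88,  // card color cutoffs
  utilHigh: 75,
  ahStart: 8,        // after-hours window (08:00–18:00)
  ahEnd: 18,
};

// Risk band from a burnout score, reading the shared config.
function calcRisk(burnout) {
  if (burnout >= THRESHOLDS.riskCritical) return 'critical';
  if (burnout >= THRESHOLDS.riskHigh) return 'high';
  if (burnout >= THRESHOLDS.riskMedium) return 'medium';
  return 'low';
}
```

Shifting the "high" threshold to 85% utilization, for example, then becomes a single edit to THRESHOLDS rather than a hunt through renderTechGrid(), renderKPIs(), renderHeatmap(), and renderCharts().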
C6
VERIFY & TROUBLESHOOT

Run through this checklist after wiring live APIs. Demo mode requires no verification — the static data renders immediately on page load.

✓ API banner nodes show correct dot colors. CW Proxy and NinjaRMM nodes should show green dots. Time Entries and Tickets nodes should show green once live — orange means those endpoints are still using mock data.
✓ KPI strip populates with real numbers. If KPI tiles show 0 or — across the board, the techs array is empty — the live data fetch failed. Check the browser console (F12) for the fetch error.
✓ Technician cards render in the grid. At least one card should appear. If the grid is empty but no JS error appears, check that the team names in your live data match the filter dropdown values exactly.
✓ Clicking a card opens the Deep Dive. The utilization trend chart should render. If the chart area is blank, the history array in the technician object is empty — verify the 8-week history calculation in your mapper function.
✓ Heatmap renders with correct technician rows. If the heatmap is empty, the techs array has no items with populated history arrays.
✓ All five analytics charts render. If any chart canvas is blank after switching to the Analytics section, the chart was destroyed and not re-initialized. Check that renderCharts() is called on page load and not before the canvas elements exist in the DOM.
✓ Skills matrix gaps are accurate. If no GAP flags appear, check that cert names in the techs[].certs arrays match the strings in skillAreas[].coverage exactly — including spacing and capitalization.
✓ Filters correctly narrow the grid. Select "Critical Burnout" — only technicians with burnout ≥80 should appear. If all cards still show, the risk field in the technician data objects is not being set by calcRisk().
TROUBLESHOOTING QUICK REFERENCE
SYMPTOM | LIKELY CAUSE | FIX
Grid empty, no console errors | Team name mismatch in filter | Check that team field values match filter dropdown option values exactly.
Burnout scores all show 0 | calcBurnout() not receiving correct inputs | Log util, ot, ah, queue for one tech and verify each is a number, not undefined.
Deep dive chart blank | history array empty or malformed | Each history entry must be {week:'W1', util:72} — check mapper output.
After-hours always 0 | Timezone not converted before window check | Convert CW UTC timestamps to local timezone before comparing against 08:00–18:00.
Skills matrix no GAP flags | Cert name string mismatch | Compare techs[0].certs values against skillAreas[0].coverage values in browser console.
API banner all orange dots | Function proxy not reachable or returning errors | Check Function App health endpoint. Verify Key Vault references resolved (App Settings should not show raw @Microsoft.KeyVault string).
CORS error on API calls | Function App ALLOWED_ORIGIN not set to SharePoint URL | Update ALLOWED_ORIGIN in Function App configuration to match the SharePoint site URL.