Basics – How AI Works
Basics
Artificial Intelligence (AI)
Software that can perform tasks that normally require human thinking – like reading text, answering questions, writing, or spotting patterns in data.
Think of it as a very fast, very well-read assistant that learned by reading enormous amounts of text – not by living in the world.
Basics
Large Language Model (LLM)
The type of AI behind tools like ChatGPT, Claude, and Copilot. It's trained on massive amounts of text and generates a response by repeatedly predicting the most likely next word (or phrase).
Like autocomplete on your phone, but trained on billions of pages instead of just your texts – so it can write whole documents.
Basics
Prompt
The text you type to an AI tool. It's your question, instruction, or request. The quality of your prompt directly affects the quality of the AI's response.
If you hired a smart contractor and gave them vague directions, the work would be vague. A clear brief gets a better result. Same with prompts.
Basics
Token
The unit AI uses to measure text. Not exactly a word – more like a word-chunk. "Unbelievable" might be 3 tokens. "cat" is 1. AI APIs charge by tokens consumed.
Like charging by the letter, but grouped into short syllable chunks. More text in = more tokens = higher cost.
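To see tokenization in practice, OpenAI's tiktoken library splits text the way its models do. A minimal sketch (exact token counts vary by model and tokenizer):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-era models

for text in ["cat", "Unbelievable", "The invoice is attached."]:
    print(f"{text!r} -> {len(enc.encode(text))} token(s)")
```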
Basics
Context Window
How much text an AI can "see" at once in a conversation. Models today can hold 128,000 to 1,000,000 tokens – roughly entire books – before they start forgetting earlier content.
Like working memory. A human can only hold so much in mind at once. When the conversation exceeds the window, the AI loses the oldest parts.
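How tools cope with the limit is usually simple: drop the oldest turns first. A minimal sketch (the 8,000-token budget and the 4-characters-per-token estimate are illustrative; real tools use the model's actual tokenizer):

```python
def rough_tokens(text: str) -> int:
    return len(text) // 4  # crude estimate: roughly 4 characters per token

def trim_to_window(messages: list[str], budget: int = 8_000) -> list[str]:
    """Drop the oldest messages until the conversation fits the token budget."""
    trimmed = list(messages)
    while trimmed and sum(rough_tokens(m) for m in trimmed) > budget:
        trimmed.pop(0)  # the oldest turn is "forgotten" first
    return trimmed
```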
Basics
Hallucination
When an AI confidently states something that is wrong. It doesn't know it's wrong – it's generating plausible-sounding text, not checking a fact database.
Like a colleague who fills gaps in their knowledge with confident guesses. Sounds right, but you still need to verify anything important before acting on it.
Basics
Training Data
The text an AI was trained on – books, websites, code, articles. The model learned patterns from this data. It doesn't "remember" specific pages, but its responses reflect what it absorbed.
Like an education. A doctor's responses come from years of reading medical literature. An AI's responses come from whatever it was trained on.
Basics
Knowledge Cutoff
The date after which an AI has no information. If something happened after its cutoff, the model simply doesn't know about it unless you provide the context yourself.
Like asking someone who's been on a remote expedition for a year about current events. Smart, capable – just not up to date.
Basics
Input / Output
Input = what you send to the AI (your prompt, context, documents). Output = what the AI sends back (its response). Both are measured in tokens and affect cost on API plans.
You ask a question (input). The AI answers (output). Output costs more because generating text is more work than reading it.
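The cost asymmetry is easy to check with back-of-envelope arithmetic. A sketch (the per-million-token prices are placeholders; check your provider's current rate card):

```python
INPUT_PRICE = 3.00    # dollars per million tokens read (illustrative)
OUTPUT_PRICE = 15.00  # dollars per million tokens generated (typically higher)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# A 2,000-token prompt that produces a 500-token answer:
print(f"${estimate_cost(2_000, 500):.4f}")  # $0.0135
```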
Models – Choosing the Right AI
Models
Model
A specific trained version of an AI. Different models from the same company (like GPT-4.1 vs GPT-5) have different capabilities, speeds, and costs. Picking the right one for the task saves money.
Like choosing between a junior analyst and a senior consultant. Both can help – but you wouldn't hire the senior for every task.
Models
GPT (OpenAI)
OpenAI's family of models – GPT-4.1 is the cost-effective default, GPT-5 is higher quality (and higher cost). The models behind ChatGPT and Microsoft Copilot.
GPT-4.1 is your go-to workhorse. GPT-5 Reasoning is for genuinely complex problems – don't use it for routine tasks or you'll overspend.
Models
Claude (Anthropic)
Anthropic's AI model family. Sonnet 4.6 is the recommended default – good balance of quality and cost. Opus is the high-capability tier for complex, high-stakes work.
Think of Sonnet as your dependable daily driver, Opus as the specialist you bring in for the hardest cases.
Models
Reasoning Model
A model variant that "thinks through" a problem before answering – spending extra tokens on internal reasoning steps. Slower and more expensive, but much better for complex logic tasks.
Like the difference between a snap judgment and sitting down to work through a problem on paper. More effort, more reliable for hard questions.
Models
API
A way for software systems to talk to each other. When your tools call an AI model programmatically – not through a chat window – they're using the API. API usage is billed by token.
Like a waiter at a restaurant – it's the interface between your app (the customer) and the AI (the kitchen). You order via the API, the AI delivers the response.
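In practice the "order" is an HTTPS request made by a small client library. A minimal sketch using OpenAI's Python client (the model name and prompt are illustrative; Anthropic's client follows the same pattern):

```python
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1",  # which "kitchen" handles the order
    messages=[{"role": "user", "content": "Summarize this ticket in two lines."}],
)
print(response.choices[0].message.content)  # billed by input + output tokens
```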
Models
Preview / Experimental Model
A model that's available but not yet fully stable. Behavior, pricing, and availability may change without notice. Not suitable for building client-facing automations you depend on.
Like a beta software release – use it to explore, but don't build critical workflows on it until it reaches general availability.
Models
Token Caching
A cost-saving feature where repeated input (like a system prompt sent with every request) is cached. Anthropic's prompt cache bills cache reads at roughly 90% less than normal input tokens (writing to the cache costs slightly more).
Like saving a template you use every day – instead of retyping it every time, you reuse the saved version at a discount.
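With Anthropic's API you mark the reusable block explicitly (a sketch; the model id and system prompt are illustrative, and only blocks above a minimum token size actually get cached):

```python
# pip install anthropic
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
SYSTEM_PROMPT = "You are an MSP triage assistant. <...long standing instructions...>"

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id; use your approved default
    max_tokens=1024,
    system=[{
        "type": "text",
        "text": SYSTEM_PROMPT,                   # the big block resent on every request
        "cache_control": {"type": "ephemeral"},  # cache it; later reads are discounted
    }],
    messages=[{"role": "user", "content": "Triage: disk at 95% on SRV-01."}],
)
print(response.content[0].text)
```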
Models
System Prompt
Hidden instructions given to the AI before the user types anything. It sets the AI's role, tone, rules, and context. In MSP AI Resource Hub tools, this is how the AI "knows" it's an MSP assistant.
Like the briefing you give a new hire before their first day – it shapes how they approach every conversation that follows.
Agents – AI That Takes Action
Agents
AI Agent
An AI that doesn't just answer questions – it takes actions. It can call APIs, read data, make decisions, and trigger other tools to complete a multi-step task without you doing each step manually.
Like hiring someone who can independently run a whole process – not just answer your question, but actually go do the thing.
Agents
Pipeline
A sequence of steps an AI agent runs in order. Each step builds on the last – one model might plan, another might execute, another might verify. The output of one stage feeds into the next.
Like an assembly line. Each station does its part and passes the work forward. No one station does everything.
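In code, a pipeline is often just functions called in order, each consuming the previous stage's output. A sketch (ask_model is a placeholder for whatever model call your stack uses):

```python
def ask_model(instructions: str, task: str) -> str:
    """Placeholder for a real model call (see the API entry above)."""
    raise NotImplementedError

def run_pipeline(ticket: str) -> str:
    plan = ask_model("You are a planner. Break this ticket into steps.", ticket)
    work = ask_model("You are an executor. Carry out this plan.", plan)
    check = ask_model("You are a verifier. Flag any errors in this work.", work)
    return check  # each station did its part and passed the work forward
```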
Agents
Multi-Agent
Using more than one AI model in a workflow, each doing what it's best at. In our stack, Copilot generates code and Claude reviews logic – two specialists instead of one generalist.
Like having a builder and a building inspector on the same project. The builder builds fast; the inspector catches the mistakes the builder can't see in their own work.
Agents
Orchestration
Coordinating multiple agents or pipeline steps – routing tasks, managing handoffs, and deciding what happens next based on results. Copilot Studio handles orchestration in our MSP stack.
Like a traffic controller at a busy intersection. It doesn't drive – it decides who goes where and when.
Agents
Human-in-the-Loop
A checkpoint in an AI pipeline where a human must review or approve before the process continues. Prevents automated errors from cascading across systems without anyone noticing.
Like a purchase approval threshold – small things go through automatically, but anything above a certain risk level needs a human to sign off.
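As a pipeline step, the checkpoint is usually a simple gate. A sketch (the risk scores, threshold, and helper are placeholders for your own ticketing or approval flow):

```python
def run_automatically(action: dict) -> str:
    """Placeholder for the automated execution step."""
    return f"done: {action['name']}"

def execute_with_checkpoint(action: dict, risk_threshold: int = 7) -> str:
    if action["risk_score"] >= risk_threshold:
        return "queued: waiting for engineer sign-off"  # pipeline pauses here
    return run_automatically(action)  # low-risk work flows straight through

print(execute_with_checkpoint({"name": "restart service", "risk_score": 3}))
print(execute_with_checkpoint({"name": "wipe device", "risk_score": 9}))
```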
Agents
Trigger
What starts an agent pipeline. A trigger can be manual (someone clicks a button), scheduled (runs every night at 2am), or event-driven (fires when an alert comes in from your RMM).
Like an alarm clock. You set the condition, and it fires the action when the condition is met.
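The three trigger styles in miniature (a sketch; the webhook payload fields and the `schedule` library usage are illustrative):

```python
def run_triage_pipeline(source: str) -> None:
    print(f"pipeline started ({source})")

# 1. Manual: a person clicks a button or runs a command.
run_triage_pipeline("manual click")

# 2. Scheduled: e.g. with the `schedule` library (pip install schedule):
#    schedule.every().day.at("02:00").do(run_triage_pipeline, "nightly job")

# 3. Event-driven: a webhook handler fires when your RMM posts an alert.
def on_rmm_webhook(payload: dict) -> None:
    if payload.get("severity") == "critical":
        run_triage_pipeline(f"RMM alert: {payload.get('device')}")
```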
Agents
ROI / Break-Even (AI)
Return on investment – whether the time and money a pipeline saves exceeds what it costs to run. Break-even is how many months until setup costs are recovered from monthly savings.
If a pipeline costs $500/month to run but saves 20 engineer hours at $75/hr, it returns $1,500/month – that's ROI-positive by $1,000.
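The same arithmetic as a reusable calculation (the $500, 20 hours, and $75/hr figures come from the example above; the $6,000 setup cost is made up for illustration):

```python
def monthly_roi(run_cost: float, hours_saved: float, hourly_rate: float) -> float:
    """Net monthly return: labor savings minus what the pipeline costs to run."""
    return hours_saved * hourly_rate - run_cost

def break_even_months(setup_cost: float, net_monthly: float) -> float:
    """Months until one-time setup costs are recovered from monthly savings."""
    return setup_cost / net_monthly

net = monthly_roi(run_cost=500, hours_saved=20, hourly_rate=75)
print(net)                            # 1000.0 -> ROI-positive by $1,000/month
print(break_even_months(6_000, net))  # 6.0 -> a $6,000 build pays back in 6 months
```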
Agents
Automation vs Agent
Automation follows a fixed script (if X then Y). An agent can reason and adapt – it can decide what to do next based on what it found, and handle situations the script didn't anticipate.
A thermostat is automation. A human HVAC tech is an agent – they can diagnose and adapt. AI agents sit between the two, closer to the tech.
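The distinction in one sketch: the automation's branch is fixed at write time, while the agent asks a model to choose (decide_next_step is a placeholder for a real model call):

```python
def decide_next_step(prompt: str) -> str:
    """Placeholder for an LLM call that returns a chosen action."""
    raise NotImplementedError

def automation(disk_pct: int) -> str:
    # Fixed script: the rule was chosen at write time and never adapts.
    return "clear temp files" if disk_pct > 90 else "do nothing"

def agent(diagnostics: str) -> str:
    # The model reasons over what it found and picks the next action itself.
    return decide_next_step(f"Given these diagnostics, what should we do next?\n{diagnostics}")
```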
Security – AI Risk & Data Safety
Security
PII
Any data that identifies a real person – full names, email addresses, phone numbers, Social Security numbers, IP addresses linked to individuals. Never paste PII into a public AI tool.
If someone found it in a data breach and could identify or contact a specific person with it – it's PII. Treat it accordingly.
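Teams often add a crude pre-flight check before anything goes to an AI tool. A minimal sketch (regexes like these catch only obvious cases; they're no substitute for a proper DLP tool):

```python
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def flag_pii(text: str) -> list[str]:
    """Return the kinds of likely PII found, so a human can scrub first."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(text)]

print(flag_pii("Call Jane at 555-867-5309, SSN 123-45-6789"))  # ['ssn', 'phone']
```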
Security
Shadow AI
AI tools employees use without IT or security review. Just like shadow IT, they may have no DPA, no SOC 2 compliance, and unknown data retention. A top-ranked risk in our register.
Like using a personal Dropbox for company files because it's faster than SharePoint. Convenient – until there's a breach.
Security
DPA
A Data Processing Agreement – a legal contract between your company and an AI vendor defining how your data is handled, retained, and deleted. Required for GDPR, CCPA, and most enterprise client contracts. No DPA = no approval.
The paperwork that says "you can't sell what I give you, you'll delete it on request, and here's what happens if there's a breach."
Security
SOC 2 Type II
An independent security audit that verifies a vendor's controls have been tested and shown to work over time (usually 6–12 months). The gold standard for enterprise vendor trust. Type I is a point-in-time snapshot; Type II is better.
Like a continuous performance review vs a one-day interview. Type II proves the controls actually work day-to-day, not just on audit day.
Security
HIPAA
U.S. law protecting health information. Any AI vendor handling patient data or PHI (Protected Health Information) must have a BAA (Business Associate Agreement) in place. No BAA = not suitable.
The legal framework that says "health data is sensitive, handle it with documented safeguards, or face significant fines."
Security
Training Opt-Out
Whether you can prevent the AI vendor from using your prompts and outputs to train future models. Enterprise and API tiers typically exclude your data from training by default; consumer free tiers often don't.
The difference between a contractor who keeps your work confidential vs one who puts it in their portfolio. Always confirm before pasting client data.
Security
AI-Generated Code Risk
Code written by AI can contain vulnerabilities, hardcoded credentials, or insecure patterns if deployed without review. AI-generated code must go through the same security review as human-written code.
A fast typist who makes mistakes is still a fast typist who makes mistakes. Speed doesn't equal quality โ review before you deploy.
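A classic example of what review should catch: AI assistants sometimes produce working code with a credential baked in. The fix is standard practice (variable and secret names are illustrative):

```python
import os

# What AI sometimes generates (works, but leaks a secret into source control):
#   api_key = "sk-live-..."  # hardcoded credential: reject in review

# What should ship (the secret comes from the environment or a vault):
api_key = os.environ["RMM_API_KEY"]  # fails loudly if the secret isn't configured
```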
Security
AI Phishing
Attackers using AI to craft highly convincing phishing emails, fake voices, or deepfake video. AI lowers the skill bar for impersonation – attacks look and sound more legitimate than ever before.
Phishing used to be easy to spot by bad grammar. Now AI can write a perfect email impersonating your CEO. Verify anything unusual through a second channel.
Copilot – Microsoft AI in Your Tools
Copilot
Microsoft 365 Copilot
AI built into Word, Excel, Outlook, Teams, and PowerPoint. It reads your Microsoft 365 data (emails, files, meetings) to give contextual answers without you pasting anything. $30/user/month add-on.
Like an assistant who's already read all your emails and attended all your meetings – so you don't have to catch them up every time.
Copilot
Microsoft Graph
Microsoft's API that connects Copilot to your tenant's real data – emails, calendar events, Teams messages, files. It's why Copilot can answer "what happened on this project last week?" without being handed the files manually.
The connective tissue that lets Copilot read your actual work data instead of operating in a vacuum.
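Under the hood these are ordinary REST calls. A minimal sketch of reading recent mail through Graph (assumes you already have an OAuth access token; acquiring one is its own setup):

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"

def recent_messages(access_token: str, top: int = 5) -> list[dict]:
    """List the signed-in user's most recent emails via Microsoft Graph."""
    resp = requests.get(
        f"{GRAPH}/me/messages",
        headers={"Authorization": f"Bearer {access_token}"},
        params={"$top": top, "$select": "subject,from,receivedDateTime"},
    )
    resp.raise_for_status()
    return resp.json()["value"]
```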
Copilot
Copilot Studio
Microsoft's low-code platform for building custom AI agents. Point it at SharePoint docs, databases, or external APIs – then deploy as a Teams bot or website widget. $200/month + $0.01–0.03 per message.
A toolbox for building your own AI assistant, without needing to write backend code. You define what it knows and what it can do.
Copilot
Power Automate
Microsoft's workflow automation tool. In multi-agent pipelines, it's the "plumbing" – it passes Copilot's output to Claude, or triggers actions in other systems based on what the AI decided.
The conveyor belt between stations. It doesn't do the work – it moves the work from one place to the next automatically.
Copilot
Grounded Response
When an AI answers based on specific documents or data sources you've pointed it at – not just its training. "Graph-grounded" means using your real Microsoft 365 data. Reduces hallucination risk.
The difference between asking someone what they think vs. asking them to read the report first, then answer. Grounded = read the report first.
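Grounding in its simplest form is just putting the source material into the prompt and telling the model to stick to it. A sketch (ask_model is a placeholder for a real model call):

```python
def ask_model(prompt: str) -> str:
    """Placeholder for a real model call."""
    raise NotImplementedError

def grounded_answer(question: str, report: str) -> str:
    prompt = (
        "Answer using ONLY the report below. "
        "If the report doesn't cover it, say 'not in the report'.\n\n"
        f"--- REPORT ---\n{report}\n--- END REPORT ---\n\n"
        f"Question: {question}"
    )
    return ask_model(prompt)  # read the report first, then answer
```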
Copilot
Business Chat (Copilot)
Copilot's cross-app mode that can pull from your emails, calendar, Teams, files, and SharePoint simultaneously to answer questions spanning your whole work context.
Instead of asking one tool at a time, Business Chat is like asking a research assistant who has access to everything at once.
Copilot
Connector
A pre-built integration that lets Copilot Studio agents talk to external services – ConnectWise, ServiceNow, Salesforce, and 1,000+ others. Connectors let agents take action in those systems, not just read from them.
Like a plug adaptor – it's what lets your agent "plug in" to an existing system it wasn't originally built for.
Governance – Responsible AI Use
Governance
AI Acceptable Use Policy
The policy defining what employees can and cannot do with AI tools at work. Covers approved tools, prohibited data types, required review before sending output to clients, and consequences for violations.
The rulebook – what's in bounds, what's out of bounds, and what happens if you cross the line.
Governance
AI Risk Register
A tracked inventory of known AI-related risks – data leakage, shadow AI, hallucination in client output, over-reliance on automation – each with a severity score, owner, and controls in place.
Like an insurance policy checklist. It doesn't prevent bad things from happening, but it means you've thought them through and have a plan when they do.
Governance
Vendor Assessment
A checklist review of any new AI tool before it's approved for use – covering SOC 2, DPA, data retention, training opt-out, MFA support, and breach notification. A single "No" on a required item blocks approval.
Like vetting a subcontractor before giving them access to your client's building. You wouldn't skip that step just because their website looks professional.
Governance
AI Incident Response
The plan for what to do when something goes wrong with AI – a data leak via a public AI tool, a hallucination sent to a client, an automated pipeline that misfired. Who to notify, what to contain, how to document.
Like a fire drill. You hope you never need it. But when you do, everyone knowing their role is what prevents chaos.
Governance
Change Advisory Board (CAB)
The group that reviews and approves AI agent requests before build begins. In the MSP AI Resource Hub framework, Change Management runs the CAB function – no agent moves to build without CAB sign-off and Risk Management clearance.
Like a planning committee. They don't build the thing – they make sure it should be built before anyone picks up a tool.
Governance
Controls Mapping
Documenting which AI use cases trigger which compliance criteria (like SOC 2 CC6.7 for data sharing) and what controls are in place to address them. Required for audit readiness.
A ledger that shows auditors: "Here's the risk this activity creates, here's the control that mitigates it, here's the evidence it's working."
Governance
AI Bias
When AI outputs reflect skewed patterns from its training data – producing responses that may be unfair, discriminatory, or inaccurate for certain groups or contexts. A compliance and reputational risk for client-facing use.
If you only taught someone history from one perspective, their answers would be biased toward that view – even if they're smart and well-meaning.
Governance
FedRAMP
A U.S. government certification for cloud services used to handle federal data. If your clients operate under DFARS/CMMC, any AI tool processing controlled data must meet FedRAMP Moderate or higher. Direct ChatGPT API does not qualify – Azure OpenAI does.
The federal government's vendor vetting process. It's like SOC 2, but stricter – and required for government contract compliance.