Insights Β· Field notes from the SOC
Plain-language briefings from the people watching the alerts.
Capabilities Β· what Copilot actually does in 2026

"Copilot" is fourteen surfaces, not one product.
Here is what each one does β€” and where each one falls short.

Buyers ask "what does Copilot do?" and get a marketing answer. Below is the operator-grade answer: what each Copilot surface actually does in 2026, with prompts that work, time savings reported (with vendor caveats noted), and an honest list of where each one falls short. Read the surfaces relevant to your role; skip the rest.

Copilot in Outlook

What it does: Drafts replies in your tone (Copilot learns your voice from sent items), summarizes threads, generates a "Catch me up" morning brief, schedules meetings from natural language, and builds automated meeting-prep one-pagers 15 minutes before each call.

Prompts that work
  • "Summarize the last 20 emails on Project Atlas. Pull out decisions, open questions, and who owes what."
  • "Draft a polite decline to this vendor pitch β€” keep it warm, two sentences."
  • "What did I miss while I was out Mon–Wed? Group by project."
Time savings: Microsoft Work Trend Index 2025 reported ~30 minutes/day saved on email for regular Copilot-in-Outlook users. Forrester TEI modeled 132% three-year ROI for Outlook + Teams Copilot users (vendor-commissioned, directional).
Where it falls short: Hallucinates names and dates from ambiguous threads. Tone-matching degrades on emails over 1,500 words. Cannot summarize threads that branched and merged across multiple subject lines. Will not read shared mailboxes by default.
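Taken at face value, that daily figure compounds meaningfully over a year. A back-of-envelope conversion (the ~250-workday count is our assumption, not part of the Microsoft study):

```python
minutes_per_day = 30      # Work Trend Index 2025 figure for Outlook Copilot users
workdays_per_year = 250   # assumption: typical full-time schedule, minus PTO/holidays

hours_per_year = minutes_per_day * workdays_per_year / 60
print(round(hours_per_year))  # ~125 hours/year per user, before any caveats
```

That is roughly three work weeks per user per year, which is why the vendor ROI models above land where they do — and why the "directional" caveat matters.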

Copilot in Teams

What it does: Speaker-attributed live meeting recap (delivered ~60s after meeting end), intelligent recap with chapter markers and "moments where you were mentioned," 50+ language real-time translation (Wave 3), in-meeting Q&A, channel catch-up ("what did I miss in #atlas-eng this week"), action item extraction directly into Planner/To Do.

Prompts that work
  • "Summarize this meeting for someone who joined at minute 22."
  • "List action items with owners and due dates. Format as a table."
  • "What disagreements came up? Who held which position?"
  • (In channel) "What's the status of the Q3 launch? Pull from the last 30 days."
Time savings: Microsoft Work Trend Index 2025: meeting-heavy roles report ~40% reduction in post-meeting note-taking time. Vodafone (vendor case study, 68,000 licensed users): "1.5 hours saved per employee per week" across Teams + Outlook Copilot.
Where it falls short: Misattributes speech in cross-talk-heavy meetings (4+ active speakers). Translation drops technical jargon and product names. Cannot summarize meetings where transcription was disabled. Channel summaries miss reactions and emoji signals β€” important context in async-first teams.

Copilot in Word

What it does: The killer feature is "draft from existing files" β€” combining multiple SharePoint documents into a single output matching a template tone. Plus: draft from prompt, rewrite tone, summarize across embedded comments, transform format (FAQ, briefing, table). Visual auto-suggest (Wave 2) proposes pull-quotes and table conversions inline.

Prompts that work
  • "Draft a one-page executive summary of /Reports/2025-Q4-Audit.docx. Lead with the three findings rated 'high.'"
  • "Combine /Atlas/RFP-draft.docx with /Atlas/Pricing-v2.docx into a single client-ready proposal. Use the cover-letter format from /Templates/Proposal-cover.docx."
  • "Rewrite the Risks section in active voice. Cut by 40%."
Time savings: Lumen Technologies (Microsoft case study): ~4 hours/week saved per knowledge worker on document creation. KPMG Australia: 30% reduction in time to first draft on advisory deliverables (vendor, directional).
Where it falls short: Cannot maintain consistent style across 40+ page documents. Citations to source docs are sometimes mis-attributed. Tables get reformatted unpredictably on regeneration. Tracked Changes integration is shallow β€” cannot review your editor's redlines and counter-suggest. PULLS FROM ANY DOCUMENT THE USER CAN ACCESS β€” this is the over-permissioning landmine.

Copilot in Excel

What it does: Formula generation, prompt-based data analysis, "explain this spreadsheet," chart generation with rationale, what-if scenarios, Python in Excel integration (GA Q1 2025) running in Anaconda-secured cloud sandbox. Wave 3 added fuzzy-matching deduplication and automated data cleaning.

Prompts that work
  • "Convert this raw export into a pivot summary by region and quarter."
  • "Use Python to fit a seasonal ARIMA model to Sheet1!Revenue and project the next 8 quarters. Plot with confidence intervals."
  • "What's wrong with the formula in cell H42?"
Time savings: Microsoft customer studies cite 8–12 hours/month saved per finance team analyst on routine analysis. Bayer (vendor): analysts shifted "30%+ of routine reporting time" to higher-value work.
Where it falls short: Requires data in a structured Excel Table (Ctrl+T). Misreads merged cells and multi-row headers. Will write a working formula that's subtly wrong on edge cases (volatile functions, array spilling). Cannot reliably reason across multiple sheets in one workbook for complex models. Python in Excel is sandboxed β€” no internet, no custom packages outside Anaconda's curated set.

Copilot in PowerPoint

What it does: Generate deck from a Word doc (the workhorse β€” pulls headings β†’ slide titles, body β†’ bullets, tables β†’ tables). Generate from prompt. Designer suggestions apply brand templates. Speaker notes generation. Slide consolidation. Translation while preserving layout.

Prompts that work
  • "Build a 10-slide deck from /Sales/Atlas-proposal.docx. Use template /Brand/2026-corporate.potx. Skip the appendix sections."
  • "Add speaker notes for slides 4-8 explaining the financial model in plain language."
  • "Make this deck more visual β€” fewer bullets, more imagery."
Time savings: Microsoft customers commonly cite reducing routine internal-deck creation from "3-4 hours to 30-45 minutes" (vendor, directional).
Where it falls short: Designer choices are templatized β€” not for keynote-quality external presentations. Imagery suggestions skew generic stock unless you've published an organizational asset library. Animations and transitions are not generated. Charts pulled from Excel often lose interactive elements.

Copilot Chat (cross-app)

What it does: The cross-app jewel, sold as Microsoft 365 Chat / Business Chat. Cross-app queries grounded in your Graph data (email + Teams + files + calendar + people). Web grounding toggle. Work mode (tenant) vs Web mode (open) switch. File upload for ad-hoc analysis without saving (Wave 3).

Prompts that work
  • "Summarize what's happening on Project Atlas across email, Teams chats, and files in the last 30 days. Highlight anything Marcus has been escalating."
  • "Who at our company knows about Federal contract vehicles?"
  • "Draft a status update for my manager based on my last 2 weeks of activity."
  • "Find all documents about pricing for the Lumen account modified in the past 90 days."
Time savings: Highest-leverage Copilot surface for cross-functional roles (Chiefs of Staff, EAs, project managers, account leads). Hard to quantify β€” the value is "questions you couldn't answer before."
Where it falls short: Search recall is imperfect β€” frequently misses files in Teams channels you haven't visited recently. Cross-app queries can take 30+ seconds. Citations link to sources but don't always quote the relevant span. Work grounding requires the full M365 Copilot license β€” Copilot Chat at $5/user/mo does NOT include tenant grounding.

Copilot Pages

What it does: A collaborative AI-native page that lives in OneDrive. Start with a Copilot Chat answer, hit "Edit in Pages," and it becomes a multi-user collaborative document that retains the AI as a participant. Pages can be referenced as a source in subsequent Copilot prompts (closing the loop). Output ports cleanly to Word / Loop / email.

Use cases that work
  • Project status pages where current state is co-authored by humans + Copilot summaries from Graph data
  • Brainstorming workshops where Copilot acts as a third-party participant
  • Onboarding briefs assembled by HR + Copilot for incoming hires
Time savings: Niche but high-value for the use cases that fit. Adoption-bound β€” most users do not know Pages exists.
Where it falls short: Discoverability β€” sits behind the Chat UI. No mobile parity yet.

Microsoft Researcher (Wave 3)

What it does: A reasoning-mode agent for deep work. Combines OpenAI deep-research reasoning (and, in Frontier-program tenants, Anthropic Claude Opus 4.7 via Cowork) with M365 tenant data + web grounding. Use cases: market research briefs, competitive analyses, structured literature reviews. 5–15 minute query latency. Multi-page output with citations.

Prompts that work
  • "Build a competitive landscape brief on managed XDR providers serving Canadian mid-market. Cite the 5 we mentioned in the last quarterly review and add 3 you find."
  • "Synthesize all internal docs about our Q3 launch plus public market context. Output as a board-ready brief."
Time savings: Compresses what was previously a multi-day research engagement into a 15-minute query + 30-minute human review.
Where it falls short: Latency means it is not a chat-style tool β€” submit and walk away. Citation rigor is good, not Perplexity-grade.

Microsoft Analyst (Wave 3)

What it does: Reasoning-mode agent for data work. Inspects raw CSV/Excel/SharePoint list data, writes Python, runs iteratively, produces a deck-ready narrative with charts. Roughly equivalent in shape to ChatGPT Code Interpreter / Advanced Data Analysis but with M365 grounding.

Prompts that work
  • "Take the file at /Reports/Q4-pipeline.xlsx and find the patterns that explain why our average deal size dropped 14% QoQ."
  • "Run a cohort retention analysis on /Data/customer-events.csv. Output as a chart-by-chart narrative."
Time savings: For analyst-light teams (small finance, small ops), Analyst replaces a contract data scientist for routine ad-hoc analyses.
Where it falls short: Will produce confident narratives from messy data β€” output requires human review for analytical correctness, not just charts.

Copilot Studio

What it does: Low-code agent platform built on Power Platform. 1,500+ pre-built connectors. Bring-your-own-LLM (added 2025; Claude via Frontier program). Triggers (event-based agents). Autonomous agents preview (Wave 3) β€” agents that run on triggers with no human in the loop, governed by guardrails. ~$200/mo per tenant license band, message-based metering.

Prompts that work
  • See /copilot/studio for the seven canonical agent archetypes and the agent factory pattern
Time savings: Custom agents consistently deliver 3–5x the ROI of base Copilot alone in measured production deployments.
Where it falls short: Pricing model is confusing β€” "messages" consumed differently per action type. Authoring complexity is closer to Power Apps than ChatGPT. Quality bottlenecked by source-content quality (the SharePoint hygiene problem again).
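Because messages are consumed at different rates per action type, budgeting works best as a weighted estimate. A minimal sketch — the action names and per-action weights below are invented for illustration; actual rates must come from your Microsoft agreement:

```python
# Hypothetical message-metering estimator for Copilot Studio.
# These weights are made up for illustration, NOT Microsoft's published rates.
MESSAGE_WEIGHTS = {
    "classic_answer": 1,     # scripted topic responses
    "generative_answer": 2,  # LLM-generated responses
    "agent_action": 5,       # autonomous/agentic actions
}

def monthly_messages(counts: dict) -> int:
    """Weighted message total for a month of estimated agent activity."""
    return sum(MESSAGE_WEIGHTS[action] * n for action, n in counts.items())

usage = {"classic_answer": 10_000, "generative_answer": 4_000, "agent_action": 1_500}
print(monthly_messages(usage))  # 10000 + 8000 + 7500 = 25500 metered messages
```

The point of the exercise: autonomous agent actions can dominate the bill even when they are a small fraction of interaction volume.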

Copilot Cowork (Anthropic Claude)

What it does: Microsoft's integration of Anthropic Claude inside the M365 Copilot tenant. Launched March 9, 2026; in Frontier program March 30; included in E7 May 1. In Researcher, choose "Reasoning model: OpenAI o3" or "Anthropic Claude Opus 4.7." In Copilot Studio, model picker for an agent now includes Claude alongside Azure OpenAI.

Prompts that work
  • "In Researcher, run this competitive analysis with Claude Opus 4.7 β€” I want the long-context reasoning Anthropic is known for."
Time savings: Closes the gap on use cases where Claude historically outperformed GPT β€” long-context reasoning, structured outputs, agentic coding. Now available inside the existing Copilot governance perimeter.
Where it falls short: Full Copilot for Word / Excel / Outlook still uses OpenAI models β€” not user-selectable. No fine-tuning of Claude on tenant data through this surface. Data plane is still Microsoft's; Anthropic does not see your tenant data.

Microsoft Security Copilot

What it does: Incident triage, natural-language KQL generation for threat hunting, malware/PowerShell reverse engineering, identity investigations. Compute-unit consumption-priced (~$4/SCU/hour). Production deployments run $2,500–$15,000/mo depending on incident volume.
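The consumption pricing is easiest to reason about as always-on capacity. A rough sanity check (assuming continuously provisioned SCUs at the ~$4/hour rate; real deployments scale provisioning up and down):

```python
scu_hourly_rate = 4.0          # approximate $/SCU/hour from above
hours_per_month = 24 * 30      # always-on assumption

cost_per_scu_month = scu_hourly_rate * hours_per_month
print(cost_per_scu_month)      # 2880.0 per SCU per month
```

At ~$2,880 per always-on SCU, the quoted $2,500–$15,000/mo range corresponds to roughly one to five continuously provisioned SCUs.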

Prompts that work
  • "Show me lateral movement attempts in the last 14 days from compromised service accounts."
  • "Explain what this PowerShell script does."
  • "Triage this incident β€” give me the kill chain and recommended containment."
Time savings: Microsoft randomized study (2024): new SOC analysts 26% faster + 35% more accurate. Bridgewater Associates: "automated 80% of routine SOC analysis" (vendor).
Where it falls short: Quality dependent on Microsoft security stack ownership (Defender + Sentinel + Entra + Purview). Cross-vendor SIEM integration is workable but second-class. Hallucinated KQL syntax requires analyst review.

GitHub Copilot Enterprise

What it does: Multi-tier developer Copilot. Code Completion (~46% of new code in opted-in repos suggested by Copilot per GitHub). Copilot Chat (IDE + github.com). Workspace (task-to-PR agentic). Code Review (automated PR review). Coding Agent (autonomous, async). Multi-model: Claude Sonnet/Opus, Gemini, OpenAI o3, GPT models. $39/user/mo Enterprise tier.

Prompts that work
  • "Generate tests for this function covering the edge cases."
  • "Review this PR β€” focus on concurrency and the new logging path."
  • "Refactor this file to use async/await."
Time savings: GitHub randomized study (2024): developers complete tasks ~55% faster (n=95, narrow controlled task; widely cited but should be quoted carefully). Accenture enterprise study: 90% developer satisfaction.
Where it falls short: Hallucinated APIs in less-popular libraries. Test generation produces brittle, pass-by-default tests. Architectural reasoning is weak β€” won't tell you a service boundary is wrong.
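The brittle-test failure mode is worth making concrete. A hypothetical example (our own, not GitHub's) contrasting a pass-by-default generated test with one that actually exercises the edges:

```python
def parse_port(value: str) -> int:
    """Parse a TCP port string, rejecting out-of-range values."""
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# Brittle: asserts only the happy path the implementation obviously handles.
def test_parse_port_brittle():
    assert parse_port("8080") == 8080

# Better: pins the boundaries where real bugs live.
def test_parse_port_edges():
    assert parse_port("1") == 1
    assert parse_port("65535") == 65535
    for bad in ("0", "65536", "-1"):
        try:
            parse_port(bad)
            assert False, f"expected ValueError for {bad}"
        except ValueError:
            pass
```

Generated suites tend to look like the first test: they confirm the implementation agrees with itself. Reviewing them for boundary coverage is still human work.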

FAQ

Capabilities, in plain English.

Can Copilot read my emails and files automatically?

Copilot reads from the active user's mailbox and any SharePoint, OneDrive, and Teams files the user has access to β€” under their identity, via the Microsoft Graph API. It does NOT access shared mailboxes by default (delegation must be set up explicitly), it does NOT read other users' content, and tenant admins can scope access via Restricted SharePoint Search and Restricted Content Discovery. The behavior is identical to "what would happen if the user clicked through every file they can access" β€” which is exactly why pre-Copilot SharePoint permissions hygiene matters so much.
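Conceptually, the grounding rule reduces to intersecting each document's access list with the requesting user's identity. A toy model of that rule — the data structure and function here are illustrative only, not the Microsoft Graph API (real enforcement happens server-side under the user's token):

```python
# Toy model of "Copilot sees exactly what the user sees."
documents = {
    "/Finance/board-deck.pptx": {"cfo", "ceo"},
    "/Atlas/RFP-draft.docx":    {"alice", "bob"},
    "/Public/handbook.docx":    {"*"},           # org-wide sharing link
}

def visible_to(user: str) -> list:
    """Paths this user could open themselves — and thus what grounds their Copilot."""
    return [path for path, acl in documents.items()
            if user in acl or "*" in acl]

print(visible_to("alice"))  # Atlas doc + org-wide handbook, never the board deck
```

The over-permissioning landmine follows directly: every stale "Everyone" link behaves like that `"*"` entry, and Copilot will happily ground answers in it.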

Does Copilot retain my data or train on it?

No. Microsoft 365 Copilot is covered by Microsoft's Enterprise Data Protection commitments. Tenant data is not used to train foundation models. Prompts and grounding queries are processed within the Microsoft service boundary. With the Cowork integration to Anthropic Claude, Anthropic does not retain or train on tenant data either β€” confirmed in both Microsoft's Frontier program documentation and Anthropic's enterprise terms.

What are the prompts that work best in Copilot?

Microsoft recommends the Goal / Context / Expectations / Source structure for every prompt. Goal: what you want produced. Context: who is the audience and why does this matter. Expectations: format, length, tone constraints. Source: which documents, threads, or data should ground the answer. Vague prompts ("write me an email") return generic output; structured prompts grounded in named sources return work-product-quality drafts. Every successful Copilot deployment maintains a curated organizational prompt library β€” typically 50-200 vetted prompts organized by role.
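The four-part structure is easy to encode as a reusable template for a prompt library. A minimal sketch (the label formatting is our own choice, not a Microsoft-prescribed syntax):

```python
def build_prompt(goal: str, context: str, expectations: str, source: str) -> str:
    """Assemble a Goal/Context/Expectations/Source prompt as plain text."""
    return "\n".join([
        f"Goal: {goal}",
        f"Context: {context}",
        f"Expectations: {expectations}",
        f"Source: {source}",
    ])

print(build_prompt(
    goal="Draft a status update for the Atlas steering committee.",
    context="Audience is VPs; they care about risk and dates, not detail.",
    expectations="Under 200 words, bulleted, neutral tone.",
    source="Ground in /Atlas/status-notes.docx and last week's #atlas-eng chat.",
))
```

Vetted prompts stored in this shape are trivially parameterizable per role, which is how the 50–200-prompt libraries above stay maintainable.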

How does Copilot in Excel actually work?

Copilot in Excel requires data in a structured Excel Table (Ctrl+T to convert a range). Once in a table, prompts can generate formulas, perform data analysis, create charts, run what-if scenarios, and (as of Q1 2025) write Python that runs in a cloud Anaconda sandbox. Common pitfalls: merged cells, multi-row headers, unstructured ranges, and complex multi-sheet workbooks all degrade Copilot's reasoning. The largest unlock for finance teams is Python in Excel β€” fits a seasonal ARIMA, runs a cohort analysis, generates a Plotly chart inline, all without leaving the workbook.

What is Copilot Researcher and how is it different from Copilot Chat?

Copilot Chat is a conversational interface that returns answers in seconds, grounded in tenant data + optional web. Copilot Researcher is a reasoning-mode agent that takes 5-15 minutes per query and produces multi-page outputs with structured citations. Use Chat for "summarize this," "draft that," "find these"; use Researcher for "build me a competitive landscape brief" or "synthesize everything we have on this customer plus public market context." Researcher in Frontier-program tenants can route reasoning to Anthropic Claude Opus 4.7 via Cowork.

Capabilities are the what. Adoption is the work.

Knowing what Copilot can do is necessary but not sufficient. Production value comes from prompt libraries, role-based training, champion networks, and the SharePoint permissions hygiene that keeps the cross-app surfaces from surfacing the wrong content.