The 6-point AI governance framework we apply to every rollout.
Vendor matters less than process. These six controls are the same whether youβre shipping Microsoft Copilot, Anthropic Claude, OpenAI ChatGPT, Google Gemini, or your own self-hosted model. Procurement, security, and legal sign off against this framework β not against vendor logos.
Pair this framework with the case studies at /ai/case-studies: every failure in that gallery violated at least one of the six points below.
- 01
Tenant readiness gate
Before any AI tool ships in your tenant: sensitivity labeling deployed, DLP rules in place, Conditional Access (or equivalent) enforced. We do this audit FIRST, every time.
- 02
IAM-scoped access
Every AI tool gets the narrowest scope possible. SSO + SCIM provisioning. Group-based license assignment. Off-boarding revokes access automatically.
- 03
Data residency mapping
Every prompt + every file uploaded gets a documented data flow. Region, processor, retention, sub-processors. Procurement teams + legal get the diagram.
- 04
Audit logging to your SIEM
Every AI tool that supports it pipes prompts + responses + admin actions to your SIEM. Quarterly review for anomalies + governance drift.
- 05
Quarterly red-team
Prompt injection. Data exfiltration via prompt. Indirect prompt injection via documents. We test the same attack surface a hostile party would, every quarter.
- 06
Kill-switch policy
Any AI tool, any team, can be paused on 24-hour notice if governance incidents surface. The runbook is documented and your CISO has the trigger.
