
The problem you cannot see
In March 2023, three engineers at a major Korean electronics manufacturer pasted proprietary source code, internal meeting notes, and chip database content into a public consumer LLM across three separate incidents within roughly a month. The company banned generative AI on corporate devices in May 2023 and began building an internal alternative.
The story is famous in IT circles. What is less appreciated is that it is also typical. Two years later, IBM's 2025 Cost of a Data Breach Report found that 97% of organizations that suffered an AI-related breach lacked proper AI access controls. The Samsung incident did not stop the pattern. It defined it.
Shadow AI — the use of unauthorized AI tools by employees, on personal accounts, with corporate data — is the single largest blind spot in mid-market security in 2026. And conventional DLP is not catching most of it.
The numbers your CISO does not want to hear
These figures triangulate IBM, Gartner, MIT NANDA, and Netskope research published across 2025 and early 2026:
- 68% of employees use unauthorized AI tools at work (Gartner, across 500 companies).
- Workers at 90% of companies use chatbots regularly, most of them hiding it from IT (MIT GenAI Divide research, summarized in Fortune).
- Only 34% of AI tool usage runs through approved enterprise accounts. The rest is personal logins on free tiers.
- 73.8% of ChatGPT accounts touching corporate data are non-corporate accounts lacking enterprise privacy controls.
- 38% of employees have shared sensitive information with AI tools without employer permission.
- Average cost of a shadow-AI-related breach: $4.63M — $670K above the standard breach baseline (IBM 2025 Cost of a Data Breach).
- Average detection time for shadow-AI breaches: 247 days — because nobody is looking.
If you sat in a board meeting and said "we carry a security exposure that costs us roughly $4.6 million per incident, goes undetected for eight months, and is actively fed by 38% of our employees," the conversation would change.
That is the actual situation.

Why DLP is not catching this
Conventional Data Loss Prevention (DLP) was designed for a specific threat: an employee deliberately or accidentally sending a sensitive file (a customer list, a contract, a financial spreadsheet) outside the company perimeter via email or a file-sharing service.
Shadow AI breaks every assumption in that model.
The data is not a file. It is a chat message. The DLP rule that flags an Excel attachment with PII does not flag a paste-into-a-textarea on chat.openai.com.
The destination is not blacklisted. chatgpt.com, claude.ai, gemini.google.com, perplexity.ai are all legitimate websites your DLP cannot block without breaking productivity.
The user is not malicious. They are efficient. They want to clean up a customer list, summarize a long email thread, draft a proposal, debug a snippet of code. The DLP that flags them as a threat will be turned off by the team manager who needs them productive.
The data exits via the user's personal account, not the corporate one. An employee on their phone hotspot, on their personal laptop, pasting a client roster into ChatGPT from a personal Gmail-linked account, never touches your corporate network. Your DLP literally cannot see it.
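To make the failure mode concrete, here is a toy sketch. Nothing in it is any vendor's actual rule engine; the rule, the payloads, and the endpoint URL are invented for illustration, but the asymmetry is real:

```python
import json
import re

# A caricature of attachment-centric DLP: flag outbound file attachments whose
# content matches a PII pattern. Everything that is not a named file passes.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
FLAGGED_EXTENSIONS = (".xlsx", ".csv", ".docx", ".pdf")

def legacy_dlp_flags(event: dict) -> bool:
    """Inspect only named file uploads; everything else is invisible to the rule."""
    attachment = event.get("attachment")
    if attachment and attachment["name"].endswith(FLAGGED_EXTENSIONS):
        return bool(SSN_PATTERN.search(attachment["content"]))
    return False  # no attachment -> nothing this rule knows how to inspect

# Path 1: the classic threat model. Spreadsheet with PII attached to an email. Caught.
email_event = {"attachment": {"name": "customers.csv",
                              "content": "Jane Roe,123-45-6789"}}

# Path 2: the same PII pasted into a chat textarea, shipped as a JSON POST body. Missed.
chat_event = {"url": "https://chat.example.com/backend-api/conversation",
              "body": json.dumps({"prompt": "Clean this up: Jane Roe,123-45-6789"})}

print(legacy_dlp_flags(email_event))  # True  -> blocked or alerted
print(legacy_dlp_flags(chat_event))   # False -> sails straight through
```

Same data, same destination outside the perimeter, opposite outcomes. That is the whole gap in two function calls.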
The McDonald's McHire example: shadow vendor selection
In June 2025, the McHire chatbot used to screen McDonald's job applicants exposed roughly 64 million applicant records. The breach happened because an admin account was protected by the password "123456."
The interesting governance failure is not the password. It is how the chatbot vendor was selected. The decision was made by the franchising organization without a formal vendor risk review. Nobody from corporate IT ran a security questionnaire. Nobody from legal reviewed the data-handling clauses. Nobody from the broader infosec function pen-tested the vendor's admin surface.
That is also shadow AI — the slow, unsanctioned creep of AI vendors into your environment via budget that bypasses procurement. Marketing buys an AI copywriting tool. Sales buys an AI prospecting tool. HR buys an AI interview-screening tool. Each on departmental budget, none routed through IT vendor risk.
A 2026 CIO survey found that roughly half of unsanctioned AI tools in mid-market environments were introduced by line-of-business leaders, not by individual employees. The C-suite is contributing to the problem they are afraid of.

The five-step shadow-AI playbook
We run this playbook with every client we onboard. It is not glamorous. It works.
1. Discovery — find the AI you don't know about
Pull two weeks of DNS logs and proxy logs. Search for the domains: chatgpt.com, openai.com, anthropic.com, claude.ai, gemini.google.com, bard.google.com, perplexity.ai, copilot.microsoft.com, character.ai, midjourney.com, runwayml.com, suno.com, ideogram.ai, pika.art, replicate.com, huggingface.co, ollama.com, lmstudio.ai.
Aggregate the results. If your environment has 100 employees, expect to find traffic from 60-80 of them. The list is your starting position.
Pair this with a workforce survey: "Do you use AI tools at work? Which ones? On which accounts?" Anonymize aggressively to get honest answers. Compare the survey results to the DNS logs. The delta is the visibility gap.
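A minimal sketch of the log-scan half of this step, assuming your resolver or proxy can export queries as a CSV with timestamp, client, and domain columns (the filename and column names here are placeholders; adapt the parsing to whatever your stack actually emits):

```python
import csv
from collections import Counter

# Domains from the playbook list above; extend as new tools appear.
AI_DOMAINS = {
    "chatgpt.com", "openai.com", "anthropic.com", "claude.ai",
    "gemini.google.com", "bard.google.com", "perplexity.ai",
    "copilot.microsoft.com", "character.ai", "midjourney.com",
    "runwayml.com", "suno.com", "ideogram.ai", "pika.art",
    "replicate.com", "huggingface.co", "ollama.com", "lmstudio.ai",
}

def matches_ai_domain(domain: str) -> bool:
    """True if the queried name is an AI domain or any subdomain of one."""
    domain = domain.rstrip(".").lower()
    return any(domain == d or domain.endswith("." + d) for d in AI_DOMAINS)

hits_per_client: Counter = Counter()
domains_per_client: dict[str, set] = {}

with open("dns_log.csv", newline="") as f:  # hypothetical two-week export
    for row in csv.DictReader(f):  # expects timestamp,client,domain columns
        if matches_ai_domain(row["domain"]):
            hits_per_client[row["client"]] += 1
            domains_per_client.setdefault(row["client"], set()).add(row["domain"])

for client, hits in hits_per_client.most_common():
    print(f"{client}: {hits} AI queries across {sorted(domains_per_client[client])}")
```

Set the per-client output against the survey answers. A client with heavy AI traffic and a "none" survey response is the visibility gap made concrete.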
2. Sanction the obvious
For the AI tools your employees are clearly using productively, get an enterprise license: ChatGPT Enterprise, Claude Team or Enterprise, Microsoft 365 Copilot. Whatever your team is reaching for on personal accounts, pay for the corporate version with the no-training contract clause. The unit economics are dramatically better than the cost of the breach you are otherwise inviting.
The sanctioned tools need to be at least as usable as the shadow ones, or employees will keep using the shadow ones. This is the lesson Samsung learned the hard way.
3. Block what you cannot sanction
For AI tools that fail your vendor risk review (free-tier Character.AI on a corporate device, random Hugging Face Spaces apps, a sketchy "AI summarizer" Chrome extension), block them at the proxy. Communicate why — the policy is not "no AI." It is "AI through approved tools only."
4. Wire AI-aware DLP
Modern DLP tools (Microsoft Purview, Netskope, Zscaler, Forcepoint, etc.) now have AI-specific modules that inspect HTTP POST bodies to known AI service endpoints, flag PII / PCI / PHI / source code patterns, and either block submission or alert. Deploy this for your sanctioned AI tools. It will not catch personal-device shadow AI, but it dramatically tightens the corporate-device path.
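The vendor rule engines are proprietary, but the core inspection step they perform looks roughly like this sketch. The detector patterns and the block threshold below are invented placeholders, not any product's shipped rules; real deployments tune far richer classifiers against their own data classes:

```python
import re

# Illustrative detectors for a POST-body inspector bound for a known AI endpoint.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "source_code": re.compile(r"\b(def |class |function |import |#include)\b"),
}

def inspect_post_body(body: str) -> dict:
    """Classify a request body and return a verdict: 'allow', 'alert', or 'block'."""
    matched = [name for name, pattern in DETECTORS.items() if pattern.search(body)]
    if {"ssn", "credit_card", "aws_key"} & set(matched):
        verdict = "block"   # hard-stop on regulated data and secrets
    elif matched:
        verdict = "alert"   # log and notify, let productivity continue
    else:
        verdict = "allow"
    return {"matched": matched, "verdict": verdict}

print(inspect_post_body("Summarize this thread for me"))
# {'matched': [], 'verdict': 'allow'}
print(inspect_post_body("Debug this: def charge(card): ... 4111 1111 1111 1111"))
# {'matched': ['credit_card', 'source_code'], 'verdict': 'block'}
```

The design choice worth copying: block only on the unambiguous classes (card numbers, keys), alert on the fuzzy ones (source code), so the control never becomes the productivity tax that gets it switched off.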
5. Train, audit, repeat
A written AI acceptable-use policy that gets signed in HR onboarding. Quarterly refreshers. A documented incident-response runbook for "an employee just told me they pasted client data into ChatGPT." A semi-annual audit of approved-tool usage versus shadow-tool DNS traffic. A line in your cyber insurance renewal questionnaire that you can answer "yes" to.
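The semi-annual audit is mechanical once steps 1 and 2 exist. A sketch, assuming you keep the sanctioned list as a simple set and re-run the step 1 log scan each cycle (the domain sets below are illustrative):

```python
# Semi-annual audit: observed AI domains (from the step 1 scan) vs. the sanctioned list.
SANCTIONED = {"chatgpt.com", "copilot.microsoft.com"}   # whatever you licensed in step 2
observed = {"chatgpt.com", "claude.ai", "perplexity.ai",
            "copilot.microsoft.com", "character.ai"}     # output of the step 1 scan

shadow = observed - SANCTIONED   # unsanctioned tools still in use -> investigate
unused = SANCTIONED - observed   # paid-for tools nobody touches -> reclaim licenses

print(f"Shadow AI still in use: {sorted(shadow)}")
print(f"Sanctioned but idle:    {sorted(unused)}")
```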
This is not theatrical compliance. It is what your insurance carrier is starting to require.
The cyber insurance shift you might have missed
The 2026 cyber insurance market has explicitly split AI out as a separate risk class. The Insurance Services Office introduced new AI-related exclusions effective January 2026, and several major carriers added absolute AI exclusions in 2026 renewals, meaning standard cyber policies no longer cover AI incidents at all unless you carry an affirmative AI rider.
The questions on your renewal questionnaire now include things like "Do you have an AI acceptable-use policy?" and "Do you have technical controls preventing employees from sending sensitive data to public LLMs?" Answering "no" or leaving them blank measurably moves your premium and, increasingly, gates coverage entirely.
Cowbell launched its Prime One product specifically for $250M-$1B revenue mid-market in early 2026, with affirmative AI coverage and up to $10M limits. Insurance Business has been calling AI liability "the new cyber for SMEs" — meaning it is about to follow the same trajectory cyber did: niche, then standard, then mandatory, then expensive.
The shadow-AI playbook above is no longer optional. It is the prerequisite for keeping your insurance.
The work, and the offer
The free 90-minute IT health check we run for prospective clients includes a shadow-AI baseline assessment: a sample of your DNS logs against the AI-domain list, a structured survey of your employees, and a gap report against the five-step playbook. Yours to keep either way.
The full picture of how we govern AI across vendors lives at /ai/governance. The case-study gallery — including the McDonald's McHire example, the Samsung leak, and seven other failures — is at /ai/case-studies.
Shadow AI is the threat your DLP is not catching. The good news is that you can fix it. The better news is that you have to — your insurance carrier is about to make sure of it.