
The case that changed AI deployment
The dollar amount was almost trivial: $812. A grieving customer had tried to claim a bereavement-fare refund based on what the airline's website chatbot told him. The airline declined the refund, saying the bot's instructions did not match the actual policy. The customer took the dispute to the British Columbia Civil Resolution Tribunal in 2023.
The airline made an argument that, in retrospect, will be remembered as the canonical example of how not to defend an AI deployment. They argued that the chatbot was a "separate legal entity" responsible for its own actions. The tribunal rejected the argument flatly: the chatbot was a service the airline provided on its website, the airline owned what its website said, and the bereavement-fare instructions the chatbot gave were a representation made by the airline.
The customer was awarded the $812 in February 2024.
The legal community noticed immediately. The case was small. The principle was not. Every business considering a customer-facing AI deployment now has to plan around what one MSP partner has called "the rule that 'the AI said it' is not a defense."
Why the case mattered more than the dollar amount
Three things made the ruling consequential:
It was a tribunal ruling, not a press release. The decision sits in the public record as persuasive authority in British Columbia and across Canada. Lawyers in other jurisdictions read it and update their advice. Insurance carriers read it and update their underwriting. The legal posture toward AI shifted in a way it would not have if the airline had quietly settled.
The "separate legal entity" defense was novel and tested. Before this ruling, there was open speculation in legal circles about whether AI outputs could be treated as something akin to a third-party recommendation — the way you might not be liable for misinformation a Wikipedia article gave to your customer. The tribunal closed that question for AI agents you deploy under your own brand. The bot is yours. The output is yours.
The hallucination rate is rising, not falling. Enterprise-grade LLM deployments in customer-facing roles have been measured at roughly 18% hallucination rates in independent testing as of 2026. AI hallucinations contribute to legal exposure in 17-34% of AI-assisted legal workflows according to current SQ Magazine data. The Air Canada-style incident is a category of risk, not a one-off.

The pattern repeats — quickly
In June 2023, two New York personal-injury attorneys filed a federal brief in a case against a Latin American airline that contained six fabricated case citations generated by ChatGPT. They doubled down when the court asked for the underlying opinions, submitting excerpts that were also fabricated. The judge fined the lawyers and their firm $5,000 and dismissed the underlying case on separate grounds. The matter is now part of every legal technology presentation in the country.
In late 2023, a storied US sports magazine was found to be running product-review articles bylined to authors who did not exist, with AI-generated headshots from a stock-portrait site. The publisher blamed a third-party content vendor. The staff union demanded transparency. The brand damage took months to recover from.
In October 2025, a report Deloitte had been paid roughly AU$440,000 to produce for the Australian government was found to contain AI-fabricated citations and references that did not exist. A separate $1.6M Deloitte report for a Canadian provincial government was later found to have similar issues. The press cycle that followed was sustained.
In each case, the organization owned the output. Blaming a vendor did not make the problem go away. The AI was not a defense.
What an AI verification gate actually looks like
The pattern that prevents these incidents is not technical. It is operational.
A verification gate is a documented step in your workflow that requires a human to review, sign off on, and take accountability for any AI-generated output that meets specific criteria. The criteria typically include the following (a minimal sketch of the check follows the list):
- Any output that will be sent to a customer.
- Any output that will be filed with a regulator or court.
- Any output that will be published under your brand.
- Any output that creates a binding representation (a price quote, a policy interpretation, a legal opinion, a medical recommendation).
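In code, that check can be very small. The sketch below is illustrative rather than a standard: the field names are assumptions about metadata your workflow already carries, and when the answer is unclear the default should be that the gate applies.

```python
from dataclasses import dataclass

@dataclass
class OutputContext:
    # Hypothetical metadata attached to a piece of AI-generated output.
    sent_to_customer: bool = False
    filed_with_regulator_or_court: bool = False
    published_under_brand: bool = False
    binding_representation: bool = False  # price quote, policy interpretation, etc.

def requires_gate(ctx: OutputContext) -> bool:
    """True if a named human must review and sign off before this output goes out."""
    return any((
        ctx.sent_to_customer,
        ctx.filed_with_regulator_or_court,
        ctx.published_under_brand,
        ctx.binding_representation,
    ))

print(requires_gate(OutputContext(sent_to_customer=True)))  # True
print(requires_gate(OutputContext()))                       # False: internal-only draft
```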
The gate has three components: the AI produces a draft, a human with subject-matter authority reviews and approves it, and that approval is logged in a way that creates an evidence trail.
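The evidence trail is the part teams most often skip, and it is the part a tribunal will ask about. A minimal sketch of what that logging can look like follows; the names (GateRecord, record_decision, the JSONL file path) are assumptions, not a product API. The point is only that every approval captures who approved exactly which text, and when, in a form you can produce later.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

AUDIT_LOG = "ai_gate_audit.jsonl"  # illustrative path; use your own evidence store

@dataclass
class GateRecord:
    draft_id: str
    content_sha256: str  # hash of the exact text that was approved
    reviewer: str        # named human with subject-matter authority
    decision: str        # "approved" or "rejected"
    reason: str          # required for rejections, optional for approvals
    reviewed_at: str     # UTC timestamp, ISO 8601

def record_decision(draft_id: str, draft_text: str, reviewer: str,
                    approved: bool, reason: str = "") -> GateRecord:
    """Log a human gate decision and return the evidence record."""
    record = GateRecord(
        draft_id=draft_id,
        content_sha256=hashlib.sha256(draft_text.encode("utf-8")).hexdigest(),
        reviewer=reviewer,
        decision="approved" if approved else "rejected",
        reason=reason,
        reviewed_at=datetime.now(timezone.utc).isoformat(),
    )
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record

# Nothing leaves the building until a named human has produced a record like this:
# record_decision("ticket-48211", draft_text, reviewer="j.alvarez", approved=True)
```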
For high-volume workflows, the gate can be partially automated. Examples:
Customer service chatbot. AI generates the response. The response is shown to a human agent who clicks "approve and send" before the customer sees it. The agent's identity is logged. For Tier-1 questions where the answer is well-grounded in canonical FAQ documents, the agent approval can be batched (review-and-approve in groups of 10 with one click). For anything that interprets a refund policy or makes a financial commitment, the approval is per-message.
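One way to sketch that routing decision is below. The keyword list is a stand-in for whatever classifier or policy engine you actually use, and the rule should err toward per-message review whenever it is unsure.

```python
# Illustrative routing rule for the chatbot example above. BINDING_TERMS is a
# placeholder list; a real deployment would use a proper classifier plus policy rules.
BINDING_TERMS = ("refund", "credit", "waiver", "fare", "price", "discount",
                 "policy", "guarantee", "compensation")

def review_mode(draft_reply: str, grounded_in_faq: bool) -> str:
    """Return 'per_message' or 'batched' human-approval mode for an AI draft."""
    text = draft_reply.lower()
    makes_commitment = any(term in text for term in BINDING_TERMS)
    if makes_commitment or not grounded_in_faq:
        return "per_message"  # financial or policy representation: approve individually
    return "batched"          # well-grounded Tier-1 answer: eligible for group approval

print(review_mode("You can reset your password from the login page.", True))
# -> batched
print(review_mode("Per our bereavement policy, you can apply for a refund within 90 days.", True))
# -> per_message
```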
Legal brief drafting. AI generates the draft. The supervising attorney verifies every case citation against the actual case (or the verification is done by a paralegal whose work is signed off by the attorney). The verification is documented in the matter file before the brief is filed.
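If you want that verification step to leave a machine-readable trail, a crude sketch like the one below can flag citations nobody has signed off on yet. The regex and the example citations are placeholders; the real work is still a human reading each cited case and recording it in the matter file.

```python
import re

# Crude placeholder pattern for "volume Reporter page" citations (e.g. "123 F.3d 456").
CITATION_PATTERN = re.compile(r"\d+\s+(?:[A-Za-z][\w.]*\s+){1,3}\d+")

def unverified_citations(draft: str, verified: set[str]) -> list[str]:
    """Return citations found in the draft that no human has verified yet."""
    found = set(CITATION_PATTERN.findall(draft))
    return sorted(found - verified)

draft = "See 123 F.3d 456 and 987 U.S. 654 for the controlling standard."
verified = {"123 F.3d 456"}  # confirmed by the supervising attorney against the real opinion
print(unverified_citations(draft, verified))
# -> ['987 U.S. 654']  (must be verified or cut before the brief is filed)
```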
Marketing copy under a byline. AI generates the draft. A named editor reviews and signs off. The byline is the editor's, not "AI" or a fabricated name. If you cannot stand behind the work product under a real human's name, do not publish it.
The gate slows the workflow. That is the point. The Air Canada precedent says the slowdown is now the cost of doing business. Many deployments are still cheaper than the alternative even with the gate in place: a customer service chatbot that drafts 30% of replies, each approved by a human, still recovers most of the handling time on those tickets, because approving a grounded draft is far faster than writing one from scratch. The deployments that fail are the ones that try to skip the gate to maximize the savings.
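A back-of-the-envelope model of that tradeoff, with every input an assumption to swap for your own numbers:

```python
# All figures below are assumptions for illustration: ticket volume, handle time,
# review time for an AI draft, and loaded labor cost. Replace them with your own.
tickets_per_month = 10_000
minutes_to_write_reply = 6.0         # fully human-handled ticket
minutes_to_approve_draft = 1.5       # human reviews and approves an AI draft
deflection_rate = 0.30               # share of tickets the AI can draft acceptably
loaded_cost_per_agent_minute = 0.75  # dollars

baseline = tickets_per_month * minutes_to_write_reply * loaded_cost_per_agent_minute
gated = (
    tickets_per_month * deflection_rate * minutes_to_approve_draft
    + tickets_per_month * (1 - deflection_rate) * minutes_to_write_reply
) * loaded_cost_per_agent_minute

print(f"Baseline labor cost:    ${baseline:,.0f}/month")
print(f"With gated AI drafting: ${gated:,.0f}/month")
print(f"Savings with the gate:  ${baseline - gated:,.0f}/month "
      f"({(baseline - gated) / baseline:.0%})")
```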

The contract clauses procurement now wants
The Air Canada precedent has changed what enterprise procurement teams look for in contracts with AI vendors. The five clauses we now see consistently in 2026 RFPs:
1. No-training contractual commitment. The vendor commits in writing that customer prompts and outputs are not used to train any model. This is standard on enterprise tiers (Microsoft Copilot Enterprise, ChatGPT Enterprise, Claude for Work, Gemini for Workspace). It is NOT standard on free tiers — which is part of why Shadow AI matters.
2. Data residency attestation. The vendor commits to specific geographic regions for prompt processing and storage. Required for healthcare, legal, financial services, and any enterprise serving EU customers.
3. Liability for hallucination-driven harm. This is the controversial one. Vendors push back. Buyers push for it anyway. The compromise that often lands: the vendor accepts liability for outputs that violate their published accuracy claims, capped at the contract value. Most AI vendor contracts in 2024 had blanket "we are not liable for AI outputs" clauses; sophisticated buyers in 2026 are no longer signing them.
4. Audit rights against training data and guardrails. The right to audit (or to receive third-party audit reports for) the vendor's training data sources, model evaluation methodology, and safety guardrail implementation.
5. Incident notification and cooperation. The vendor commits to notifying you within a specified window (typically 72 hours) of any security incident affecting your data, and cooperating with your incident response. SOC 2 Type II reports are usually the basis for the rest of the security claims.
These five clauses are now table stakes for sophisticated B2B AI procurement. If your vendor will not negotiate any of them, you are signing a contract that the Air Canada precedent has rendered uncomfortable.
What this means for your AI deployment
The actionable takeaways:
- Map every AI surface that touches a customer or external party. Chatbots, automated email, AI-generated reports, AI-augmented support tickets, AI-drafted contracts. Each one is a potential Air Canada incident.
- Implement verification gates before launch, not after. The gate is not optional. The cost of the gate is the cost of the deployment. Build it in.
- Do not use free-tier consumer AI for any business-critical output. Pay for enterprise tiers with no-training contracts. The cost difference is trivial; the legal difference is enormous.
- Update your AI vendor contracts. Push for the five clauses above on every renewal. Walk away from contracts that include blanket "we are not liable" clauses.
- Update your cyber insurance. As of January 2026, several major carriers introduced absolute AI exclusions in standard cyber policies. You may need an affirmative AI rider. Check your policy.
The work, and the offer
The free 90-minute IT health check we run for prospective clients includes an AI verification-gate assessment: a survey of your customer-facing AI surfaces, a contract-clause review for your top three AI vendors, and a recommendation on whether your current cyber insurance covers what you are deploying. Yours to keep either way.
The full picture of how we govern AI across vendors lives at /ai/governance. The case-study gallery — including the airline chatbot ruling, the lawyers fined for hallucinated cases, and nine other documented failures — is at /ai/case-studies.
The Air Canada ruling did not invent AI liability. It just established that "the AI said it" is no defense. Anyone deploying customer-facing AI in 2026 needs to plan around that as the operating reality. The good news is that the playbook is well-understood. The bad news is that nobody is going to ship the playbook for you — you have to do the work.




