Enforce AI policy.
See what is happening.
Prove it to your auditor.
Mandate is a Canadian-built control plane that sits between your people and every AI provider they use. It enforces your configured policies at the point of use and records every decision automatically. The result: a tamper-evident audit trail your auditor can actually verify.
What Mandate produces, for every request
- User: j.smith@legalfirm.ca
- Tool: ChatGPT (chat.openai.com)
- Triggered: SIN pattern · rule SENSITIVE-DATA-001
- Action: Redact (3 fields removed)
- Timestamp: May 5 2026, 09:04:37 EDT
- Correlation: a2f7·9d3e·b1c4·8a00
- Hash: sha256:3f9a·…·b712
Created automatically. Hash-chained. Ready for your auditor.
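As a sketch, one such record might look like the following as structured data. The field names, serialization, and hashing scheme here are illustrative assumptions, not Mandate's actual schema:

```python
import hashlib
import json

# Illustrative audit event; field names are assumptions, not Mandate's schema.
event = {
    "user": "j.smith@legalfirm.ca",
    "tool": "ChatGPT (chat.openai.com)",
    "triggered": "SIN pattern / rule SENSITIVE-DATA-001",
    "action": "redact",
    "fields_removed": 3,
    "timestamp": "2026-05-05T09:04:37-04:00",
    "correlation_id": "a2f7-9d3e-b1c4-8a00",
}

# Canonical serialization (sorted keys, no whitespace) so an auditor
# can recompute the exact same hash from the exported record.
canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
event_hash = "sha256:" + hashlib.sha256(canonical.encode()).hexdigest()
print(event_hash)
```

The point of the canonical serialization is that anyone holding the export can recompute the hash independently, with no special tooling.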
What "chain verified" means
- Each event is SHA-256 linked to the one before it.
- Alter, delete, or insert any record — the chain breaks.
- Signed checkpoints anchor the trail at regular intervals.
- The export alone is enough to verify. No Mandate tooling required.
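The chaining described above can be sketched in a few lines: each event's hash covers the hash before it, so altering, deleting, or inserting any record invalidates every hash that follows. This is an illustrative verifier under assumed serialization rules, not Mandate's export format:

```python
import hashlib
import json

def link(prev_hash: str, event: dict) -> str:
    """Hash of this event, chained to the previous event's hash (SHA-256)."""
    payload = prev_hash + json.dumps(event, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(events: list, hashes: list) -> bool:
    """Recompute the chain from scratch; any edit, delete, or insert breaks it."""
    prev = "0" * 64  # genesis anchor
    for event, recorded in zip(events, hashes):
        prev = link(prev, event)
        if prev != recorded:
            return False
    return True

# Build a small chain, then tamper with one record.
events = [{"seq": i, "action": "allow"} for i in range(3)]
hashes = []
prev = "0" * 64
for e in events:
    prev = link(prev, e)
    hashes.append(prev)

assert verify(events, hashes)       # intact chain verifies
events[1]["action"] = "block"       # alter one record...
assert not verify(events, hashes)   # ...and the chain breaks
```

Signed checkpoints would sit on top of this: a signature over the latest chain hash at regular intervals, anchoring everything before it.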
Questions your current tools cannot answer
- Can you name the last time sensitive data entered ChatGPT, and what your organization did about it?
- Can you show an auditor a structured record of what your AI policy actually enforced this quarter?
- Canadian region. US legal reach. It's not the same thing, and auditors are starting to ask.
- 18% of Canadian organizations have systems in place to govern AI across everyday operations. (IBM Institute for Business Value, May 2026)
- 75% of Canadian workers using AI rely on unsanctioned, consumer-grade tools, not enterprise-approved solutions. (IBM Institute for Business Value, September 2025)
- 57% of enterprise employees have entered high-risk information into publicly available AI assistants. (TELUS Digital, 2025)
How Mandate works
One policy engine. Four outcomes. One record per request.
Mandate sits inline between your users and every AI provider. The decision (allow, warn, redact, or block) happens at the connection layer, before anything reaches the provider.
Your users & apps (browser, API, app) → Mandate Policy Engine (hash-chained audit event written on every decision) → AI providers (ChatGPT, Claude, Copilot, others)
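The four outcomes can be sketched as a single decision function that runs before any request leaves your network. Everything here (the SIN regex, the tool allow-list, the ordering of checks) is an illustrative assumption, not Mandate's actual policy engine:

```python
import re

# Hypothetical rule: Canadian SIN shape (three groups of three digits).
SIN = re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{3}\b")

def decide(prompt: str, tool: str) -> tuple:
    """Return (action, detail): allow, warn, redact, or block."""
    if tool not in {"chatgpt", "claude", "copilot"}:
        return "block", "unsanctioned tool"
    if SIN.search(prompt):
        # Strip the sensitive pattern before anything reaches the provider.
        return "redact", SIN.sub("[REDACTED-SIN]", prompt)
    if "confidential" in prompt.lower():
        return "warn", "possible sensitive content"
    return "allow", prompt

action, detail = decide("Client SIN is 123-456-789", "chatgpt")
print(action)  # prints "redact"
```

In the real product each of these branches would also emit one hash-chained audit event, whatever the outcome.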
What you have at the end
Four capabilities. One governance program.
- Mediation layer: API gateway and network forward proxy connectors route every AI request through Mandate before it reaches any AI provider. No client software is distributed to employees.
- Policy enforcement: Your configured rules apply at the point of use (allow, warn, redact, or block), based on sensitive data patterns, tool usage, and content classification.
- Tamper-evident audit trail: Structured records of every decision: who, what tool, what policy triggered, what action was taken, and when. Hash-chained and signed. The evidence your auditor actually needs.
- Canadian envelope: Infrastructure owned and operated under Canadian law. Not a US cloud's Canadian region: a legal structure that the CLOUD Act and FISA cannot reach.
The pilot program
30 days. Written criteria. No ambiguity at day 30.
One administrator. One afternoon. Nothing deployed to employees. Success criteria agreed in writing before day one. Real traffic. If the pilot doesn't meet your criteria, we'll tell you why.
- Discovery conversation: 30 minutes to understand your environment: AI tools in use, traffic flow, data types, and what success looks like. We'll tell you honestly if Mandate is the right fit at this stage.
- Kickoff and written criteria: Before day one, the connector path, traffic scope, and specific measurable outcomes are agreed in writing. No ambiguity about what success looks like.
- 30 days on real traffic: Policy enforcement and audit logging run live on routed traffic. Your administrator sees policy decisions and audit records from day one. We're available throughout for configuration questions.
- Day 31 evaluation: We evaluate against the agreed criteria. If they're met, we discuss a paid arrangement from day 31. If not, we tell you why. The criteria drive the conversation, not sales pressure.