FAQ
Frequently asked questions
Honest answers to the questions we hear most from security, compliance, legal, and IT leaders at Canadian organizations. Something not covered here? Get in touch.
Doesn't using an Azure or AWS Canadian region solve the data sovereignty problem?
No. A Canadian region means your data is physically stored in Canada. It doesn't change the legal jurisdiction of the company operating that infrastructure.
Amazon, Microsoft, and Google are US-headquartered companies. The CLOUD Act (2018) allows US authorities to compel US companies to produce data regardless of where that data is physically stored. AWS ca-central-1 is in Montreal. Amazon is still a US company. A US court order can reach that data.
Mandate runs on Canadian-owned infrastructure under Canadian legal jurisdiction. That's a structural difference, not a regional preference. Your privacy officer and counsel can evaluate the legal significance directly; we're not providing legal advice, but the legal profile here is structurally different from a Canadian region on a US cloud.
What does a policy enforcement decision actually look like in practice?
A concrete scenario: an employee at an accounting firm opens ChatGPT and pastes a client spreadsheet containing Social Insurance Numbers and financial projections to ask for a summary.
With Mandate routing that traffic:
- The request reaches the Mandate Policy Engine before it reaches OpenAI.
- The sync scan detects SIN and financial data patterns matching your configured rules.
- Depending on your policy, Mandate either redacts the sensitive fields before forwarding the request, warns the employee and logs the event, or blocks the request with an explanation.
- The interaction is logged: who, what tool, what policy was triggered, what action was taken, and when.
- The employee continues working. If the request was redacted, the AI still responds based on the sanitized prompt; if it was blocked, the employee sees a policy-compliant message explaining what happened.
- Your admin sees the event in the dashboard within seconds.
When your privacy officer or auditor asks "what actually happened with AI and sensitive client data this quarter," you have a timestamped, structured record, not a policy PDF and a guess.
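For the technically inclined, here is a minimal sketch of the kind of check the sync scan performs, written in TypeScript. The pattern, the action names, and the `decide` helper are illustrative only, not Mandate's actual rule engine; the one factual anchor is that Canadian SINs are nine digits validated with a Luhn checksum.

```typescript
// Illustrative only: not Mandate's implementation.
type PolicyAction = "allow" | "warn" | "redact" | "block";

// Canadian SINs are nine digits and pass a Luhn checksum.
function luhnValid(digits: string): boolean {
  let sum = 0;
  for (let i = 0; i < digits.length; i++) {
    let d = Number(digits[digits.length - 1 - i]);
    if (i % 2 === 1) {
      d *= 2;
      if (d > 9) d -= 9;
    }
    sum += d;
  }
  return sum % 10 === 0;
}

function containsSin(prompt: string): boolean {
  const candidates = prompt.match(/\b\d{3}[- ]?\d{3}[- ]?\d{3}\b/g) ?? [];
  return candidates.some((c) => luhnValid(c.replace(/\D/g, "")));
}

// The action configured for the matched category decides what happens next.
function decide(prompt: string, configuredAction: PolicyAction): PolicyAction {
  return containsSin(prompt) ? configuredAction : "allow";
}

console.log(decide("Summarize: SIN 046-454-286, revenue $1.2M", "redact")); // "redact"
```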
We already tell employees not to use AI with sensitive data. Isn't that enough?
Policy documents create obligations. They don't produce enforcement or evidence.
The practical test: if a regulator, auditor, or opposing counsel asked you today to produce a record of every time an employee sent sensitive data to an AI tool in the last 90 days, and what your organization did about it, could you? If the answer is no, you have a policy but not a program.
Mandate doesn't replace your AI policy. It gives your policy a mechanism and a record.
Can't we just build this ourselves?
Some organizations can. Most underestimate what "this" actually involves: DLP rule maintenance across model provider API changes, accurate token and content logging without impacting latency, multi-tenant policy isolation, audit retention aligned to counsel-reviewed schedules, and ongoing updates as new AI tools emerge.
Mandate is purpose-built and maintained for that job. If your team's time is better spent on your core product or services, that's the practical reason to use Mandate rather than build it.
What AI tools does Mandate cover?
Any AI tool you route through Mandate: ChatGPT, Claude, Copilot, Gemini, and other AI APIs your organization uses. Coverage is directly tied to what you route through Mandate's connectors: an API gateway path for application and developer traffic, or a network forward proxy for browser-based AI tools across the organization.
We're honest about scope: Mandate enforces and logs traffic it sees. Coverage expands as you route more traffic through it. We'll tell you clearly what a specific deployment covers and what it doesn't.
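To make the API gateway path concrete: most OpenAI-compatible clients let you override the base URL, which is how application and developer traffic gets routed through a governance gateway with a configuration change rather than a rewrite. A hedged sketch follows; the gateway hostname is hypothetical, and your deployment documentation defines the real endpoint and any required headers.

```typescript
import OpenAI from "openai";

const client = new OpenAI({
  // Point application traffic at the governance gateway instead of the
  // provider directly; the gateway forwards requests it allows.
  baseURL: "https://ai-gateway.example.internal/v1", // hypothetical hostname
  apiKey: process.env.OPENAI_API_KEY,
});

const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Summarize this quarter's pipeline." }],
});

console.log(completion.choices[0].message.content);
```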
What about employees using AI on personal devices or outside our network?
Mandate doesn't govern AI traffic it doesn't see. If an employee uses a personal device on a personal network, that traffic won't route through Mandate unless your network policy is configured to cover it.
The forward proxy path covers devices and networks you control: company-managed devices, the office network, and VPN-routed traffic. The API gateway path covers application and developer traffic that uses your configured API credentials.
Most organizations find that the majority of their sensitive-data risk comes from AI use on managed devices during work hours. That's where Mandate's coverage is real, measurable, and deployable in a pilot. We discuss the exact scope in the discovery conversation and put it in the written pilot criteria. If your threat model requires governing personal-device AI use, we'll tell you upfront whether the current deployment model addresses it.
Does Mandate work with Microsoft 365 Copilot?
Microsoft 365 Copilot presents a specific challenge: it's integrated deeply into M365 apps (Word, Excel, Teams, Outlook), and that traffic doesn't flow through a standard HTTPS proxy the same way browser-based ChatGPT use does.
Mandate's forward proxy path covers browser-based AI tools — ChatGPT, Claude, Gemini, and AI accessed via browser. For M365 Copilot specifically, coverage depends on whether your Microsoft tenant traffic routes through Mandate's proxy, which varies by your M365 configuration and network architecture. For developer and API traffic (GitHub Copilot via API, Azure OpenAI Service API calls), the API gateway path covers that directly.
M365 Copilot coverage is one of the first things we ask about in the discovery conversation. If complete Copilot governance is a requirement, we'll tell you exactly what it takes to achieve it — or where the current limits are — before you commit to a pilot.
What do employees see when their request is blocked or redacted?
Redact: The request is forwarded with sensitive fields removed. The employee receives the AI provider's response based on the sanitized prompt. Depending on your policy configuration, they receive a notification that content was removed before forwarding.
Block: The request is stopped. The employee receives a configurable policy-compliant message explaining that the request was blocked and which category of content triggered the rule. They're not left with a cryptic error.
Warn: The request is forwarded and logged. The employee receives a notification that the interaction was flagged against a policy rule. The intent is a visible accountability moment without disrupting their work.
Mandate's defaults are designed to be transparent with employees about what happened and why, without exposing internal rule logic. The specific notification messages are configurable. Most organizations find that the warn outcome — where employees are informed their AI use was flagged — creates a behavior change effect beyond what blocking alone achieves.
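As an illustration of how those outcomes map to configuration, here is a hypothetical rule shape in TypeScript. The field names and schema are ours for the example, not Mandate's actual configuration format.

```typescript
// Hypothetical rule shape, for illustration only.
type PolicyRule = {
  id: string;
  category: "sin" | "financial" | "phi" | "custom";
  action: "warn" | "redact" | "block";
  // Shown to the employee; explains the category without exposing rule logic.
  employeeMessage: string;
};

const rules: PolicyRule[] = [
  {
    id: "sin-outbound",
    category: "sin",
    action: "redact",
    employeeMessage:
      "Social Insurance Numbers were removed from this request before it was sent.",
  },
  {
    id: "financial-projections",
    category: "financial",
    action: "warn",
    employeeMessage:
      "This request was flagged under the financial data policy and has been logged.",
  },
];
```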
Do we need to replace our existing security stack?
No. Mandate is purpose-built for AI traffic governance and sits alongside your existing web security, DLP, and SIEM tools. It's not a Secure Web Gateway or SASE product for all enterprise traffic. It's the enforcement and audit layer for AI-related traffic you route through it.
Mandate is designed to export structured audit events to SIEM and webhook endpoints, so it fits into existing security operations rather than requiring them to change.
Does Mandate make us compliant with PIPEDA or provincial privacy laws?
Mandate provides technical controls that support your compliance program: policy enforcement, logging, and a verifiable audit trail. It doesn't substitute for legal advice, and we won't tell you that deploying Mandate makes you "PIPEDA compliant." Compliance is a legal determination your counsel and auditors make, not a software feature.
What Mandate does: it gives your counsel and auditors something concrete to evaluate. That's a meaningful contribution to a compliance program. It's not the program itself.
Is Mandate relevant if we're facing a CPAB inspection, OSFI review, PHIPA audit, or Law Society requirement?
These are exactly the situations where what Mandate produces matters most. CPAB inspections, OSFI reviews, PHIPA audits, and Law Society inquiries increasingly ask about AI tool usage and data handling. The common thread is demonstrable control: not a policy document, but evidence that the policy was enforced and that you know what happened.
Mandate produces the structured, timestamped audit trail that answers those questions directly: which AI tools were used, by whom, on what data categories, what policy rules triggered, what action was taken, and when. The records are hash-chained and verifiable as unaltered after the fact.
We're not compliance certifiers and we won't tell you that Mandate satisfies a specific regulatory obligation. What we'll tell you is that the audit trail Mandate produces is designed to answer the questions these bodies ask. Your counsel and auditors evaluate legal adequacy; Mandate gives them something concrete to work with.
Our compliance program needs to demonstrate AI governance to a client's legal team or auditor. What does Mandate actually produce?
Mandate produces structured, exportable audit records in JSON format. Each record contains: the user who made the request, the AI tool used, the timestamp, the correlation ID, the policy rule that triggered, the decision outcome (allow / warn / redact / block), the redacted field identifiers if applicable, and the hash chain link for tamper verification.
For a compliance engagement or legal questionnaire, you can export the audit log for any date range, filtered by user, tool, or outcome. The exported records are machine-readable and structured, designed for review by counsel, auditors, or client compliance teams, not just internal IT.
The hash chain means the records can be verified as unaltered: any modification of an audit event breaks the chain. If a client's legal team asks whether the records can be relied on as a contemporaneous, unmodified log, that is the architecture that supports your answer.
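To show what "verifiable as unaltered" means mechanically, here is a sketch of an offline chain check over an exported log. The field names and the exact canonicalization and hash scheme are assumptions for illustration; the export documentation defines the real ones.

```typescript
import { createHash } from "node:crypto";

// Assumed record shape, mirroring the fields described above.
type AuditEvent = {
  correlationId: string;
  user: string;
  tool: string;
  timestamp: string;
  decision: "allow" | "warn" | "redact" | "block";
  rule: string | null;
  redactedFields: string[];
  prevHash: string;
  hash: string;
};

function eventHash(e: AuditEvent): string {
  // Hash the event content (which includes the previous link), excluding its own hash.
  const { hash: _omit, ...content } = e;
  return createHash("sha256").update(JSON.stringify(content)).digest("hex");
}

// Any modification or removal of an event breaks the chain.
function chainIntact(events: AuditEvent[]): boolean {
  return events.every(
    (e, i) =>
      e.hash === eventHash(e) && (i === 0 || e.prevHash === events[i - 1].hash),
  );
}
```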
Our cyber insurer is asking about AI governance controls. Does Mandate help?
Yes, and this question is coming up with increasing frequency as insurers update their AI-specific policy requirements.
What Mandate produces maps directly to what most cyber insurers are now asking for: evidence that AI tool usage is monitored and governed, a structured timestamped audit record of AI interactions, documentation of policy rules enforced during the coverage period, and data residency evidence. Most standard cyber insurance questionnaires now include an "AI acceptable use controls" section — Mandate addresses it.
We're not insurance brokers and we won't tell you what your specific insurer requires. What we can provide is the technical documentation your broker needs to evaluate coverage, and a clear statement of what Mandate does and doesn't govern. Whether that satisfies a specific insurer's requirements is a question for your broker.
One practical note: several insurers are beginning to require demonstrated AI governance controls — not just a written policy — as a condition of coverage or as a factor in premium calculation. Mandate's audit trail and enforcement record are exactly the kind of evidence that requirement calls for.
How do I make the business case for Mandate to my CFO or board?
The business case starts with three numbers from external sources your board can verify:
- $144M/year — what AI irregularities cost large Canadian enterprises in aggregate (IBM Institute for Business Value, May 2026). At the organization level, a single governance failure costs more than Mandate's annual fee.
- ~$6.32M — average cost of a Canadian data breach (IBM Cost of a Data Breach Report). A single PIPEDA investigation and remediation typically runs $50K–$500K+. Mandate's annual cost is a fraction of either.
- 18% — the share of Canadian organizations that currently have AI governance systems in place. Your board likely already treats that gap as a board-level risk item.
The internal framing that works: Mandate isn't an IT cost. It's the evidence that demonstrates the organization took reasonable steps to govern AI use. That evidence has direct value in three situations: a regulatory or audit inquiry, a client vendor questionnaire, and a board risk presentation. The question for your CFO isn't "what does Mandate cost?" — it's "what does one AI-related incident cost, and what does it take to demonstrate governance before it happens?"
We can provide a one-page summary of the IBM study data and Mandate's position for internal use. Ask when you contact us.
What happens to AI traffic if Mandate is unavailable? Does everything stop?
Mandate is configured fail-closed by default: if the Mandate Policy Engine is unreachable, AI requests are blocked rather than forwarded without governance. This is the conservative posture. It prevents ungoverned traffic from reaching AI providers during an outage.
Fail-open configuration is available for organizations that require uninterrupted AI access and accept the risk that requests proceed ungoverned if Mandate is down. It's a deliberate choice with a documented trade-off, not a default setting.
We discuss fail behaviour at pilot kickoff and document the chosen posture in the written success criteria before day one. There's no ambiguity about what happens during an outage.
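A rough sketch of what fail-closed means at the request level, with a hypothetical policy engine endpoint and response shape: if no decision comes back within the deadline, the request is blocked rather than forwarded ungoverned.

```typescript
// Hypothetical endpoint and response shape, for illustration only.
type Decision = { action: "allow" | "warn" | "redact" | "block"; reason?: string };

async function evaluate(prompt: string): Promise<Decision> {
  try {
    const res = await fetch("https://policy-engine.example.internal/evaluate", {
      method: "POST",
      headers: { "content-type": "application/json" },
      body: JSON.stringify({ prompt }),
      signal: AbortSignal.timeout(2000), // don't hang user traffic on an outage
    });
    if (!res.ok) throw new Error(`policy engine returned ${res.status}`);
    return (await res.json()) as Decision;
  } catch {
    // Fail closed: no decision means no forwarding.
    return { action: "block", reason: "policy engine unreachable" };
  }
}
```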
What does Mandate actually see in each AI request?
API gateway path: Mandate receives the full API request body, including the prompt text and the provider response. This is necessary to evaluate policy rules against request content.
Forward proxy path: Mandate decrypts HTTPS traffic (TLS inspection required) to evaluate request content before forwarding to the AI provider. Your organization installs Mandate's CA certificate once at the network level; no changes to employee browsers or applications.
What is written to the audit record by default: request metadata (user identity, tool, timestamp, correlation ID), the policy decision, the triggered rule identifier, and the redacted field identifiers (the pattern type that matched, not the matched content itself). Raw prompt bodies are not stored by default.
Full prompt body capture is opt-in per tenant, governed by a separately configured retention schedule, and off by default for all tenants. We discuss your data minimization requirements at kickoff.
How do audit events get into our SIEM?
Mandate exports structured audit events in JSON format. Each event includes: tenant ID, correlation ID, user, tool, timestamp, policy decision, triggered rule, action taken, redacted field identifiers, and the hash chain link. The format is importable by any SIEM or SOAR platform that accepts structured JSON.
Webhook delivery is available for targets that accept a signed JSON payload. Microsoft Sentinel and Splunk are the integration targets we cover in the pilot; other targets follow the same webhook format. We're honest about integration state: if a native connector for your specific platform isn't yet available, we'll tell you that rather than have you discover it at day 31.
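For teams wiring up webhook ingestion, here is a generic sketch of verifying a signed JSON payload with HMAC-SHA256. The header name and signing scheme are assumptions for illustration; confirm the actual scheme in the integration documentation for your deployment.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Assumes the signature arrives as a hex-encoded HMAC-SHA256 of the raw body.
function verifyWebhook(rawBody: string, signatureHeader: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest("hex");
  const given = Buffer.from(signatureHeader, "hex");
  const want = Buffer.from(expected, "hex");
  // Constant-time comparison; lengths must match before timingSafeEqual.
  return given.length === want.length && timingSafeEqual(given, want);
}
```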
Can Mandate run in our private network or on-premises?
Mandate is currently a hosted SaaS service running on Canadian-owned infrastructure. On-premises or private-network deployment isn't available today.
If your organization requires AI traffic to never leave your own infrastructure, a Mandate deployment isn't currently possible. We'll say that clearly in the first conversation rather than at day 31 of a pilot. On-premises deployment is a future product consideration; we won't commit to a timeline we can't hold.
The current architecture runs on Canadian-owned infrastructure under Canadian legal jurisdiction, which addresses the sovereignty concern most Canadian organizations face. If your requirement is specifically about on-premises deployment for reasons beyond sovereignty (network isolation, air-gapped environments), that's an important conversation to have at the start.
You're an early-stage company. What happens to our audit records if Mandate shuts down?
A fair concern, and one we take seriously enough to have a documented answer.
Your audit records are your data. At any point during your subscription, you can export your complete audit log in structured JSON format. That export is yours to retain, store, and submit to auditors or regulators without Mandate's involvement. The records are hash-chained — they can be verified as unaltered without requiring Mandate to vouch for them.
When your subscription ends, for any reason including Mandate shutting down, your records are available for export for 30 days. After that window, they're deleted from our systems. We don't retain your data beyond the off-boarding period.
The practical implication: if you run a 30-day pilot and export at the end, those records are yours regardless of what happens to Mandate. Most buyers who ask this question evaluate the quality of that audit trail during the pilot before making any longer-term commitment.
We're a young company. The honest answer is: run the pilot, evaluate the record quality and governance value, and make your own assessment of the vendor risk. We'd rather you ask this question before signing than discover it matters later.
How do we get started?
The first step is a short conversation to understand your environment, what connectors make sense for your setup, and what success would look like for a pilot. Learn how the 30-day pilot works, or get in touch directly.
Mention your industry type and the AI tools your team currently uses. It helps us make the first call productive rather than spending the first 15 minutes on context.