FAQ

Questions we hear most

Three questions every institution running multi-model AI needs to answer with confidence, along with the questions that come up once teams start evaluating a governance layer.

Core questions

What is our AI doing?

Contruil gives you full visibility into every AI interaction, routing decision, and model response across your organization. Every request is classified, logged, and surfaced in a supervisory control layer your team can review in real time — not after the fact.

Is it within policy?

Human-governed validation ensures nothing critical moves without authorization. Before any AI action classified as sensitive, regulated, or customer-facing reaches production, it passes through configurable governance gates with human checkpoints. You define the policy. We enforce it.

Can we prove it?

Yes. Contruil produces tamper-evident records that hold up under audit, regulatory review, or litigation. A cryptographically linked control log captures routing decisions, approvals, model selection, and execution context. Every entry is independently verifiable and suitable for customer attestation or regulatory submission.

About the platform

Does Contruil replace our existing AI providers?

No. Contruil integrates at the API layer across your existing providers — Claude, GPT, Gemini, Perplexity, DeepSeek, and others. You keep the models you use. Contruil adds the governance, routing, and audit layer that sits between your teams and those providers.

How long does deployment take?

The structured pilot runs 30–60 days and is scoped to your existing multi-model environment. Integration happens at the API boundary — no rearchitecting of your underlying systems required.

What does 'four-gate human governance' mean?

Critical AI actions pass through four sequential validation stages before reaching production: classification, policy check, human approval, and audit commit. Each gate is configurable to your internal controls framework. No stage can be bypassed for actions flagged as sensitive or regulated.
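The four sequential gates can be pictured as a simple pipeline in which each stage runs only after the previous one completes and appends to a running trail. This is an illustrative sketch only, not Contruil's implementation; the function names, the `Action` type, and the keyword-based classifier are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    payload: str
    tags: set = field(default_factory=set)
    trail: list = field(default_factory=list)

def classify(action: Action) -> Action:
    # Gate 1: classification (a trivial keyword rule, purely illustrative)
    if "customer" in action.payload:
        action.tags.add("customer-facing")
    action.trail.append("classified")
    return action

def policy_check(action: Action) -> Action:
    # Gate 2: verify the action's tags against the configured policy
    action.trail.append("policy-checked")
    return action

def human_approval(action: Action, approver: str) -> Action:
    # Gate 3: record an explicit human sign-off
    action.trail.append(f"approved-by:{approver}")
    return action

def audit_commit(action: Action, log: list) -> Action:
    # Gate 4: commit the full trail to the audit log
    log.append({"payload": action.payload, "trail": list(action.trail)})
    return action

def run_gates(action: Action, approver: str, log: list) -> Action:
    # Sequential composition: no gate can be skipped for a flagged action
    return audit_commit(human_approval(policy_check(classify(action)), approver), log)

audit_log: list = []
result = run_gates(Action("customer refund draft"), "j.doe", audit_log)
assert result.trail == ["classified", "policy-checked", "approved-by:j.doe"]
assert len(audit_log) == 1
```

The point of the composition in `run_gates` is structural: because each gate takes the previous gate's output, an action cannot reach the audit commit without first passing classification, policy check, and human approval.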

How is the audit log protected from tampering?

The control log uses cryptographic linking — each entry references a hash of the prior entry, so any modification to historical records is detectable. The log is suitable for litigation hold, regulatory inquiry, and independent auditor review.
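The hash-chaining idea described above can be sketched in a few lines: each appended entry stores a hash of its predecessor, so altering any historical record breaks every later link. This is a minimal illustration of the general technique, not Contruil's actual log format; the class and field names are assumptions for the example.

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    # Hash the entry's canonical JSON form so hashing is deterministic
    canonical = json.dumps(entry, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

class ControlLog:
    """Append-only log where each entry carries the hash of its predecessor."""

    GENESIS = "0" * 64  # placeholder prev-hash for the first entry

    def __init__(self) -> None:
        self.entries: list[dict] = []

    def append(self, record: dict) -> dict:
        prev = entry_hash(self.entries[-1]) if self.entries else self.GENESIS
        entry = {"record": record, "prev_hash": prev}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        # Recompute the chain; any modified historical entry breaks a link
        prev = self.GENESIS
        for entry in self.entries:
            if entry["prev_hash"] != prev:
                return False
            prev = entry_hash(entry)
        return True

log = ControlLog()
log.append({"action": "route", "model": "model-a", "approver": "j.doe"})
log.append({"action": "execute", "status": "ok"})
assert log.verify()

log.entries[0]["record"]["approver"] = "tampered"  # rewrite history
assert not log.verify()  # the chain detects the modification
```

Because verification only requires recomputing hashes, any party holding a copy of the log can check its integrity independently, which is what makes such records suitable for external audit.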

Is Contruil right for us?

What are the signs we need independent AI oversight?

Common indicators: multiple AI providers behind a single API with no unified governance layer; customer-facing AI outputs with no independent verification; increasing vendor risk questionnaires from clients referencing AI controls; no ability to reproduce the audit record for a given AI-driven decision on demand.

Who within our organization typically sponsors the pilot?

Pilots are typically initiated by Platform Engineering, Security & Infrastructure, Risk & Compliance, or Vendor Risk & Attestation owners. The business case resonates with CISOs, CTOs, and CROs who are fielding AI governance questions from regulators, clients, or their own boards.

What control questions should every institution be able to answer?

Who approved this AI-driven decision? Under which policy was it routed? Which model processed it? Can we reproduce the audit record on demand? Contruil makes those answers immediate — available to your team, your customers, and your regulators.

What happens if our AI strategy has outgrown spreadsheets and manual review?

That is the signal. When your multi-model environment has grown beyond what spreadsheet tracking and manual review can govern, you need a supervisory control plane. The pilot is designed to demonstrate measurable governance outcomes within 30–60 days — giving you and your leadership a clear picture of what independent AI oversight looks like in your environment.

Still have questions?

A Pilot Fit Call is the fastest way to determine whether Contruil is the right fit for your environment.