
About Us
Vivendur is an enterprise AI control plane that turns any LLM into a stateless thinking engine.
AI is moving faster than governance. Teams paste proprietary data into chat tools, internal copilots accumulate sensitive context, and “AI memory” quietly becomes a new, unmanaged data layer. Security and compliance teams can’t reliably answer basic questions: What left the organization? What was exposed to a model? Who approved it? Can we prove it and delete it?
We built Vivendur to make AI usable at scale without surrendering data control.
Our thesis: Models compute. You own memory.
Large language models are powerful, but they are not a safe place to store organizational context. Vivendur treats models as untrusted processors and keeps memory and sensitive context inside a customer-controlled boundary.
Instead of relying on a model to “forget,” Vivendur is designed to enforce forgetting by architecture:
- Only the minimum necessary context is disclosed to a model for a single request
- Memory stays in an encrypted vault that you control
- Access is compartmentalized by team/project/client to prevent cross-contamination
- Every AI call is governed by policy and backed by receipts and audit logs
How Vivendur works
Vivendur sits between your users/tools and any model provider, acting as the boundary where control, governance, and accountability live:
1) Encrypted Memory Vault (customer-controlled)
Your raw interaction history and structured memories live in an encrypted vault (local, VPC, or on-prem) under your rules for retention and deletion.
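To make the retention model concrete, here is a minimal sketch of an encrypted vault record with a customer-defined retention window, built on the open-source cryptography library. The names (VaultRecord, purge_expired) are illustrative, not Vivendur's API.

```python
# Illustrative sketch only: a toy encrypted-vault record with retention
# metadata. VaultRecord and purge_expired are hypothetical names.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

from cryptography.fernet import Fernet  # pip install cryptography

@dataclass
class VaultRecord:
    ciphertext: bytes      # encrypted memory payload
    created_at: datetime
    retention: timedelta   # customer-defined retention window

key = Fernet.generate_key()   # in practice, a customer-managed key
vault = Fernet(key)

record = VaultRecord(
    ciphertext=vault.encrypt(b"Q3 notes: acquisition target X"),
    created_at=datetime.now(timezone.utc),
    retention=timedelta(days=90),
)

def purge_expired(records: list[VaultRecord]) -> list[VaultRecord]:
    """Forgetting by rule, not by asking a model to forget."""
    now = datetime.now(timezone.utc)
    return [r for r in records if r.created_at + r.retention > now]
```

The point is that deletion is enforced by the store's own rules, not by a model's behavior.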
2) Compartmentalized “Vaults” (least-privilege by design)
Work is separated into strict compartments (by client, project, department, legal matter, deal room, or regulated dataset) so one vault’s context can’t leak into another.
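Here is what least-privilege compartments look like in miniature; the grant table and names are hypothetical, not our actual schema.

```python
# Illustrative sketch only: a toy least-privilege check between
# principals and vaults. The grant model here is an assumption.
GRANTS = {
    "alice@firm.com": {"client-acme", "matter-2291"},
    "bob@firm.com":   {"client-acme"},
}

def can_read(principal: str, vault_id: str) -> bool:
    """Context is retrievable only from vaults the principal is granted."""
    return vault_id in GRANTS.get(principal, set())

assert can_read("alice@firm.com", "matter-2291")
assert not can_read("bob@firm.com", "matter-2291")  # no cross-contamination
```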
3) Context Compiler (minimal disclosure)
For each request, Vivendur compiles a token-budgeted context pack: only the most relevant snippets, optionally redacted or transformed, with an explainable record of why each element was included.
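In spirit, the compiler behaves like this minimal sketch, where simple word overlap stands in for real relevance scoring and every name is illustrative:

```python
# Illustrative sketch only: a toy token-budgeted context compiler.
# Word overlap stands in for real retrieval and ranking.
from dataclasses import dataclass

@dataclass
class Snippet:
    text: str
    source: str

def compile_context(query: str, snippets: list[Snippet], token_budget: int):
    """Greedily pack the highest-scoring snippets under the budget,
    keeping an explainable record of why each one was included."""
    terms = set(query.lower().split())

    def score(s: Snippet) -> int:
        return len(terms & set(s.text.lower().split()))

    pack, rationale, used = [], [], 0
    for s in sorted(snippets, key=score, reverse=True):
        cost = len(s.text.split())  # crude token estimate
        if score(s) == 0 or used + cost > token_budget:
            continue
        pack.append(s.text)
        rationale.append(f"{s.source}: {score(s)} query terms, {cost} tokens")
        used += cost
    return pack, rationale  # only the pack is disclosed to the model
```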
4) Policy Engine (AI egress control)
Vivendur applies enterprise policies to every request: which models/providers are allowed, what data can leave a vault, when to require local-only inference, and what must be blocked or transformed.
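A toy version of that decision logic, with invented field names, sensitivity classes, and decision values:

```python
# Illustrative sketch only: a toy egress-policy check. The fields and
# decision values are assumptions about what such a policy might express.
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_providers: set[str]
    max_sensitivity: int     # highest data class allowed to leave a vault
    local_only_above: int    # above this class, require local inference

@dataclass
class Request:
    provider: str
    sensitivity: int         # classification of the compiled context

def evaluate(req: Request, policy: Policy) -> str:
    if req.sensitivity > policy.local_only_above:
        return "route-local"              # never leaves the boundary
    if req.provider not in policy.allowed_providers:
        return "block"
    if req.sensitivity > policy.max_sensitivity:
        return "redact-then-send"
    return "allow"

policy = Policy({"provider-a", "provider-b"}, max_sensitivity=2, local_only_above=3)
print(evaluate(Request("provider-a", sensitivity=4), policy))  # route-local
```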
5) Receipts & Audit Trails (compliance you can prove)
Every model call produces a receipt: who initiated it, which vault was accessed, what context was sent (or hashed), what was redacted, and which policies were applied, so you can answer “what left the organization, to whom, and why?”
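A minimal sketch of such a receipt is below; the hash chain is one plausible way to make the log tamper-evident, shown as an assumption rather than a description of our implementation.

```python
# Illustrative sketch only: a toy tamper-evident receipt. Chaining each
# receipt to the previous one makes deletions or edits detectable.
import hashlib
import json
from datetime import datetime, timezone

def make_receipt(prev_hash: str, initiator: str, vault_id: str,
                 context_sent: str, redactions: list[str],
                 policies: list[str]) -> dict:
    body = {
        "at": datetime.now(timezone.utc).isoformat(),
        "initiator": initiator,
        "vault": vault_id,
        "context_sha256": hashlib.sha256(context_sent.encode()).hexdigest(),
        "redactions": redactions,
        "policies": policies,
        "prev": prev_hash,
    }
    body["receipt_id"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body
```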
6) Provider-agnostic routing
Vivendur can route requests across providers or private deployments based on policy, cost, and risk, without locking you into a single model vendor.
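A toy router, with made-up provider names, costs, and risk tiers:

```python
# Illustrative sketch only: route to the cheapest provider that satisfies
# both the policy allowlist and the risk constraint.
PROVIDERS = [
    {"name": "local-model", "cost_per_1k": 0.0,   "risk_tier": 0},
    {"name": "provider-a",  "cost_per_1k": 0.003, "risk_tier": 1},
    {"name": "provider-b",  "cost_per_1k": 0.010, "risk_tier": 2},
]

def route(max_risk_tier: int, allowed: set[str]) -> str:
    candidates = [
        p for p in PROVIDERS
        if p["name"] in allowed and p["risk_tier"] <= max_risk_tier
    ]
    if not candidates:
        return "local-model"  # fall back inside the boundary
    return min(candidates, key=lambda p: p["cost_per_1k"])["name"]

print(route(max_risk_tier=1, allowed={"provider-a", "provider-b"}))  # provider-a
```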
Where we’re going
We’re enterprise-first, starting with industries where data leakage is an existential risk: legal, finance/private equity, consulting, biotech R&D, and security-forward tech. As the platform matures, Vivendur will expand into healthcare workflows, where regulated data demands even stricter containment and auditability.
Our mission
Make AI safe by default by separating intelligence from memory, so organizations can adopt AI with control, compartmentalization, and accountability built in.
If you’re rolling out AI and need real governance (not just filters), talk to us about a pilot.