FAQ

What is Vivendur?

Vivendur is an enterprise AI control plane designed to help organizations use modern language models while maintaining control over sensitive context, access boundaries, and auditability.

Who is Vivendur for?

Vivendur is built for security, compliance, and governance-conscious organizations, especially teams handling client work, regulated data, or high-value IP (e.g., legal, finance, consulting, biotech, and security-forward tech).

What problem does Vivendur solve?

AI adoption is outpacing governance. Employees and internal copilots can unintentionally expose proprietary or regulated information to external model providers, and organizations lose visibility into what was shared, under what policy, and by whom.

How is Vivendur different from an “LLM firewall”?

Most tools focus on filtering prompts and outputs. Vivendur is designed to address the deeper issue: controlling AI context and “memory” as privileged data so teams can scale AI usage without creating an unmanaged data layer.

Do you store our data?

Vivendur is designed so that sensitive memory and context remain under customer control and governance. Data handling is policy-driven and can be configured to match your risk posture and deployment requirements.

Does Vivendur train models on our data?

Vivendur does not build or train foundation models. Our role is to govern how your organization uses models. Model-provider behavior varies; Vivendur is designed to support policy-based routing and modes that reduce or avoid disclosure of sensitive information.

Can we control what information is shared with a model?

Yes. Vivendur is built to enforce least-privilege context disclosure, sharing only what is necessary for the task, subject to policy.
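Conceptually, least-privilege disclosure works like a policy filter over context fields: only fields a policy explicitly allows ever reach the model. The sketch below is illustrative only; the function and field names are hypothetical, not Vivendur's actual API.

```python
# Illustrative sketch of least-privilege context disclosure.
# The policy structure and field names are hypothetical, not Vivendur's API.

def filter_context(context: dict, allowed_fields: set) -> dict:
    """Disclose only the context fields a policy explicitly allows."""
    return {k: v for k, v in context.items() if k in allowed_fields}

# Example: a contract-review task is allowed the clause text, but not
# client identity or fee information.
context = {
    "clause_text": "Either party may terminate with 30 days notice.",
    "client_name": "Acme Corp",
    "fee_schedule": "$450/hr",
}
policy = {"clause_text"}

disclosed = filter_context(context, policy)
# Only "clause_text" reaches the model; the rest never leaves the boundary.
```

The key property: fields outside the policy are never serialized into the request at all, rather than being redacted after the fact.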

What do you mean by “stateless” AI?

“Stateless” means the model is treated as a compute engine, not a memory store. Vivendur is designed to separate intelligence (model inference) from memory (customer-controlled context), so organizations can govern and audit what’s used each time.
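The separation can be made concrete with a small sketch: the model call is a pure function of its per-request inputs, and the only stateful component is a customer-controlled store. All names here are hypothetical stand-ins, not Vivendur's actual interfaces.

```python
# Illustrative sketch of "stateless" usage: the model call receives only
# per-request context; memory lives in a customer-controlled store.
# All class and function names are hypothetical, not Vivendur's interfaces.

class MemoryStore:
    """Customer-controlled context store: the only stateful component."""
    def __init__(self):
        self._records = {}

    def put(self, key, value):
        self._records[key] = value

    def get(self, key):
        return self._records.get(key)

def stateless_inference(prompt, context):
    """Stand-in for a model call: a pure function of its inputs, keeping no state."""
    return f"answer({prompt!r}, context={context!r})"

store = MemoryStore()
store.put("project-alpha", "Prior decisions: use vendor X.")

# Context is fetched, governed, and supplied per request...
reply = stateless_inference("What vendor did we choose?", store.get("project-alpha"))
# ...and the model retains nothing between calls.
```

Because the model keeps no state, every piece of context it sees is something the organization chose to disclose on that specific call and can therefore govern and audit.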

How do you prevent cross-contamination between clients or projects?

Vivendur supports compartmentalization, so organizations can separate work by client, project, team, or dataset and enforce boundaries between them.
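One way to picture compartmentalization is a store keyed by compartment, where a key in another compartment is simply invisible. This is a conceptual sketch under assumed names, not Vivendur's implementation.

```python
# Illustrative sketch of compartment boundaries: context is keyed by
# compartment, and cross-compartment reads fail. Hypothetical design only.

class CompartmentError(Exception):
    pass

class CompartmentedStore:
    def __init__(self):
        self._data = {}  # {compartment: {key: value}}

    def put(self, compartment, key, value):
        self._data.setdefault(compartment, {})[key] = value

    def get(self, compartment, key):
        bucket = self._data.get(compartment, {})
        if key not in bucket:
            # Data in another compartment is invisible, not merely denied.
            raise CompartmentError(f"{key!r} not found in {compartment!r}")
        return bucket[key]

store = CompartmentedStore()
store.put("client-a", "notes", "confidential to client A")
```

A request scoped to `client-b` cannot reach `client-a`'s notes, which is what prevents one client's context from leaking into another's work.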

Can we choose which AI models we use?

Yes. Vivendur is designed to be provider-agnostic and to support policy-controlled model selection, so you can align provider choice to sensitivity, cost, performance, or internal requirements.
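Policy-controlled selection can be sketched as a routing table from a data-sensitivity label to a permitted provider. The labels, provider names, and fail-closed default below are examples only, not a Vivendur configuration.

```python
# Illustrative sketch of policy-controlled model routing by sensitivity.
# Labels and provider names are examples, not a real Vivendur policy.

ROUTING_POLICY = {
    "public":     "external-general-model",
    "internal":   "external-enterprise-model",
    "restricted": "self-hosted-model",
}

def select_model(sensitivity: str) -> str:
    """Map a data-sensitivity label to the provider a policy permits."""
    try:
        return ROUTING_POLICY[sensitivity]
    except KeyError:
        # Fail closed: an unknown label gets the most restrictive route.
        return ROUTING_POLICY["restricted"]
```

The fail-closed default is the important design choice: unlabeled or unrecognized data is routed as if it were the most sensitive, rather than the least.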

Can we restrict AI usage for high-sensitivity workflows?

Yes. Vivendur supports policy modes that can restrict or prevent external processing for high-sensitivity contexts, depending on your configuration and deployment.

Do you provide audit logs?

Yes. Vivendur is designed to generate an auditable record for AI interactions so security and compliance teams can answer: what was shared, which policies were applied, and who initiated the request.

What are “receipts”?

A receipt is an audit artifact for an AI request, intended to provide traceability and accountability for enterprise governance.
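To make the concept concrete, a receipt can be thought of as a structured record of who asked, what was disclosed, and which policies applied, with a content hash for tamper evidence. The schema below is hypothetical, shown only for illustration.

```python
# Illustrative sketch of a per-request "receipt" audit artifact.
# The schema and field names are hypothetical, not Vivendur's format.
import hashlib
import json
from datetime import datetime, timezone

def make_receipt(user, disclosed_fields, policies, model):
    record = {
        "user": user,
        "disclosed_fields": sorted(disclosed_fields),
        "policies_applied": sorted(policies),
        "model": model,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # A content hash makes the receipt tamper-evident.
    payload = json.dumps(record, sort_keys=True)
    record["digest"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

receipt = make_receipt(
    user="j.doe",
    disclosed_fields={"clause_text"},
    policies=["least-privilege", "no-pii"],
    model="external-enterprise-model",
)
```

A record like this is what lets a compliance team answer, after the fact, exactly what left the organization, under which policy, and at whose request.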

Can we delete data?

Vivendur is designed with retention and deletion controls so organizations can align AI usage with internal governance, legal holds, and data minimization policies.

How does Vivendur integrate with existing workflows?

Vivendur is designed to integrate where work happens, starting with common enterprise touchpoints (e.g., browser-based usage and team collaboration environments). Exact integration options depend on your environment and pilot scope.

How long does a pilot take?

Most pilots are structured to move quickly: connect Vivendur to a limited set of workflows, define governance policies, and validate visibility and control in real usage.

What does success look like in a pilot?

Typical success criteria include reduced shadow AI usage, improved audit visibility, clear policy enforcement, and adoption by target teams without slowing work.

Is Vivendur available for healthcare?

Healthcare support is on the roadmap once enterprise traction and compliance maturity are established. If you have a healthcare use case, we can discuss the right pilot boundaries and risk controls.

How do we evaluate Vivendur without exposing sensitive data?

We can run an evaluation with synthetic data, constrained workflows, and conservative policy settings to validate governance and auditability first.

How do we get started?

Request a demo or a pilot. We’ll align on your risk posture, target workflows, and what “good governance” means in your environment.

Join our waitlist and request a demo
