Responsible AI Posture
Vorantiq orchestrates AI model execution through the Routing Plane. We do not train, fine-tune, or operate foundation models ourselves. Our responsibility is the orchestration substrate — routing, cost governance, audit, observability, and safe execution.
What Vorantiq is and is not
Vorantiq is:
- A multi-tenant orchestration platform that routes requests to third-party LLM providers (Anthropic, OpenAI, optionally local).
- A runtime with audit trail, spend governance, and routing observability.
- A platform that records who did what, and when (Audit Plane).
- A platform whose customers retain full control over which models they route to.

Vorantiq is not:
- A model trainer or model owner.
- A safety classifier, content filter, or RLHF pipeline.
- A model that is itself responsible for content moderation.
- A platform that selects models contrary to the customer's intent.
Provider transparency
Every model call records the provider, model name, input/output token counts, selection strategy, decision factors, and outcome. Customers can see exactly which model served each call.
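The record described above can be sketched as a typed structure. This is an illustrative shape only; the field names and union values are assumptions, not Vorantiq's actual schema.

```typescript
// Hypothetical per-call routing record; names and values are illustrative.
interface CallRecord {
  provider: "anthropic" | "openai" | "local";
  model: string;              // exact model identifier that served the call
  inputTokens: number;
  outputTokens: number;
  strategy: string;           // selection strategy, e.g. "lowest-cost"
  decisionFactors: string[];  // why this model was chosen
  outcome: "ok" | "error" | "timeout";
}

// Finalize a record once the call's outcome is known.
function recordCall(
  partial: Omit<CallRecord, "outcome">,
  outcome: CallRecord["outcome"],
): CallRecord {
  return { ...partial, outcome };
}

const rec = recordCall(
  {
    provider: "anthropic",
    model: "example-model",
    inputTokens: 812,
    outputTokens: 240,
    strategy: "lowest-cost",
    decisionFactors: ["price", "latency"],
  },
  "ok",
);
```

Because the full decision context travels with each record, "which model served this call and why" is answerable per request rather than reconstructed after the fact.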
Safety controls — current
- Per-tenant authentication and isolation.
- Per-action audit trail (canonical hash chain — scaffolded; end-to-end production activation is gated by the Production-Safety Stop).
- Per-tenant spend tracking and observe-mode preflight.
- Provider circuit-breaker schema for failure isolation.
- Tool execution sandbox modes (registry exists; runtime wiring pending).
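The canonical hash chain mentioned above can be sketched as follows. The real Audit Plane schema is not public, so `AuditEntry` and its fields are assumptions; the point is the mechanism: each entry hashes over its own content plus the previous entry's hash, so any retroactive edit breaks verification.

```typescript
import { createHash } from "node:crypto";

// Illustrative audit entry; field names are assumptions.
interface AuditEntry {
  actor: string;
  action: string;
  timestamp: string;
  prevHash: string; // hash of the previous entry ("" for the first)
  hash: string;     // hash over this entry's canonical serialization
}

function entryHash(actor: string, action: string, timestamp: string, prevHash: string): string {
  // Canonical serialization: fixed field order, so hashes are reproducible.
  return createHash("sha256")
    .update(JSON.stringify({ actor, action, timestamp, prevHash }))
    .digest("hex");
}

function appendEntry(chain: AuditEntry[], actor: string, action: string, timestamp: string): AuditEntry[] {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "";
  const hash = entryHash(actor, action, timestamp, prevHash);
  return [...chain, { actor, action, timestamp, prevHash, hash }];
}

// Valid iff every entry links to its predecessor and its hash recomputes.
function verifyChain(chain: AuditEntry[]): boolean {
  return chain.every((e, i) => {
    const expectedPrev = i === 0 ? "" : chain[i - 1].hash;
    return (
      e.prevHash === expectedPrev &&
      e.hash === entryHash(e.actor, e.action, e.timestamp, e.prevHash)
    );
  });
}
```

Tampering with any stored entry invalidates its own hash, and replacing an entry wholesale breaks the `prevHash` link of every entry after it.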
Safety controls — explicitly NOT implemented today
- Prompt-injection input classifier — not started.
- Output classifier (post-response policy filter) — not started.
- Per-tenant safety policy (jurisdiction-aware content classes) — not started.
- Hallucination mitigation via verifier-reflector loop — architected (B.3).
- Memory-poisoning protection in the retrieval path — architected (B.1).
- Self-hosted / on-prem deployment — not started.
Autonomy posture
Today’s runtime reality: every model call is a discrete request scoped to one Vercel function invocation. There is no persistent autonomous loop in production. The orchestrator decomposes a goal via a single LLM call. The verify-reflect-act runtime loop is architected, not implemented. The tool execution loop is architected, not invoked from runtime. Customers should evaluate Vorantiq today as a multi-tenant request-scoped LLM orchestration substrate with strong audit + spend governance scaffolding — not as an autonomous-agent platform.
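The request-scoped shape described above can be sketched as a single handler: preflight, one model call, record, return. All names here are illustrative stubs, not Vorantiq's runtime API, and real provider calls would be asynchronous.

```typescript
// Sketch of a request-scoped invocation: one call, no persistent loop.
// preflightOk, callModel, and record are hypothetical hooks.
function handleRequest(
  prompt: string,
  preflightOk: () => boolean,        // observe-mode spend preflight (stub)
  callModel: (p: string) => string,  // single discrete provider call (stub)
  record: (outcome: string) => void, // audit/observability hook (stub)
): string {
  if (!preflightOk()) {              // spend is checked before any provider call
    record("blocked");
    throw new Error("spend preflight failed");
  }
  const response = callModel(prompt); // exactly one model call per invocation
  record("ok");
  return response;                    // the invocation ends here; nothing loops
}
```

Note what is absent: no retry-until-goal loop, no tool-invocation cycle, no state carried past the return. That absence is the posture being claimed.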
Data flow into model providers
CROSSES the Vorantiq trust boundary: user prompt, optional system prompt, optional tool schemas, token-count metadata.
Does NOT cross: tenant ID, user ID, session ID, request ID, authentication state, other tenants’ data, audit log content, spend state.
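The boundary split above can be made explicit in code: the provider payload is built by picking only the fields that are allowed to cross, so tenant and identity fields cannot leak by accident. Field names here are illustrative, not Vorantiq's actual request shape.

```typescript
// Hypothetical internal request: identity fields plus model inputs.
interface InternalRequest {
  tenantId: string;
  userId: string;
  sessionId: string;
  requestId: string;
  prompt: string;
  systemPrompt?: string;
  toolSchemas?: object[];
}

// Only these fields ever cross the trust boundary to a provider.
interface ProviderPayload {
  prompt: string;
  systemPrompt?: string;
  toolSchemas?: object[];
}

function toProviderPayload(req: InternalRequest): ProviderPayload {
  // Allowlist by destructuring: identifiers are never copied over.
  const { prompt, systemPrompt, toolSchemas } = req;
  return { prompt, systemPrompt, toolSchemas };
}
```

An allowlist is the safer default here: new internal fields stay internal unless someone deliberately adds them to the payload type, whereas a denylist leaks anything it forgets to name.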
Logging policy for model inputs/outputs
By default, Vorantiq does NOT persist full prompt or response content. Token counts, provider, model, latency, error class, request ID, tenant ID, and cost are persisted. Per-tenant opt-in storage with PII scrubbing ships with B.5.1.
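The default log record can be sketched as a projection that measures content but never stores it. The field names and the `countTokens` helper are assumptions for illustration.

```typescript
// Hypothetical metadata-only log record: no prompt or response content.
interface CallMetadataLog {
  requestId: string;
  tenantId: string;
  provider: string;
  model: string;
  inputTokens: number;
  outputTokens: number;
  latencyMs: number;
  errorClass: string | null;
  costUsd: number;
}

function toMetadataLog(
  call: {
    requestId: string;
    tenantId: string;
    provider: string;
    model: string;
    prompt: string;    // measured below, never persisted
    response: string;  // measured below, never persisted
    latencyMs: number;
    errorClass: string | null;
    costUsd: number;
  },
  countTokens: (s: string) => number, // tokenizer stub
): CallMetadataLog {
  return {
    requestId: call.requestId,
    tenantId: call.tenantId,
    provider: call.provider,
    model: call.model,
    inputTokens: countTokens(call.prompt),   // content reduced to a count
    outputTokens: countTokens(call.response),
    latencyMs: call.latencyMs,
    errorClass: call.errorClass,
    costUsd: call.costUsd,
  };
}
```

The return type does the enforcement: a `CallMetadataLog` has no field that could hold content, so the persistence layer cannot store what the type cannot carry.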