Capability

Route every AI request intelligently

Automatically control how requests flow across models, providers, and environments.

Outgate Routers let you define fallback logic, balance traffic, and dynamically choose the best model for every request.

One entry point. Full control.

Every request enters through a single gateway and is routed based on your logic:

  • Fallback when providers fail
  • Split traffic across models
  • Dynamically select the best model per request
  • No client changes required
Client: Claude Code, Open WebUI, Codex, API
Smart Router: AI-Powered Selection

  • Anthropic Claude Opus 4.6 (IQ 53 · Speed 56.9 · $5/$25)
  • Anthropic Claude Sonnet 4.6 (IQ 52 · Speed 63.2 · $3/$15)
  • OpenAI GPT 5.4 (IQ 57 · Speed 82.0 · $2.5/$15)
  • Ollama GLM 5 (IQ 50 · Speed 75.9 · $1/$3.2)
Using per-model scoring criteria, the router evaluates every request in real time and sends it to the best match: save cost, reserve the most capable models for requests that truly need them, and route to the most secure model when the prompt demands it.

Failover routing

Always have a backup

Define a priority order of providers. If one fails, the next takes over automatically.

  • Sequential fallback across providers
  • Fast retries with configurable timeouts
  • Model overrides per fallback

Try the best. Fall back when needed.
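
The failover behavior described above can be sketched in a few lines. Everything here (the upstream dict shape, the `call` callable, the field names) is an illustrative stand-in, not Outgate's actual API:

```python
def failover_route(request, upstreams, max_retries=1):
    """Try upstreams in priority order; when one fails, the next takes over.

    Each upstream is a dict with a callable, an optional per-fallback model
    override, and a configurable timeout. All field names are illustrative.
    """
    errors = []
    for upstream in upstreams:
        # Apply this fallback's model override, if one is configured.
        model = upstream.get("model_override", request.get("model"))
        for attempt in range(max_retries + 1):
            try:
                return upstream["call"]({**request, "model": model},
                                        timeout=upstream.get("timeout", 10))
            except Exception as exc:  # provider error or timeout
                errors.append((upstream["name"], attempt, exc))
        # Retries exhausted for this upstream; fall through to the next.
    raise RuntimeError(f"all upstreams failed: {errors}")
```

A failed primary is retried quickly, then traffic falls through to the backup, which may point at a different model entirely.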

Weighted routing

Control traffic distribution

Split requests across providers or models using weights.

  • A/B test models in production
  • Gradually roll out new providers
  • Balance cost vs performance

Decide where traffic goes.
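
Weight-based selection reduces to drawing an upstream in proportion to its weight. This sketch uses a plain dict of names to weights, which is an assumption, not Outgate's real configuration schema:

```python
import random

def weighted_route(weights, rng=random):
    """Pick an upstream at random, proportional to its weight.

    `weights` maps an upstream name to a number, e.g. {"stable": 90,
    "canary": 10} for a gradual rollout. Names here are illustrative.
    """
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]
```

A 90/10 split like the one above sends roughly one request in ten to the canary model, which is the usual shape of an A/B test or gradual provider rollout.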

Smart routing (Pro)

Let the system decide

Outgate evaluates each request and selects the best model automatically.

  • Choose based on quality, speed, and cost
  • Adapt per request, not per config
  • Respect your preferences and constraints

Send each request to the best possible model.
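
One way to picture the quality/speed/cost trade-off is a weighted score per model. The metric names mirror the IQ / Speed / $ figures in the diagram above; the linear weighting scheme itself is an illustrative assumption, not Outgate's actual scoring function:

```python
def score_model(model, prefs):
    """Higher is better: reward quality and speed, penalize output price."""
    return (prefs["quality"] * model["iq"]
            + prefs["speed"] * model["speed"]
            - prefs["cost"] * model["price_out"])

def smart_route(models, prefs):
    """Select the best-scoring model for this request's preferences."""
    return max(models, key=lambda m: score_model(m, prefs))

# Candidate pool using the figures from the diagram above.
MODELS = [
    {"name": "claude-opus-4.6",   "iq": 53, "speed": 56.9, "price_out": 25.0},
    {"name": "claude-sonnet-4.6", "iq": 52, "speed": 63.2, "price_out": 15.0},
    {"name": "gpt-5.4",           "iq": 57, "speed": 82.0, "price_out": 15.0},
    {"name": "glm-5",             "iq": 50, "speed": 75.9, "price_out": 3.2},
]
```

Shifting the preference weights changes the winner: a quality-first profile and a cost-sensitive profile can land on different models for otherwise identical requests.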

One decision, per request

For every request:

  1. Evaluate available models
  2. Score them based on quality, speed, and cost
  3. Select the best match
  4. Forward the request

Optional

  • Block unsafe requests
  • Remove unnecessary tools
  • Apply guardrails inline
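
The optional steps above can be sketched as a pre-routing hook. Both `blockers` and `allowed_tools` are hypothetical hooks invented for this sketch, not a real Outgate interface:

```python
def apply_guardrails(request, blockers=(), allowed_tools=None):
    """Optional inline steps: block unsafe requests and prune tools."""
    # Block unsafe requests before any model sees them.
    for is_unsafe in blockers:
        if is_unsafe(request.get("prompt", "")):
            raise PermissionError("request blocked by guardrail")
    # Remove unnecessary tools so the model only sees what it needs.
    if allowed_tools is not None:
        tools = [t for t in request.get("tools", []) if t in allowed_tools]
        request = {**request, "tools": tools}
    return request
```

Running inline means a blocked request never reaches a provider, and tool pruning happens before model selection rather than in the client.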

One router, many providers

Route across:

  • OpenAI
  • Anthropic
  • Self-hosted models (Ollama, vLLM)
  • Custom APIs

All behind a single endpoint.

Client: Claude Code, Open WebUI, Codex, API
Smart Router: Multi-Layer Routing

  • Failover A (failover)
      • Bedrock Claude Opus 4.6 (EU Stockholm)
      • Bedrock Claude Opus 4.6 (EU Frankfurt)
  • Failover B (failover)
      • Bedrock gpt-oss-120b (EU Frankfurt)
      • Bedrock gpt-oss-120b (EU Stockholm)
Build composite routing by chaining routers together. Each layer operates independently — smart selection, failover, or weighted — in any combination.
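
Chaining works because every layer exposes the same shape: a function from request to response. This sketch composes a weighted layer over two failover groups, echoing the diagram above; the factory names and region labels are illustrative assumptions:

```python
import random

def make_failover(upstreams):
    """A routing layer: try each child in order until one succeeds."""
    def route(request):
        last_exc = None
        for upstream in upstreams:
            try:
                return upstream(request)
            except Exception as exc:
                last_exc = exc
        raise RuntimeError("all upstreams failed") from last_exc
    return route

def make_weighted(choices, rng=random):
    """A routing layer: forward to one child, proportional to its weight."""
    routers = [r for r, _ in choices]
    weights = [w for _, w in choices]
    def route(request):
        return rng.choices(routers, weights=weights, k=1)[0](request)
    return route
```

Because each layer is just a callable, any combination composes: a weighted root can sit on top of failover groups that each span two regions, as in the multi-layer diagram.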

Route smarter. Ship faster.

Control how every request flows across your AI stack.

Frequently asked questions

What happens when a provider fails?
Requests automatically move to the next configured upstream in failover mode.

Can I combine strategies in a single router?
Today, each router uses a single strategy. Combining strategies is a planned capability.

Does routing add latency?
No meaningful overhead. Routing happens inline at the gateway.

Can a fallback use a different model?
Yes. Each upstream can define a specific model override.

How does smart routing choose a model?
It evaluates candidates using quality, speed, and cost signals, along with your configured preferences.