Get Started

Outgate sits between your app or agent and the model providers you use.

Instead of calling OpenAI or Anthropic directly, you send requests through the gateway, where you can apply guardrails, control routing, and observe everything in one place.

Getting started comes down to three steps: connect a provider, create an endpoint to access it, and send your first request.

Connect a provider

Tell Outgate where your model lives and how requests should be authenticated.

A provider is a connection to an upstream API, such as OpenAI, Anthropic, or any compatible service. When you create one, you define the upstream URL and how requests should be authenticated.

Most teams start with auth forwarding. In this mode, your client continues to use its own credentials, and Outgate forwards them to the upstream provider. This is the fastest way to get started because it does not require migrating or storing keys centrally.
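From the client's side, auth forwarding can be sketched like this. The gateway host below is an assumed placeholder, not Outgate's actual address; substitute the URL from your own deployment:

```shell
# With auth forwarding, the client keeps its own provider key; only the host
# changes. The gateway address below is an assumed placeholder.
PROVIDER_KEY="sk-example"   # placeholder: the client's existing provider key

DIRECT_URL="https://api.openai.com/v1/chat/completions"
GATEWAY_URL="https://gateway.outgate.ai/v1/chat/completions"   # assumed host

# Same request shape in both cases; the gateway forwards the
# Authorization header upstream instead of storing it.
for url in "$DIRECT_URL" "$GATEWAY_URL"; do
  echo "curl $url -H 'Authorization: Bearer $PROVIDER_KEY'"
done
```

The point of the sketch: the request body and credentials stay exactly as they were, so switching to the gateway is a one-line base-URL change in most clients.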

If you prefer, you can store credentials in Outgate and let the gateway handle authentication on your behalf.

While setting up the provider, you can also attach a guardrail policy. The default policy works out of the box and detects sensitive values in requests before they reach the model. You can later replace it with a custom policy if you need stricter rules.

Checkpoint

At this point, you have connected Outgate to a model, but you do not yet have a concrete endpoint for clients to use.

Create a shareable endpoint

Create a share to get a gateway URL and key for your client.

To actually use your provider, create a share: a named access point that produces a gateway URL and a key your client can use to send traffic through the gateway.

There are two ways to authenticate requests through a share. The default is to include the key directly in the URL, which works well for quick setups and local development. Alternatively, you can pass credentials through environment variables, which is often more natural when working with tools that expect API keys to be present before startup.
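The two styles can be sketched as follows. The gateway host and URL layout here are assumptions for illustration; use the URL and key your share actually gives you:

```shell
# Two ways to authenticate through a share. The host and path layout below
# are illustrative assumptions -- not Outgate's documented URL format.
OUTGATE_BASE="https://gateway.outgate.ai"   # assumed host
SHARE_KEY="sk-share-example"                # placeholder, not a real key

# 1. Key in the URL: handy for quick setups and local development.
echo "${OUTGATE_BASE}/v1/${SHARE_KEY}/chat/completions"

# 2. Environment variables: export the key, then point your tool at the
#    base URL before it starts.
export OPENAI_API_KEY="$SHARE_KEY"
export OPENAI_BASE_URL="${OUTGATE_BASE}/v1"
echo "$OPENAI_BASE_URL"
```

The environment-variable style tends to fit tools that read `OPENAI_API_KEY`-style settings at startup; the key-in-URL style avoids any environment setup at all.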

Once the share is created, you have everything you need to route traffic through Outgate.

Send your first request

Use the CLI to validate your setup quickly.

You can send requests manually, but the fastest way to validate your setup is the CLI.

# Install the CLI
curl -fsSL https://outgate.ai/download/install.sh | sh

# Authenticate, then verify the gateway is reachable
og login
og status

The login flow opens a browser by default. If you are working over SSH or inside a container, you can use a non-browser flow or provide a token file instead.

Once authenticated, the CLI can wrap existing tools and route them through your gateway without requiring code changes.

Run your agent through the gateway

Launch Claude Code or Codex directly through Outgate.

og claude

# or
og codex

If you have configured multiple providers, you can choose one explicitly. Otherwise, the CLI tries to pick a reasonable default based on what is available.

Under the hood, the CLI resolves configuration from several sources. Flags take precedence, followed by project configuration in .og.yaml, then environment variables, then global defaults. This lets you set things once per project and avoid repeating them in every command.
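The resolution order can be sketched in plain shell. The variable names here, including `OG_PROVIDER`, are illustrative assumptions rather than documented names:

```shell
# Precedence sketch: flag > project config (.og.yaml) > environment > global default.
flag=""                     # e.g. a --provider flag; empty means "not passed"
project_cfg="openai"        # value read from .og.yaml, if the file exists
env_val="${OG_PROVIDER:-}"  # assumed environment variable name
global_default="fallback"

# First non-empty value wins, walking down the precedence chain.
provider="${flag:-${project_cfg:-${env_val:-$global_default}}}"
echo "$provider"   # -> openai (the project config, since no flag was passed)
```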

For example, placing a simple .og.yaml file in your repository lets you pin a provider or share so your team uses the same setup automatically.
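A minimal .og.yaml might look like this; the key names are assumptions for illustration, so check the supported schema in your Outgate docs:

```yaml
# Hypothetical .og.yaml -- key names are illustrative, not a documented schema.
provider: openai
share: team-dev
```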

Optional: test guardrails before production

Run detection without sending real requests to a model.

If you attached a guardrail policy, you can test it before sending live traffic. The CLI provides a scan mode that runs detection only.

Scan mode processes your files, identifies sensitive values, and stores them as fingerprints so they can be recognized later without exposing the original data.
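To make the fingerprint idea concrete, here is a conceptual sketch. This is not Outgate's actual algorithm, only an illustration of one-way fingerprinting with a standard hash:

```shell
# Conceptual sketch only: a sensitive value is reduced to a one-way
# fingerprint, so it can be re-recognized later without storing the original.
SECRET="AKIAIOSFODNN7EXAMPLE"   # AWS's published example key, not a real secret

fingerprint() { printf '%s' "$1" | sha256sum | cut -d' ' -f1; }

fp=$(fingerprint "$SECRET")
echo "$fp"   # 64 hex characters; the original value is not recoverable from this
```

The same input always produces the same fingerprint, which is what lets a later request be matched against values found during a scan.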

This is useful when you want to validate policies before enabling them on production traffic.

What you have now

A complete request path through Outgate.

By this point, you have set up a complete path through Outgate: your tool or app sends a request -> the gateway applies guardrails -> the request reaches your provider -> the response comes back through the same path.

You did not have to modify your client, and you now have a single place to control and observe how requests are handled.

Where to go next

Layer in routing, stricter guardrails, and observability when you are ready.

From here, most teams start adding more control: routing between providers, tightening guardrail policies, or monitoring traffic and latency.

The core flow stays the same: everything goes through the gateway.