Capability

Keep PII and credentials out of model calls

Outgate Guardrails sit in the gateway path, scan request bodies for personal information and credentials, and replace matched values with stable placeholders before traffic reaches the upstream model.

Responses are restored on the way back to the client, while logs and upstream providers stay free of raw secrets and personal data.

PII message
Hey John, please send the invoice to john.doe@acme.com
and CC sarah.smith@corp.net. My phone is +1-555-0142.

Credentials file
DB_USER=admin
DB_PASS=s3cret
DB_HOST=db.internal
API_KEY=sk-proj-abc123def456
AWS_SECRET=AKIAIOSFODNN7EXAMPLE

Sent to LLM (anonymized)
Hey pii_3e8d1c [Name], please send the invoice to pii_8f3a2b [Email]
and CC pii_c4d1e9 [Email]. My phone is pii_7b2f4a [Phone].

DB_USER=cred_u2x4k1 [Username]
DB_PASS=cred_a1b2c3 [Password]
DB_HOST=cred_h3j5m7 [Server]
API_KEY=cred_d4e5f6 [API Key]
AWS_SECRET=cred_g7h8i9 [AWS Key]

LLM Response (restored before it reaches the client)
Sure, I'll send the invoice to john.doe@acme.com and CC sarah.smith@corp.net.
The database connection uses credentials s3cret at the specified
endpoint with key sk-proj-abc123def456.

How it works

01

Detect

We scan requests for emails, API keys, passwords, and other sensitive values.

02

Replace

Sensitive values are replaced with stable placeholders before the model sees them.

03

Restore

Responses are mapped back to the original values before returning to the client.
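The three steps above can be sketched end to end. This is a minimal, illustrative round trip: it detects emails only with a regex (actual detection covers many more categories and is model-based), and the hash-derived placeholder scheme is an assumption modeled on the demo above.

```python
import hashlib
import re

# Illustrative detector: emails only. Real detection covers many categories.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str) -> tuple[str, dict[str, str]]:
    """Steps 1+2: detect sensitive values and swap in stable placeholders."""
    mapping: dict[str, str] = {}

    def repl(match: re.Match) -> str:
        value = match.group()
        # Hash-derived tag so the same value always gets the same
        # placeholder (the derivation is an assumption for this sketch).
        tag = "pii_" + hashlib.sha256(value.encode()).hexdigest()[:6]
        mapping[tag] = value
        return tag

    return EMAIL.sub(repl, text), mapping

def restore(text: str, mapping: dict[str, str]) -> str:
    """Step 3: map placeholders in the model's response back to originals."""
    for tag, value in mapping.items():
        text = text.replace(tag, value)
    return text
```

anonymize() runs on the request before it leaves the gateway; restore() runs on the model's reply, so the client only ever sees real values.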

Built-in protection for real-world AI traffic

PII detection

Emails, phone numbers, names, addresses, and identifiers.

Credential protection

API keys, tokens, passwords, and environment secrets.

Transparent anonymization

Models never see raw sensitive data. Your users still do.

Where this matters

Agent & CLI traffic

Protect repo paths, env values, and prompts

Logs & observability

Keep sensitive data out of logs by default

Existing apps

No rewrites—add protection at the gateway

Private deployments

Run detection in your own environment

Protect AI traffic without changing every client

Attach guardrails to a gateway provider and start anonymizing requests immediately.

Frequently asked questions

Does the model ever see my users' raw data?
No. Sensitive values are replaced before the request is sent upstream. The model only sees placeholders, and original values are restored before the response reaches your application.

Do I need to change how my application handles responses?
No. Responses are returned in the same format your client expects, with original values restored. From your app's perspective, nothing changes.

What kinds of sensitive data are detected?
Out of the box, we detect common personal data such as emails, phone numbers, names, and addresses, plus credentials such as API keys, tokens, passwords, and secrets. Detection is configurable based on your policies.
How does detection work?
We use a state-of-the-art transformer model for context-aware detection, paired with our Detection Vault of cryptographic fingerprints of values you have already flagged. The two complement each other: the transformer catches new and contextual matches with high recall, while the vault gives instant, deterministic hits for anything seen before. Policies can be tuned per category to fit your use case.
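The vault's internals aren't specified here, but deterministic matching without storing raw values is commonly done with keyed fingerprints. A minimal sketch of that idea, in which the normalization rule, the per-tenant key, and the function names are all assumptions:

```python
import hashlib
import hmac

# Assumption: one secret key per tenant; the vault stores only fingerprints,
# never the flagged values themselves.
VAULT_KEY = b"per-tenant-secret"

def fingerprint(value: str) -> str:
    # Light normalization so trivial variants (case, padding) still hit.
    normalized = value.strip().lower()
    return hmac.new(VAULT_KEY, normalized.encode(), hashlib.sha256).hexdigest()

# Values a user already flagged, kept only as fingerprints.
vault = {fingerprint("john.doe@acme.com"), fingerprint("sk-proj-abc123def456")}

def seen_before(candidate: str) -> bool:
    """Deterministic set lookup for anything flagged before."""
    return fingerprint(candidate) in vault
```

A lookup like this is instant and exact, which is why it pairs well with a recall-oriented model: the model proposes, the vault confirms known values immediately.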
What is the latency impact?
Minimal. Detection and transformation happen inline in the gateway path and are optimized for real-time traffic, including streaming responses.
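Streaming makes restoration slightly tricky: a placeholder can be split across two chunks. One way to handle this, sketched below under the assumption that placeholders follow the pii_/cred_ plus 6-hex shape from the demo, is to hold back any chunk tail that might still be a partial placeholder:

```python
import re

# Placeholder shape assumed from the demo: pii_/cred_ plus 6 hex chars.
PLACEHOLDER = re.compile(r"(?:pii|cred)_[0-9a-f]{6}")

def _could_start_placeholder(tail: str) -> bool:
    """True if `tail` might be the start of a placeholder still arriving."""
    for prefix in ("pii_", "cred_"):
        head, rest = tail[: len(prefix)], tail[len(prefix):]
        if not prefix.startswith(head):
            continue
        if head != prefix:
            return rest == ""  # still inside the literal prefix
        if len(rest) <= 6 and all(c in "0123456789abcdef" for c in rest):
            return True
    return False

def restore_stream(chunks, mapping):
    """Yield restored text chunk by chunk without splitting a placeholder."""
    buf = ""

    def sub(s: str) -> str:
        return PLACEHOLDER.sub(lambda m: mapping.get(m.group(), m.group()), s)

    for chunk in chunks:
        buf += chunk
        # Hold back the longest tail that could still become a placeholder.
        hold = 0
        for k in range(min(len(buf), 10), 0, -1):
            if _could_start_placeholder(buf[-k:]):
                hold = k
                break
        safe, buf = buf[: len(buf) - hold], buf[len(buf) - hold:]
        if safe:
            yield sub(safe)
    if buf:
        yield sub(buf)
```

Only the held-back tail is delayed, so restored text still flows to the client chunk by chunk.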
Can Guardrails run in my own environment?
Yes. Guardrails can run within your deployment boundary or region, so sensitive data is processed inside your infrastructure.

Do sensitive values end up in logs?
No raw sensitive values are stored in logs by default. Anonymized versions can be used for observability without exposing secrets.

Do I need to rewrite my existing apps?
No. Guardrails are applied at the gateway layer, so you can protect existing chat apps, APIs, and agents without rewriting them.