From zero to your first request through a Guardway gateway in roughly 10 minutes.

Before you start

  • A machine or VM with Docker installed (see Requirements)
  • An API key from at least one LLM provider (OpenAI, Anthropic, Groq, etc.)
  • Network egress on port 443 to api.guardway.ai and your provider’s API
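A quick way to confirm the egress requirement before deploying — this sketch assumes `curl` is installed and uses OpenAI as the example provider; substitute your own provider's API host:

```shell
# Any HTTP status code printed means the TLS connection on port 443
# succeeded; a curl error (non-zero exit) means egress is blocked.
curl -s -o /dev/null -w "api.guardway.ai: %{http_code}\n" https://api.guardway.ai
curl -s -o /dev/null -w "api.openai.com: %{http_code}\n" https://api.openai.com
```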

The six steps

1. Sign up for Guardway

Create your account at app.guardway.ai. You’ll receive an invite email, set a password, and land on the dashboard.
Full walkthrough →
2. Deploy a gateway

In the dashboard, open Gateways and click Register New Gateway to generate a one-time registration token. Paste it into the docker run command the dialog gives you and start the container on any host in your network.
Full walkthrough →
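The exact command comes from the dialog, but its general shape looks roughly like the sketch below. The image name, port, and environment variable here are placeholders for illustration only — copy the real command, with your real one-time token, from the dialog:

```shell
# Hypothetical shape of the registration command — use the one the
# Register New Gateway dialog gives you. All values are placeholders.
docker run -d \
  --name guardway-gateway \
  --restart unless-stopped \
  -p 8080:8080 \
  -e GUARDWAY_REGISTRATION_TOKEN="<one-time-token>" \
  guardway/gateway:latest
```

Running the container detached with `--restart unless-stopped` keeps the gateway up across host reboots.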
3. Activate the gateway

Within a minute the gateway sends a heartbeat and appears as Online in the Gateways list. Click Connect to point the dashboard at its local URL.
Full walkthrough →
4. Connect a provider

Open Configuration → Providers, pick a preset (OpenAI, Anthropic, Groq, etc.), paste your API key, and attach it to your gateway.
Full walkthrough →
5. Sync models

Open Configuration → Models and click Sync Models on the provider. The gateway discovers the models the provider offers and enables them for inference.
Full walkthrough →
6. Test in the Playground, then check logs

Open the Playground, pick a model, and send a message. Then open Monitoring → Logs to see the request and Monitoring → Traces to see how it flowed through guardrails, routing, and the provider.
Playground → · Logs & Traces →
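If you prefer to test from the command line, many LLM gateways expose an OpenAI-compatible endpoint. Assuming Guardway does too (check your gateway's docs), and assuming a hypothetical local URL, route, and scoped key, a request might look like:

```shell
# Assumptions: the gateway listens on http://localhost:8080, exposes an
# OpenAI-compatible /v1/chat/completions route, and accepts a Guardway
# API key as a Bearer token. Adjust all three to match your deployment.
curl -s http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer $GUARDWAY_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from Guardway!"}]
  }'
```

A request sent this way should appear under Monitoring → Logs just like a Playground message, with its guardrail and routing decisions visible in Monitoring → Traces.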

What’s next

Configure guardrails

Turn on PII detection, prompt-injection protection, and content moderation.

Invite your team

Add teammates, set roles, and assign teams to budgets.

Issue API keys

Create scoped API keys for your applications.

Watch usage and spend

Track tokens, cost, and latency per model, team, and key.