Is the gateway self-hosted or cloud?
Self-hosted. You run the gateway container inside your own network (VPC, on-prem, or even a laptop). Your applications call your gateway, not Guardway. The dashboard at app.guardway.ai is SaaS — it configures and observes gateways but never proxies inference traffic.
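Because the gateway speaks the OpenAI API (see the providers answer below), an application usually only needs to swap its base URL. A minimal sketch, assuming the gateway listens on port 8080 and exposes the standard OpenAI-style `/v1/chat/completions` path — the hostname, model name, and key are placeholders, not real values:

```python
import json
import urllib.request

GATEWAY_URL = "http://gateway.internal:8080/v1/chat/completions"  # hypothetical host
API_KEY = "gw-your-gateway-key"  # issued from Configuration -> API keys

def build_chat_request(prompt: str, model: str = "gpt-4o") -> urllib.request.Request:
    """Build an OpenAI-style chat request aimed at the self-hosted gateway."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        GATEWAY_URL,
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("Summarize our Q3 report.")
print(req.full_url)  # traffic goes to your gateway, not to Guardway's cloud
# urllib.request.urlopen(req) would actually send it — requires a running gateway
```

The same applies to any OpenAI-compatible SDK: point its base URL at your gateway and the prompt never leaves your network except to reach the provider you configured.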
Do my prompts or completions ever leave my network?
No. The gateway forwards requests directly to the provider you configured (OpenAI, Anthropic, etc.). Prompts, completions, and audit logs stay on the gateway — only aggregate telemetry (token counts, latency, cost) flows to the cloud dashboard. See Privacy and Security.
Which LLM providers does Guardway support?
20+ presets out of the box — OpenAI, Anthropic, Google Gemini, Groq, Mistral, DeepSeek, xAI, Perplexity, Together, Fireworks, OpenRouter, Cohere, AWS Bedrock, Azure OpenAI, HuggingFace TGI, NVIDIA NIM, Ollama, LM Studio, vLLM, plus specialty providers (Voyage, AssemblyAI, ElevenLabs, Fal.ai). Anything else that speaks the OpenAI API works via the Custom preset. See Connect a provider.
What's the difference between the Gateway API keys and the Platform API keys?
Today they are the same keys, issued from Configuration → API keys and valid on every gateway in your organization. Per-gateway scoping is on the roadmap. See API keys.
How do guardrails work?
Guardrails run on the gateway before the request reaches the provider (and can also inspect the response). Built-in checks cover PII, hate speech, prompt injection, keyword lists, and IP allow/block-lists. Detection uses small language models (SLMs) — see Guardrails.
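The "check before forward" flow can be pictured as a screen the request must pass before the gateway proxies it upstream. A conceptual sketch only — the toy regex patterns below stand in for Guardway's actual SLM-based detectors and illustrate the control flow, not the detection method:

```python
import re

# Toy PII patterns. Guardway's real checks use small language models,
# not regexes; these only illustrate where a guardrail sits in the path.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any PII checks the prompt trips."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(prompt)]

def forward_or_block(prompt: str) -> str:
    hits = screen_prompt(prompt)
    if hits:
        return f"blocked: {', '.join(hits)}"  # request never reaches the provider
    return "forwarded"  # gateway proxies the request upstream

print(forward_or_block("My SSN is 123-45-6789"))          # -> blocked: ssn
print(forward_or_block("What is the capital of France?"))  # -> forwarded
```

Response-side checks work the same way in reverse: the completion is screened before it is returned to the caller.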
Does Guardway support MCP?
Yes. Register MCP servers with the gateway and assign per-key access rules from Configuration → API keys → MCP tab. Tool calls are logged separately — see MCP and Logs → MCP logs.
Can I run more than one gateway?
Yes. You can register as many gateways as you want — production, staging, per-region, etc. Providers and models are attached per-gateway, so each gateway holds its own credentials and model inventory. Usage dashboards roll up across all of them. See Gateways.
Does the gateway need internet access?
It needs:
- Outbound HTTPS to api.guardway.ai (control plane).
- Outbound HTTPS to the LLM providers you use.
- Inbound access on port 8080 from whoever calls it.
What happens when a provider is down?
Routing rules support fallback strategies — next-priority, lowest-cost, lowest-latency, fail. You can chain fallbacks so a single provider outage doesn't break your traffic. See Routing.
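A next-priority chain behaves like a try-in-order loop: attempt providers by priority and fall through on failure. A minimal sketch of the idea — the provider functions and error type are illustrative, not Guardway's actual routing engine:

```python
class ProviderError(Exception):
    """Raised when an upstream provider call fails."""

def call_openai(prompt: str) -> str:
    raise ProviderError("simulated outage")  # pretend the primary is down

def call_anthropic(prompt: str) -> str:
    return f"anthropic: {prompt}"

# Priority-ordered chain, mirroring a next-priority fallback rule.
FALLBACK_CHAIN = [call_openai, call_anthropic]

def route(prompt: str) -> str:
    errors = []
    for provider in FALLBACK_CHAIN:
        try:
            return provider(prompt)  # first healthy provider wins
        except ProviderError as exc:
            errors.append(exc)       # fall through to the next priority
    raise ProviderError(f"all providers failed: {errors}")

print(route("hello"))  # -> anthropic: hello
```

The lowest-cost and lowest-latency strategies differ only in how the chain is ordered; fail simply stops at the first error.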
How do budgets and quotas work?
Is there an API for managing gateways programmatically?
The public management API is on the roadmap. The gateway itself already exposes a stable OpenAI-compatible inference API today. See API reference.
What browsers does the dashboard support?
Latest Chrome, Firefox, Safari, and Edge. Internet Explorer is not supported. See Limitations.
How do I report a bug or security issue?
Bugs: email support@guardway.ai with repro steps. Security: email security@guardway.ai privately — do not open a public issue. See Support.