Quick Start
Get up and running with the LLM Gateway in 4 steps.
1. Create an API Key
Go to the LLM Gateway tab and click Create New Key. Select your provider (OpenAI, Anthropic, Azure, Bedrock, Vertex AI, or any OpenAI-compatible endpoint), choose which models to expose, and generate your key.
Your provider API key is stored securely; developers only see the QuilrAI proxy key.
2. Swap the Base URL
Replace your provider's base URL with the QuilrAI gateway URL and use your QuilrAI key. Everything else (SDK, parameters, response format) stays exactly the same.
```python
from openai import OpenAI

# Point the client to QuilrAI's gateway
client = OpenAI(
    base_url='https://guardrails.quilr.ai/openai_compatible/',
    api_key='sk-quilr-xxx',
)

# Everything below stays exactly the same
resp = client.chat.completions.create(
    model='gpt-4o',
    messages=[{'role': 'user', 'content': 'Hello!'}],
)
```
Replace `sk-quilr-xxx` with the API key you created in the dashboard.
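Because the gateway speaks the OpenAI wire format, any HTTP client works, not just the official SDK. A minimal stdlib sketch is below; the `/chat/completions` path is assumed from the OpenAI-compatible convention, so confirm the exact path in the Integration Guide before relying on it:

```python
import json
import urllib.request

GATEWAY_URL = 'https://guardrails.quilr.ai/openai_compatible/'

def build_chat_request(api_key, model, messages):
    """Build a plain-HTTP request mirroring the OpenAI chat API.

    The '/chat/completions' suffix follows the OpenAI-compatible
    convention; it is an assumption, not confirmed gateway behavior.
    """
    url = GATEWAY_URL.rstrip('/') + '/chat/completions'
    body = json.dumps({'model': model, 'messages': messages}).encode()
    return urllib.request.Request(
        url,
        data=body,
        headers={
            'Authorization': 'Bearer ' + api_key,  # your QuilrAI proxy key
            'Content-Type': 'application/json',
        },
    )

# Sending it is one call (needs a valid key and network access):
# resp = urllib.request.urlopen(build_chat_request(
#     'sk-quilr-xxx', 'gpt-4o',
#     [{'role': 'user', 'content': 'Hello!'}]))
```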
3. Configure Your Key
Sensible defaults are selected automatically; adjust them when creating the key, or edit them at any time afterward.
| Setting | Description |
|---|---|
| Security Guardrails | PII/PHI/PCI detection, adversarial blocking |
| Rate Limits | Requests per min/hr/day, token budgets |
| Request Routing | Multi-provider load balancing and failover |
| Token Saving | JSON compression, HTML/Markdown to text |
| Prompt Store | Centralized system prompts |
| Identity Aware | Per-user authentication and tracking |
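If a key's rate limit is exceeded, the gateway will reject further requests; OpenAI-compatible endpoints conventionally signal this with HTTP 429. A client-side sketch of exponential backoff follows. The 429 behavior and the `RateLimited` exception are assumptions for illustration, not confirmed gateway semantics:

```python
import time

class RateLimited(Exception):
    """Raised when a request is rejected for exceeding a rate limit.

    In practice you would raise this when the gateway returns HTTP 429
    (assumed from the OpenAI-compatible convention).
    """

def with_backoff(send, max_retries=4, base_delay=1.0, sleep=time.sleep):
    """Retry send() with exponential backoff on rate-limit errors."""
    for attempt in range(max_retries):
        try:
            return send()
        except RateLimited:
            if attempt == max_retries - 1:
                raise  # budget exhausted; surface the error
            sleep(base_delay * (2 ** attempt))  # wait 1s, 2s, 4s, ...
```

Injecting `sleep` keeps the helper testable; production code can leave the default.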
4. Monitor Requests
Every request through the gateway is logged with cost, latency, token counts, and guardrail actions. Check your Logs tab to verify requests are flowing through.
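To sanity-check the dashboard numbers, you can also record latency client-side and read token counts from the response. The `usage` object (`prompt_tokens`, `completion_tokens`, `total_tokens`) is part of the standard OpenAI response format; this is a sketch under that assumption:

```python
import time

def timed_completion(client, **kwargs):
    """Call the chat API and return (response, wall-clock latency in seconds).

    Token counts come back in the OpenAI-format `usage` object, which
    you can compare against the gateway's Logs tab.
    """
    start = time.perf_counter()
    resp = client.chat.completions.create(**kwargs)
    return resp, time.perf_counter() - start
```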
Next step: See the Integration Guide for full code examples with cURL, JavaScript, region options, and more.