LLM Gateway
Route requests to OpenAI, Anthropic, Google Gemini, and other LLM providers through a unified OpenAI-compatible API running on Kubernetes.
What you’ll build
In this tutorial, you will:
- Set up a local Kubernetes cluster using kind
- Install the agentgateway control plane with Helm
- Create a Gateway and configure an LLM provider backend
- Route requests to your LLM provider through the agentgateway proxy
- Test the setup with curl
Before you begin
Make sure you have the following tools installed on your machine.
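If you are not sure which of these are already installed, a quick loop over `command -v` reports each one (a convenience sketch, not a required step of this tutorial):

```shell
# check_tools: report which prerequisite CLIs are on the PATH.
check_tools() {
  for tool in docker kubectl kind helm; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: found"
    else
      echo "$tool: missing"
    fi
  done
}
check_tools
```

Any tool reported as missing can be installed by following the matching section below.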
Docker
kind runs Kubernetes inside Docker containers. Install Docker Desktop or Docker Engine for your operating system.
# Install Docker Desktop for macOS
# Download from https://www.docker.com/products/docker-desktop/
# Or via Homebrew:
brew install --cask docker
Start Docker Desktop and verify it’s running:
docker version
# Install Docker Engine
curl -fsSL https://get.docker.com | sh
sudo systemctl start docker
sudo systemctl enable docker
Verify Docker is running:
docker version
kubectl
# macOS
brew install kubectl
# Linux
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
Verify kubectl is installed:
kubectl version --client
kind
# macOS
brew install kind
# Linux
curl -Lo ./kind https://kind.sigs.k8s.io/dl/latest/kind-linux-amd64
chmod +x ./kind && sudo mv ./kind /usr/local/bin/kind
Verify kind is installed:
kind version
Helm
# macOS
brew install helm
# Linux
curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
Verify Helm is installed:
helm version
Step 1: Create a kind cluster
Create a local Kubernetes cluster with kind. This cluster is where you will install agentgateway.
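This tutorial uses the default single-node cluster, and Step 7 reaches the proxy with port-forwarding. If you later prefer a fixed host port instead, kind supports a cluster config with extraPortMappings (an optional sketch; the file name `kind-config.yaml` is a choice made here, and you would additionally need to expose the proxy on the mapped node port):

```yaml
# kind-config.yaml (optional): forward host port 8080 to node port 30080
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30080   # node port a NodePort service would expose
    hostPort: 8080         # port on your machine
```

You would pass this file with `kind create cluster --name agentgateway --config kind-config.yaml`. The rest of the tutorial does not depend on it.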
kind create cluster --name agentgateway
Example output:
Creating cluster "agentgateway" ...
✓ Ensuring node image (kindest/node:v1.32.0) 🖼
✓ Preparing nodes 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
Set kubectl context to "kind-agentgateway"
Verify the cluster is running:
kubectl cluster-info --context kind-agentgateway
kubectl get nodes
Step 2: Install the Kubernetes Gateway API CRDs
Install the custom resource definitions (CRDs) for the Kubernetes Gateway API.
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml
Example output:
customresourcedefinition.apiextensions.k8s.io/gatewayclasses.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/gateways.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/httproutes.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/referencegrants.gateway.networking.k8s.io created
customresourcedefinition.apiextensions.k8s.io/grpcroutes.gateway.networking.k8s.io created
Step 3: Install agentgateway CRDs
Deploy the agentgateway CRDs using Helm. This creates the agentgateway-system namespace and installs the custom resource definitions.
helm upgrade -i --create-namespace \
  --namespace agentgateway-system \
  --version v2.2.1 agentgateway-crds oci://ghcr.io/kgateway-dev/charts/agentgateway-crds
Step 4: Install the agentgateway control plane
Install the agentgateway control plane with Helm.
helm upgrade -i -n agentgateway-system agentgateway oci://ghcr.io/kgateway-dev/charts/agentgateway \
  --version v2.2.1
Verify that the control plane is running:
kubectl get pods -n agentgateway-system
Example output:
NAME READY STATUS RESTARTS AGE
agentgateway-78658959cd-cz6jt 1/1 Running 0 12s
Verify that the GatewayClass was created:
kubectl get gatewayclass agentgateway
Step 5: Create a Gateway
Create a Gateway resource that sets up the agentgateway proxy with an HTTP listener.
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agentgateway-proxy
  namespace: agentgateway-system
spec:
  gatewayClassName: agentgateway
  listeners:
  - protocol: HTTP
    port: 80
    name: http
    allowedRoutes:
      namespaces:
        from: All
EOF
Wait for the Gateway and its proxy deployment to become ready:
kubectl get gateway agentgateway-proxy -n agentgateway-system
kubectl get deployment agentgateway-proxy -n agentgateway-system
Example output:
NAME CLASS ADDRESS PROGRAMMED AGE
agentgateway-proxy agentgateway True 30s
NAME READY UP-TO-DATE AVAILABLE AGE
agentgateway-proxy 1/1 1 1 32s
Step 6: Choose your LLM provider
Choose one of the following providers: OpenAI, Anthropic, or Google Gemini. For the provider you choose, set your API key, create a Kubernetes secret, and configure the LLM backend and its HTTPRoute.
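A note on the secrets you are about to create: `stringData` accepts the plain-text key, and the Kubernetes API server stores it base64-encoded under `data`. The round-trip looks like this (illustrative, with a placeholder key):

```shell
# Kubernetes base64-encodes stringData values; decoding recovers the original.
api_key="sk-example-placeholder"   # placeholder, not a real key
encoded=$(printf '%s' "$api_key" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
```

This is also how you can double-check a stored key later, for example with `kubectl get secret openai-secret -n agentgateway-system -o jsonpath='{.data.Authorization}' | base64 --decode`.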
Set your API key
export OPENAI_API_KEY=<insert your API key>
Create the Kubernetes secret
kubectl apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: openai-secret
  namespace: agentgateway-system
type: Opaque
stringData:
  Authorization: $OPENAI_API_KEY
EOF
Create the LLM backend
kubectl apply -f- <<EOF
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  ai:
    provider:
      openai:
        model: gpt-4.1-nano
  policies:
    auth:
      secretRef:
        name: openai-secret
EOF
Create the HTTPRoute
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  parentRefs:
  - name: agentgateway-proxy
    namespace: agentgateway-system
  rules:
  - backendRefs:
    - name: openai
      namespace: agentgateway-system
      group: agentgateway.dev
      kind: AgentgatewayBackend
EOF
Set your API key
export ANTHROPIC_API_KEY=<insert your API key>
Create the Kubernetes secret
kubectl apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: anthropic-secret
  namespace: agentgateway-system
type: Opaque
stringData:
  Authorization: $ANTHROPIC_API_KEY
EOF
Create the LLM backend
kubectl apply -f- <<EOF
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: anthropic
  namespace: agentgateway-system
spec:
  ai:
    provider:
      anthropic:
        model: claude-haiku-4-5-20251001
  policies:
    auth:
      secretRef:
        name: anthropic-secret
EOF
Create the HTTPRoute
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: anthropic
  namespace: agentgateway-system
spec:
  parentRefs:
  - name: agentgateway-proxy
    namespace: agentgateway-system
  rules:
  - backendRefs:
    - name: anthropic
      namespace: agentgateway-system
      group: agentgateway.dev
      kind: AgentgatewayBackend
EOF
Set your API key
export GEMINI_API_KEY=<insert your API key>
Create the Kubernetes secret
kubectl apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: gemini-secret
  namespace: agentgateway-system
type: Opaque
stringData:
  Authorization: $GEMINI_API_KEY
EOF
Create the LLM backend
kubectl apply -f- <<EOF
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: gemini
  namespace: agentgateway-system
spec:
  ai:
    provider:
      gemini:
        model: gemini-2.0-flash
  policies:
    auth:
      secretRef:
        name: gemini-secret
EOF
Create the HTTPRoute
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: gemini
  namespace: agentgateway-system
spec:
  parentRefs:
  - name: agentgateway-proxy
    namespace: agentgateway-system
  rules:
  - backendRefs:
    - name: gemini
      namespace: agentgateway-system
      group: agentgateway.dev
      kind: AgentgatewayBackend
EOF
Step 7: Test the API
Set up port-forwarding to access the agentgateway proxy from your local machine:
kubectl port-forward deployment/agentgateway-proxy -n agentgateway-system 8080:80 &
Send a request to the LLM provider through agentgateway:
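All three providers sit behind the same OpenAI-compatible endpoint, so only the `model` field in the request body changes. A small helper that builds the JSON payload makes that explicit (an illustrative sketch; `build_payload` is not part of agentgateway):

```shell
# build_payload MODEL PROMPT: print an OpenAI-compatible chat completion body.
build_payload() {
  printf '{"model":"%s","messages":[{"role":"user","content":"%s"}]}\n' "$1" "$2"
}

# The same helper works for whichever backend you configured:
build_payload "gpt-4.1-nano" "Hello! What is Kubernetes in one sentence?"
```

You could then pipe the result straight to the gateway, for example `build_payload gpt-4.1-nano "Hello" | curl localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d @-`. The curl commands below spell out the same requests inline.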
curl "localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "Hello! What is Kubernetes in one sentence?"}]
  }' | jq
curl "localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-haiku-4-5-20251001",
    "messages": [{"role": "user", "content": "Hello! What is Kubernetes in one sentence?"}]
  }' | jq
curl "localhost:8080/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemini-2.0-flash",
    "messages": [{"role": "user", "content": "Hello! What is Kubernetes in one sentence?"}]
  }' | jq
Example output:
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications."
    },
    "index": 0,
    "finish_reason": "stop"
  }]
}
Cleanup
When you’re done, stop port-forwarding and delete the kind cluster:
# Stop port-forward (if running in background)
kill %1 2>/dev/null
# Delete the kind cluster
kind delete cluster --name agentgateway