Telemetry & Observability

Enable distributed tracing for agentgateway on Kubernetes using OpenTelemetry and Jaeger.

What you’ll build

In this tutorial, you will:

  1. Set up a local Kubernetes cluster with agentgateway and an LLM backend
  2. Deploy Jaeger for trace collection and visualization
  3. Configure a TrafficPolicy to enable distributed tracing
  4. Send requests and view traces in the Jaeger UI

Before you begin

Make sure you have the following tools installed:

  • kind, for running a local Kubernetes cluster
  • kubectl
  • helm
  • curl and jq, for sending and inspecting requests

For detailed installation instructions, see the LLM Gateway tutorial.
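If you want a quick sanity check before starting, a small shell loop can report which of the tools used in this tutorial are on your PATH:

```shell
# Report which required tools are installed; prints one line per tool.
for tool in kind kubectl helm curl jq; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```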


Step 1: Create a kind cluster

kind create cluster --name agentgateway

Step 2: Install agentgateway

# Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml

# agentgateway CRDs
helm upgrade -i --create-namespace \
  --namespace agentgateway-system \
  --version v2.2.1 agentgateway-crds oci://ghcr.io/kgateway-dev/charts/agentgateway-crds

# Control plane
helm upgrade -i -n agentgateway-system agentgateway oci://ghcr.io/kgateway-dev/charts/agentgateway \
  --version v2.2.1

Step 3: Create a Gateway

kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agentgateway-proxy
  namespace: agentgateway-system
spec:
  gatewayClassName: agentgateway
  listeners:
  - protocol: HTTP
    port: 80
    name: http
    allowedRoutes:
      namespaces:
        from: All
EOF

Wait for the proxy to be ready:

kubectl rollout status deployment/agentgateway-proxy -n agentgateway-system

Step 4: Deploy Jaeger

Deploy Jaeger as a trace collector and visualization tool in its own namespace.

kubectl create namespace telemetry

kubectl apply -n telemetry -f- <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jaeger
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jaeger
  template:
    metadata:
      labels:
        app: jaeger
    spec:
      containers:
      - name: jaeger
        image: jaegertracing/all-in-one:latest  # all-in-one is for demos; pin a tag for anything longer-lived
        ports:
        - containerPort: 16686
          name: ui
        - containerPort: 4317
          name: otlp-grpc
---
apiVersion: v1
kind: Service
metadata:
  name: jaeger
spec:
  selector:
    app: jaeger
  ports:
  - port: 16686
    targetPort: 16686
    name: ui
  - port: 4317
    targetPort: 4317
    name: otlp-grpc
EOF

Wait for Jaeger to be ready:

kubectl wait --for=condition=ready pod -l app=jaeger -n telemetry --timeout=120s

Step 5: Configure tracing with a TrafficPolicy

Create a TrafficPolicy that sends traces to the Jaeger collector.

kubectl apply -f- <<EOF
apiVersion: agentgateway.dev/v1alpha1
kind: TrafficPolicy
metadata:
  name: tracing
  namespace: agentgateway-system
spec:
  targetRefs:
    - kind: Gateway
      name: agentgateway-proxy
      group: gateway.networking.k8s.io
  frontend:
    tracing:
      backendRef:
        name: jaeger
        namespace: telemetry
        port: 4317
      protocol: GRPC
      randomSampling: "true"
EOF

This policy:

  • Targets the Gateway to apply tracing to all routes
  • Sends traces to Jaeger via OTLP gRPC on port 4317
  • Samples all requests (randomSampling: "true") for development

Step 6: Set up an LLM backend

export OPENAI_API_KEY=<insert your API key>

kubectl apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: openai-secret
  namespace: agentgateway-system
type: Opaque
stringData:
  Authorization: $OPENAI_API_KEY
---
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  ai:
    provider:
      openai:
        model: gpt-4.1-nano
  policies:
    auth:
      secretRef:
        name: openai-secret
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  parentRefs:
    - name: agentgateway-proxy
      namespace: agentgateway-system
  rules:
    - backendRefs:
      - name: openai
        namespace: agentgateway-system
        group: agentgateway.dev
        kind: AgentgatewayBackend
EOF
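A common stumbling block with the Secret above: stringData takes the token in plain text, and Kubernetes base64-encodes it into .data for you. If you ever write the data field directly, this is the equivalent manual encoding (the token below is a placeholder, not a real key):

```shell
# stringData values are stored base64-encoded under .data; Kubernetes
# applies this encoding for you. "sk-example-token" is a placeholder.
printf 'sk-example-token' | base64
# → c2stZXhhbXBsZS10b2tlbg==
```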

Step 7: Generate traces

Set up port-forwarding for the agentgateway proxy:

kubectl port-forward deployment/agentgateway-proxy -n agentgateway-system 8080:80 &

Send a few requests to generate traces:

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "What is OpenTelemetry?"}]
  }' | jq

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "What is distributed tracing?"}]
  }' | jq
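To correlate these requests with spans emitted by your own services, you can attach a W3C traceparent header. The format below is the standard 00-<trace-id>-<parent-span-id>-<flags>; whether the gateway joins the incoming trace depends on its context-propagation settings, so treat this as a sketch:

```shell
# Build a W3C Trace Context traceparent header:
# version "00", a 16-byte trace ID, an 8-byte parent span ID, flags "01" (sampled).
trace_id=$(od -An -tx1 -N16 /dev/urandom | tr -d ' \n')
span_id=$(od -An -tx1 -N8 /dev/urandom | tr -d ' \n')
traceparent="00-${trace_id}-${span_id}-01"
echo "$traceparent"

# Then pass it on a request, e.g.:
#   curl -s http://localhost:8080/v1/chat/completions \
#     -H "traceparent: $traceparent" ...
```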

Step 8: View traces in Jaeger

Set up port-forwarding for the Jaeger UI:

kubectl port-forward -n telemetry svc/jaeger 16686:16686 &

Open the Jaeger UI at http://localhost:16686.

  1. Select agentgateway from the Service dropdown
  2. Click Find Traces
  3. Click on a trace to see the full request flow

You’ll see spans for:

  • The incoming HTTP request
  • LLM provider routing
  • Backend request to OpenAI
  • Response processing

Each span includes details like:

  • Request and response token counts
  • Model information
  • Latency breakdown
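The UI is backed by Jaeger's query API (GET /api/traces?service=agentgateway on port 16686), so you can also script over traces. The snippet below runs against an abbreviated, hand-written sample of the response shape — the field names are Jaeger's, but the span names and durations are made up — and lists each span with its duration converted from microseconds:

```shell
# Abbreviated sample of a Jaeger /api/traces response (durations in microseconds).
cat > /tmp/trace.json <<'EOF'
{"data":[{"traceID":"abc123","spans":[
  {"operationName":"HTTP POST","duration":812000},
  {"operationName":"openai chat completion","duration":790000}
]}]}
EOF

# Print each span name with its duration in milliseconds.
jq -r '.data[].spans[] | "\(.operationName): \(.duration / 1000) ms"' /tmp/trace.json
# → HTTP POST: 812 ms
# → openai chat completion: 790 ms
```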

Production sampling

For production, use ratio-based sampling to reduce overhead:

frontend:
  tracing:
    backendRef:
      name: otel-collector
      namespace: telemetry
      port: 4317
    protocol: GRPC
    randomSampling: "0.1"  # Sample 10% of traces
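As a rough sizing check (the request rate is illustrative, not a measurement), a 0.1 ratio means about one exported trace per ten requests:

```shell
# Expected traces exported per minute at a 10% sampling ratio.
requests_per_min=600
sampled_pct=10    # randomSampling: "0.1" expressed as a percentage
echo $(( requests_per_min * sampled_pct / 100 ))
# → 60
```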

Cleanup

kill %1 %2 2>/dev/null
kind delete cluster --name agentgateway
