
# OpenTelemetry

Integrate agentgateway with OpenTelemetry for distributed tracing and metrics

Agentgateway natively supports OpenTelemetry (OTLP) for distributed tracing and metrics export.

## Configuration

Enable OpenTelemetry tracing in your agentgateway configuration:

```yaml
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
frontendPolicies:
  tracing:
    otlpEndpoint: http://localhost:4317
    randomSampling: true
```

### Configuration options

| Setting | Description |
|---------|-------------|
| `otlpEndpoint` | The OTLP gRPC endpoint (e.g., `http://localhost:4317`) |
| `randomSampling` | Enable random sampling for traces |

## With Jaeger

Run Jaeger with OTLP support:

```shell
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4317:4317 \
  jaegertracing/all-in-one:latest
```

Configure agentgateway:

```yaml
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
frontendPolicies:
  tracing:
    otlpEndpoint: http://localhost:4317
    randomSampling: true

binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - mcp:
          targets:
          - name: my-server
            stdio:
              cmd: npx
              args: ["@modelcontextprotocol/server-everything"]
```
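To try the setup end to end, the configuration above can be saved to a file and passed to agentgateway. The `-f` config-file flag is an assumption here; check `agentgateway --help` for your installed version:

```shell
# Write the combined tracing + MCP configuration from above to config.yaml.
cat > config.yaml <<'EOF'
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
frontendPolicies:
  tracing:
    otlpEndpoint: http://localhost:4317
    randomSampling: true

binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - mcp:
          targets:
          - name: my-server
            stdio:
              cmd: npx
              args: ["@modelcontextprotocol/server-everything"]
EOF

# Start the gateway in the foreground (skipped if the binary is not installed).
command -v agentgateway >/dev/null \
  && agentgateway -f config.yaml \
  || echo "agentgateway not found on PATH"
```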

View traces in the Jaeger UI at http://localhost:16686.
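As a quick sanity check (assuming Jaeger's HTTP query API on its default port 16686), you can list the services that have reported traces; agentgateway should appear once it has handled at least one request:

```shell
# Query Jaeger's service list; falls back to a message if Jaeger is not up.
curl -s http://localhost:16686/api/services || echo "Jaeger is not reachable"
```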

## With OpenTelemetry Collector

For production deployments, route traces through the OpenTelemetry Collector. Note that recent Collector releases removed the dedicated `jaeger` exporter, so export over OTLP to Jaeger's OTLP port instead:

```yaml
# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

processors:
  batch:

exporters:
  otlp:
    endpoint: jaeger:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]
```
## Trace attributes

Agentgateway includes the following attributes in traces:

- `http.method` - HTTP request method
- `http.url` - Request URL
- `http.status_code` - Response status code
- `mcp.method` - MCP method name (for MCP requests)
- `mcp.session_id` - MCP session ID
- `gen_ai.operation.name` - AI operation type (for LLM requests)
- `gen_ai.request.model` - Requested model
- `gen_ai.usage.input_tokens` - Input token count
- `gen_ai.usage.output_tokens` - Output token count
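For illustration, a single LLM-request span exported by the gateway might carry an attribute set like the following (all values here are invented, not output from a real trace):

```yaml
# Hypothetical attributes on one LLM-request span:
http.method: POST
http.url: http://localhost:3000/v1/chat/completions
http.status_code: 200
gen_ai.operation.name: chat
gen_ai.request.model: gpt-4o
gen_ai.usage.input_tokens: 412
gen_ai.usage.output_tokens: 98
```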

## Learn more

- **Telemetry Tutorial** - Step-by-step telemetry setup
- **LLM Observability** - AI-specific observability
