AI Prompt Guard

Configure agentgateway to inspect and filter LLM requests, blocking sensitive data like PII before it reaches AI models.

What you’ll build

In this tutorial, you will:

  1. Set up a local Kubernetes cluster with agentgateway and an LLM backend
  2. Configure prompt guard policies to block sensitive data
  3. Test that requests containing SSNs and emails are rejected
  4. Learn about built-in and custom regex patterns

Before you begin

Make sure you have the following tools installed:

  • kind
  • kubectl
  • helm
  • curl and jq (used for the test requests)

For detailed installation instructions, see the LLM Gateway tutorial.


Step 1: Create a kind cluster

kind create cluster --name agentgateway

Step 2: Install agentgateway

# Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml

# agentgateway CRDs
helm upgrade -i --create-namespace \
  --namespace agentgateway-system \
  --version v2.2.1 agentgateway-crds oci://ghcr.io/kgateway-dev/charts/agentgateway-crds

# Control plane
helm upgrade -i -n agentgateway-system agentgateway oci://ghcr.io/kgateway-dev/charts/agentgateway \
  --version v2.2.1

Step 3: Create a Gateway

kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agentgateway-proxy
  namespace: agentgateway-system
spec:
  gatewayClassName: agentgateway
  listeners:
  - protocol: HTTP
    port: 80
    name: http
    allowedRoutes:
      namespaces:
        from: All
EOF

Wait for the proxy deployment to become ready:

kubectl rollout status deployment/agentgateway-proxy -n agentgateway-system

Step 4: Set up an LLM backend

export OPENAI_API_KEY=<insert your API key>

kubectl apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: openai-secret
  namespace: agentgateway-system
type: Opaque
stringData:
  Authorization: $OPENAI_API_KEY
---
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  ai:
    provider:
      openai:
        model: gpt-4.1-nano
  policies:
    auth:
      secretRef:
        name: openai-secret
    ai:
      promptGuard:
        request:
        - regex:
            action: Reject
            matches:
            - "SSN"
            - "Social Security"
          response:
            message: "Request rejected: Contains sensitive information"
        - regex:
            action: Reject
            builtins:
            - Email
          response:
            message: "Request rejected: Contains email address"
EOF

This backend configures:

  • An OpenAI LLM provider
  • A prompt guard that rejects requests containing SSN references or email addresses

Step 5: Create the HTTPRoute

kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  parentRefs:
    - name: agentgateway-proxy
      namespace: agentgateway-system
  rules:
    - backendRefs:
      - name: openai
        namespace: agentgateway-system
        group: agentgateway.dev
        kind: AgentgatewayBackend
EOF

Step 6: Test the prompt guard

Set up port-forwarding:

kubectl port-forward deployment/agentgateway-proxy -n agentgateway-system 8080:80 &

Test a normal request (should succeed)

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }' | jq

You should receive a normal response from OpenAI.

Test with SSN mention (should be blocked)

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "My SSN is 123-45-6789"}]
  }'

Expected response: the request is rejected before reaching the LLM, with the configured message "Request rejected: Contains sensitive information".

Test with an email address (should be blocked)

curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "Contact me at [email protected]"}]
  }'

Expected response: the request is rejected before reaching the LLM, with the configured message "Request rejected: Contains email address".


Built-in patterns

Agentgateway includes built-in patterns for common PII types:

Pattern      Description
Email        Email addresses
Phone        Phone numbers
SSN          Social Security Numbers
CreditCard   Credit card numbers
IPAddress    IP addresses

Response filtering

You can also mask sensitive data in LLM responses:

ai:
  promptGuard:
    response:
    - regex:
        action: Mask
        builtins:
        - CreditCard
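With action: Mask, matches are redacted in the response body instead of the request being rejected. A sketch of what that masking step does, assuming asterisk redaction (the replacement style and the credit-card regex are illustrative; agentgateway's actual redaction format may differ):

```python
import re

# Illustrative credit-card pattern; not agentgateway's internal regex.
CREDIT_CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def mask_response(text):
    """Replace each credit-card match with asterisks of the same length."""
    return CREDIT_CARD_RE.sub(lambda m: "*" * len(m.group()), text)

print(mask_response("Your card 4111 1111 1111 1111 is on file."))
```

Masking preserves the rest of the LLM's answer, so clients still get a useful response with only the sensitive span redacted.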

Cleanup

kill %1 2>/dev/null
kind delete cluster --name agentgateway
