AI Prompt Guard
Configure agentgateway to inspect and filter LLM requests, blocking sensitive data like PII before it reaches AI models.
What you’ll build
In this tutorial, you will:
- Set up a local Kubernetes cluster with agentgateway and an LLM backend
- Configure prompt guard policies to block sensitive data
- Test that requests containing SSNs and emails are rejected
- Learn about built-in and custom regex patterns
Before you begin
Make sure you have the following tools installed:
- Docker
- kubectl
- kind
- Helm
- An OpenAI API key (get one at platform.openai.com)
For detailed installation instructions, see the LLM Gateway tutorial.
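To quickly confirm the prerequisites are on your PATH, you can run a small shell check (a convenience sketch; the tool names are the standard binaries):

```shell
# Check that each required CLI tool is installed
for tool in docker kubectl kind helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

Any tool reported as MISSING should be installed before continuing.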
Step 1: Create a kind cluster

```shell
kind create cluster --name agentgateway
```

Step 2: Install agentgateway
```shell
# Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml

# agentgateway CRDs
helm upgrade -i --create-namespace \
  --namespace agentgateway-system \
  --version v2.2.1 agentgateway-crds oci://ghcr.io/kgateway-dev/charts/agentgateway-crds

# Control plane
helm upgrade -i -n agentgateway-system agentgateway oci://ghcr.io/kgateway-dev/charts/agentgateway \
  --version v2.2.1
```

Step 3: Create a Gateway
```shell
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agentgateway-proxy
  namespace: agentgateway-system
spec:
  gatewayClassName: agentgateway
  listeners:
  - protocol: HTTP
    port: 80
    name: http
    allowedRoutes:
      namespaces:
        from: All
EOF
```

Wait for the proxy to become available:

```shell
kubectl get deployment agentgateway-proxy -n agentgateway-system
```

Step 4: Set up an LLM backend
```shell
export OPENAI_API_KEY=<insert your API key>

kubectl apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: openai-secret
  namespace: agentgateway-system
type: Opaque
stringData:
  Authorization: $OPENAI_API_KEY
---
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  ai:
    provider:
      openai:
        model: gpt-4.1-nano
  policies:
    auth:
      secretRef:
        name: openai-secret
    ai:
      promptGuard:
        request:
        - regex:
            action: Reject
            matches:
            - "SSN"
            - "Social Security"
          response:
            message: "Request rejected: Contains sensitive information"
        - regex:
            action: Reject
            builtins:
            - Email
          response:
            message: "Request rejected: Contains email address"
EOF
```

This backend configures:
- An OpenAI LLM provider
- A prompt guard that rejects requests containing SSN references or email addresses
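The literal matches above only catch the words "SSN" and "Social Security", not actual numbers. Assuming the `matches` entries are interpreted as regular expressions (as the `regex` block name suggests), a pattern for the numeric format is a sketch worth considering; the pattern below is illustrative, not taken from the agentgateway documentation:

```yaml
promptGuard:
  request:
  - regex:
      action: Reject
      matches:
      - "[0-9]{3}-[0-9]{2}-[0-9]{4}"  # hypothetical pattern for the NNN-NN-NNNN SSN format
    response:
      message: "Request rejected: Contains sensitive information"
```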
Step 5: Create the HTTPRoute
```shell
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: openai
  namespace: agentgateway-system
spec:
  parentRefs:
  - name: agentgateway-proxy
    namespace: agentgateway-system
  rules:
  - backendRefs:
    - name: openai
      namespace: agentgateway-system
      group: agentgateway.dev
      kind: AgentgatewayBackend
EOF
```

Step 6: Test the prompt guard
Set up port-forwarding:
```shell
kubectl port-forward deployment/agentgateway-proxy -n agentgateway-system 8080:80 &
```

Test a normal request (should succeed):

```shell
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "Hello, how are you?"}]
  }' | jq
```

You should receive a normal response from OpenAI.
Test with an SSN mention (should be blocked):

```shell
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "My SSN is 123-45-6789"}]
  }'
```

Expected response: the request is rejected before reaching the LLM, with the configured message "Request rejected: Contains sensitive information".
Test with an email address (should be blocked):

```shell
curl -s http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4.1-nano",
    "messages": [{"role": "user", "content": "Contact me at [email protected]"}]
  }'
```

Expected response: the request is rejected before reaching the LLM, with the configured message "Request rejected: Contains email address".
Built-in patterns
Agentgateway includes built-in patterns for common PII types:
| Pattern | Description |
|---|---|
| Email | Email addresses |
| Phone | Phone numbers |
| SSN | Social Security numbers |
| CreditCard | Credit card numbers |
| IPAddress | IP addresses |
Response filtering
You can also mask sensitive data in LLM responses:

```yaml
ai:
  promptGuard:
    response:
    - regex:
        action: Mask
        builtins:
        - CreditCard
```
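Conceptually, Mask differs from Reject in that the message still goes through, with the matched text redacted in place. A rough sketch of that behavior using sed (the gateway's actual masking characters and format may differ):

```shell
# Redact a credit-card-shaped number the way a Mask action conceptually would
echo "Card on file: 4111-1111-1111-1111" \
  | sed -E 's/[0-9]{4}(-[0-9]{4}){3}/****-****-****-****/'
# -> Card on file: ****-****-****-****
```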
Cleanup

```shell
# Stop the port-forward and delete the cluster
kill %1 2>/dev/null
kind delete cluster --name agentgateway
```