Content safety and PII protection
Protect LLM requests and responses from sensitive data exposure and harmful content using layered content safety controls.
About
Content safety (sometimes called PII detection, PII sanitization, or data loss prevention) helps you prevent sensitive information from reaching LLM providers and block harmful content in both requests and responses. Agentgateway provides a layered approach to content safety through prompt guards that can reject, mask, or moderate content before it reaches the LLM or is returned to users.
You can layer multiple protection mechanisms to create comprehensive content safety:
- Regex-based detection: Fast, deterministic matching for known patterns like credit cards, SSNs, emails, and custom patterns
- External moderation: Leverage cloud provider guardrails for advanced content filtering
- Custom webhooks: Integrate your own content safety logic for specialized requirements
This guide shows you how to use each layer and combine them for defense-in-depth content protection.
Before you begin
Complete the LLM gateway tutorial to set up agentgateway with an LLM provider.
How content safety works
Agentgateway processes content safety checks in the request and response paths. You can configure multiple prompt guards that run in sequence, allowing you to combine different detection methods.
sequenceDiagram
    participant Client
    participant Gateway as Agentgateway
    participant Guard as Content Safety Layer
    participant LLM
    Client->>Gateway: Send prompt
    Gateway->>Guard: 1. Regex check (fast)
    Guard-->>Gateway: Pass/Reject/Mask
    alt Passed Regex
        Gateway->>Guard: 2. External moderation (if configured)
        Guard-->>Gateway: Pass/Reject/Mask
        alt Passed Moderation
            Gateway->>Guard: 3. Custom webhook (if configured)
            Guard-->>Gateway: Pass/Reject/Mask
            alt Passed All Guards
                Gateway->>LLM: Forward sanitized request
                LLM-->>Gateway: Generate response
                Gateway->>Guard: Response guards
                Guard-->>Gateway: Pass/Reject/Mask
                Gateway-->>Client: Return sanitized response
            end
        end
    else Rejected
        Gateway-->>Client: Return rejection message
    end
The diagram shows content flowing through multiple guard layers. Each layer can:
- Pass: Allow content to proceed to the next layer
- Reject: Block the request and return an error message
- Mask: Replace sensitive patterns with placeholders and continue
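The layered decision flow above can be sketched in a few lines of Python. This illustrates only the layering semantics; the names and types here are hypothetical, not agentgateway's internal API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Verdict:
    action: str                    # "pass", "reject", or "mask"
    text: str                      # content, possibly rewritten by masking
    message: Optional[str] = None  # rejection message, if rejected

Guard = Callable[[str], Verdict]

def run_guards(text: str, guards: list[Guard]) -> Verdict:
    """Run guards in order: stop on reject, carry masked text forward."""
    for guard in guards:
        verdict = guard(text)
        if verdict.action == "reject":
            return verdict         # short-circuit: later layers never run
        text = verdict.text        # "pass" keeps text, "mask" rewrites it
    return Verdict("pass", text)

# Toy guard: mask the word "secret", never reject.
mask_secret: Guard = lambda t: Verdict(
    "mask" if "secret" in t else "pass", t.replace("secret", "<REDACTED>"))

print(run_guards("a secret plan", [mask_secret]).text)
# a <REDACTED> plan
```

Note that a reject short-circuits the chain, while a mask rewrites the content that later guards (and eventually the LLM) see.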
Layer 1: Regex-based detection
Regex-based prompt guards provide fast, deterministic pattern matching for known sensitive data formats. Use this layer for common PII patterns and custom organization-specific strings.
Built-in patterns
Agentgateway includes built-in regex patterns for common sensitive data types:
- CreditCard: Credit card numbers (Visa, MasterCard, Amex, Discover)
- Ssn: US Social Security Numbers
- Email: Email addresses
- PhoneNumber: US phone numbers
- CaSin: Canadian Social Insurance Numbers
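For intuition, masking with a builtin behaves roughly like the following Python sketch. The pattern is deliberately simplified; agentgateway's real creditCard builtin is stricter about issuer prefixes, separators, and lengths:

```python
import re

# Simplified illustration of a credit card pattern; the real builtin
# validates issuer prefixes and allowed separators.
CREDIT_CARD = re.compile(r"\b\d{13,16}\b")

def mask_credit_cards(text: str) -> str:
    """Replace card-number-like digit runs with a placeholder."""
    return CREDIT_CARD.sub("<CREDIT_CARD>", text)

print(mask_credit_cards("What type of number is 5105105105105100?"))
# What type of number is <CREDIT_CARD>?
```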
Example configuration that masks credit cards in responses:
binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            response:
            - regex:
                builtins:
                - creditCard
                - ssn
                - email
                action: mask
In configuration, the builtin names are camelCased: creditCard, ssn, email, phoneNumber, caSin.
Custom patterns
You can also define custom regex patterns for organization-specific sensitive data.
Example that rejects requests containing specific restricted terms:
binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            request:
            - response:
                message: "Request blocked due to policy violation"
              regex:
                action: reject
                rules:
                - pattern: "confidential"
                - pattern: "internal-only"
                - pattern: "project-\\w+-secret" # Custom pattern with regex
Test regex guards
Send a request with a fake credit card number and verify it gets masked in the response:
curl http://localhost:3000/v1/chat/completions \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What type of number is 5105105105105100?"
      }
    ]
  }' | jq
Example output showing the credit card masked as <CREDIT_CARD>:
{
  "choices": [
    {
      "message": {
        "content": "<CREDIT_CARD> is an even number."
      }
    }
  ]
}
Layer 2: External moderation endpoints
External moderation endpoints use cloud provider AI services to detect harmful content, hate speech, violence, and other policy violations. These services often use ML models trained specifically for content moderation.
OpenAI Moderation
The OpenAI Moderation API detects potentially harmful content across categories including hate, harassment, self-harm, sexual content, and violence.
Configure the prompt guard to use OpenAI Moderation:
binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-4o-mini
      policies:
        ai:
          promptGuard:
            request:
            - openaiModeration:
                auth:
                  key:
                    file: $HOME/.secrets/openai
                model: omni-moderation-latest
                response:
                  message: "Content blocked by moderation policy"
Test with content that triggers moderation:
curl -i http://localhost:3000/v1/chat/completions \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      {
        "role": "user",
        "content": "I want to harm myself"
      }
    ]
  }'
Expected response:
HTTP/1.1 403 Forbidden

Content blocked by moderation policy
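Clients calling through the gateway should expect such 403 rejections and handle them explicitly. A minimal sketch, assuming the gateway address and model from the examples above; the error handling here is illustrative:

```python
import json
import urllib.error
import urllib.request

GATEWAY = "http://localhost:3000/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request to the gateway."""
    payload = {"model": model, "messages": [{"role": "user", "content": prompt}]}
    return urllib.request.Request(
        GATEWAY,
        data=json.dumps(payload).encode(),
        headers={"content-type": "application/json"},
    )

def chat(prompt: str) -> str:
    """Send a prompt; surface 403 guard rejections instead of raising."""
    try:
        with urllib.request.urlopen(build_request(prompt)) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
    except urllib.error.HTTPError as err:
        if err.code == 403:  # blocked by a prompt guard
            return f"[blocked] {err.read().decode().strip()}"
        raise
```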
AWS Bedrock Guardrails
AWS Bedrock Guardrails provide content filtering, PII detection, topic restrictions, and word filters. You must first create a guardrail in the AWS Bedrock console.
Get your guardrail identifier and version:
aws bedrock list-guardrails
Configure the prompt guard:
binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-4o-mini
      policies:
        ai:
          promptGuard:
            request:
            - bedrockGuardrails:
                guardrailIdentifier: your-guardrail-id
                guardrailVersion: "1" # or "DRAFT"
                region: us-west-2
                policies:
                  backendAuth:
                    aws: {}
            response:
            - bedrockGuardrails:
                guardrailIdentifier: your-guardrail-id
                guardrailVersion: "1"
                region: us-west-2
                policies:
                  backendAuth:
                    aws: {}
The aws: {} configuration uses the default AWS credential chain (IAM role, environment variables, or instance profile).
Layer 3: Custom webhook integration
For advanced content safety requirements beyond regex and cloud provider services, you can integrate custom webhook servers. This allows you to use specialized ML models, proprietary detection logic, or integrate with existing security tools.
Use cases for custom webhooks
- Named Entity Recognition (NER) for detecting person names, organizations, locations
- Industry-specific compliance rules (HIPAA, PCI-DSS, GDPR)
- Integration with existing DLP or security tools
- Custom ML models for domain-specific content detection
- Multi-step validation workflows
- Advanced contextual analysis
Webhook configuration
Configure a prompt guard to call your webhook service:
binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            request:
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000
            response:
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000
For details on the webhook protocol and implementing custom webhook servers, see the Guardrail Webhook API documentation.
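To make the shape of such a service concrete, here is a toy webhook that masks a hard-coded set of names (standing in for real NER). The request and response schema below is assumed for illustration only; the actual contract is defined by the Guardrail Webhook API, so adapt the handler accordingly:

```python
import json
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy "NER": a fixed first-name list plus a capitalized surname.
# A real service would call an NER model here.
PERSON_NAME = re.compile(r"\b(?:Alice|Bob) [A-Z][a-z]+\b")

class GuardHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Assumed payload shape: {"content": "..."} in, {"action", "content"} out.
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        text = body.get("content", "")
        masked = PERSON_NAME.sub("<PERSON>", text)
        reply = {"action": "mask" if masked != text else "pass",
                 "content": masked}
        data = json.dumps(reply).encode()
        self.send_response(200)
        self.send_header("content-type", "application/json")
        self.send_header("content-length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

def make_server(port: int = 8000) -> HTTPServer:
    """Bind the webhook server; call .serve_forever() to run it."""
    return HTTPServer(("0.0.0.0", port), GuardHandler)

# make_server().serve_forever()  # uncomment to run standalone
```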
Combining multiple layers
You can configure multiple prompt guards that run in sequence, creating defense-in-depth protection. Guards are evaluated in the order they appear in the configuration.
Example configuration that uses all three layers:
binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            request:
            # Layer 1: Fast regex check for known patterns
            - regex:
                builtins:
                - ssn
                - creditCard
                - email
                action: reject
              response:
                message: "Request contains PII and cannot be processed"
            # Layer 2: OpenAI moderation for harmful content
            - openaiModeration:
                auth:
                  key:
                    file: $HOME/.secrets/openai
                model: omni-moderation-latest
                response:
                  message: "Content blocked by moderation policy"
            # Layer 3: Custom webhook for domain-specific checks
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000
            response:
            # Response guards run in the same order
            - regex:
                builtins:
                - ssn
                - creditCard
                action: mask
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000
Choosing the right approach
Use this table to decide which content safety layer to use for your requirements:
| Requirement | Recommended Approach | Reason |
|---|---|---|
| Detect known PII formats (SSN, credit cards, emails) | Regex with builtins | Fast, deterministic, no external dependencies |
| Block hate speech, violence, harmful content | External moderation (OpenAI, Bedrock) | ML-based detection trained for content safety |
| Organization-specific restricted terms | Regex with custom patterns | Simple pattern matching for known strings |
| Named entity recognition (people, orgs, places) | Custom webhook | Requires NER models not available in built-in options |
| HIPAA, PCI-DSS, or other compliance requirements | Layered approach | Combine regex + external moderation + custom validation |
| Integration with existing DLP tools | Custom webhook | Allows reuse of existing security infrastructure |
| Fastest performance with minimal latency | Regex only | No external API calls |
| Most comprehensive protection | All three layers | Defense-in-depth with multiple detection methods |
Performance considerations
Each content safety layer adds latency to requests. Plan your configuration accordingly:
- Regex guards: < 1ms per check, negligible latency impact
- External moderation: 50-200ms depending on provider and network latency
- Custom webhooks: Varies based on webhook implementation and location
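As a rough worst-case budget, the added request-path latency of a layered setup is the sum of the layers. The webhook figure below is an assumption for a same-region deployment; the others come from the estimates above:

```python
regex_ms = 1          # upper bound for a regex check
moderation_ms = 200   # worst case of the 50-200 ms range
webhook_ms = 50       # assumed same-region webhook round trip

total_added_ms = regex_ms + moderation_ms + webhook_ms
print(total_added_ms)  # 251
```

This is why ordering matters: putting the near-free regex layer first means clearly bad requests are rejected before paying for the slower external calls.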
To optimize performance:
- Use regex for fast, deterministic checks before slower external checks
- Deploy webhook servers in the same region as agentgateway
- Configure appropriate timeouts for external moderation endpoints
- Consider request size limits to avoid processing very large prompts
What’s next
- Observe LLM traffic to track content safety metrics and blocked requests