Content safety and PII protection

Protect LLM requests and responses from sensitive data exposure and harmful content using layered content safety controls.

About

Content safety, which encompasses PII detection, PII sanitization, and data loss prevention (DLP), helps you prevent sensitive information from reaching LLM providers and block harmful content in both requests and responses. Agentgateway provides a layered approach to content safety through prompt guards that can reject, mask, or moderate content before it reaches the LLM or returns to users.

You can layer multiple protection mechanisms to create comprehensive content safety:

  • Regex-based detection: Fast, deterministic matching for known patterns like credit cards, SSNs, emails, and custom patterns
  • External moderation: Leverage cloud provider guardrails for advanced content filtering
  • Custom webhooks: Integrate your own content safety logic for specialized requirements

This guide shows you how to use each layer and combine them for defense-in-depth content protection.

Before you begin

Complete the LLM gateway tutorial to set up agentgateway with an LLM provider.

How content safety works

Agentgateway processes content safety checks in the request and response paths. You can configure multiple prompt guards that run in sequence, allowing you to combine different detection methods.

  sequenceDiagram
    participant Client
    participant Gateway as Agentgateway
    participant Guard as Content Safety Layer
    participant LLM

    Client->>Gateway: Send prompt
    Gateway->>Guard: 1. Regex check (fast)
    Guard-->>Gateway: Pass/Reject/Mask

    alt Passed Regex
        Gateway->>Guard: 2. External moderation (if configured)
        Guard-->>Gateway: Pass/Reject/Mask

        alt Passed Moderation
            Gateway->>Guard: 3. Custom webhook (if configured)
            Guard-->>Gateway: Pass/Reject/Mask

            alt Passed All Guards
                Gateway->>LLM: Forward sanitized request
                LLM-->>Gateway: Generate response
                Gateway->>Guard: Response guards
                Guard-->>Gateway: Pass/Reject/Mask
                Gateway-->>Client: Return sanitized response
            end
        end
    else Rejected
        Gateway-->>Client: Return rejection message
    end

The diagram shows content flowing through multiple guard layers. Each layer can:

  • Pass: Allow content to proceed to the next layer
  • Reject: Block the request and return an error message
  • Mask: Replace sensitive patterns with placeholders and continue
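The pass/reject/mask flow above can be sketched as a simple sequential pipeline. This is an illustrative model only, not agentgateway's implementation; the `Decision` type and the two sample guards are hypothetical.

```python
import re
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical decision type: a guard either passes the (possibly
# masked) text along, or rejects with a message.
@dataclass
class Decision:
    text: Optional[str] = None       # sanitized text when the guard passes
    rejection: Optional[str] = None  # rejection message when it blocks

Guard = Callable[[str], Decision]

def run_guards(text: str, guards: list) -> Decision:
    """Run guards in order; the first rejection short-circuits."""
    for guard in guards:
        decision = guard(text)
        if decision.rejection is not None:
            return decision
        text = decision.text  # masked text feeds the next guard
    return Decision(text=text)

# Example guards: one masks email addresses, one rejects a restricted term.
def mask_email(text: str) -> Decision:
    return Decision(text=re.sub(r"\S+@\S+\.\w+", "<EMAIL>", text))

def reject_confidential(text: str) -> Decision:
    if "confidential" in text:
        return Decision(rejection="Request blocked due to policy violation")
    return Decision(text=text)

print(run_guards("contact bob@example.com", [mask_email, reject_confidential]).text)
# contact <EMAIL>
```

Note how a mask decision does not stop the pipeline: later guards see the already-sanitized text, while a rejection ends processing immediately.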

Layer 1: Regex-based detection

Regex-based prompt guards provide fast, deterministic pattern matching for known sensitive data formats. Use this layer for common PII patterns and custom organization-specific strings.

Built-in patterns

Agentgateway includes built-in regex patterns for common sensitive data types:

  • CreditCard: Credit card numbers (Visa, MasterCard, Amex, Discover)
  • Ssn: US Social Security Numbers
  • Email: Email addresses
  • PhoneNumber: US phone numbers
  • CaSin: Canadian Social Insurance Numbers

Example configuration that masks credit cards in responses:

binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            response:
            - regex:
                builtins:
                - creditCard
                - ssn
                - email
                action: mask
ℹ️
In standalone configuration, builtin names use camelCase: creditCard, ssn, email, phoneNumber, caSin.

Custom patterns

You can also define custom regex patterns for organization-specific sensitive data.

Example that rejects requests containing specific restricted terms:

binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            request:
            - response:
                message: "Request blocked due to policy violation"
              regex:
                action: reject
                rules:
                - pattern: "confidential"
                - pattern: "internal-only"
                - pattern: "project-\\w+-secret"  # Custom pattern with regex
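When writing custom rules, it can help to verify locally that your patterns match what you expect before deploying. The sketch below checks the three patterns from the config above against made-up sample prompts; note that `"project-\\w+-secret"` in YAML double quotes is the regex `project-\w+-secret`.

```python
import re

# The three rules from the config above (YAML escaping removed).
rules = ["confidential", "internal-only", r"project-\w+-secret"]

# Made-up sample prompts: the first three should be blocked.
samples = [
    "this doc is confidential",
    "see the internal-only wiki",
    "the project-apollo-secret key",
    "a perfectly normal prompt",
]

for text in samples:
    blocked = any(re.search(p, text) for p in rules)
    print(f"{'REJECT' if blocked else 'pass  '}  {text}")
```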

Test regex guards

Send a request with a fake credit card number and verify it gets masked in the response:

curl http://localhost:3000/v1/chat/completions \
  -H "content-type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "messages": [
      {
        "role": "user",
        "content": "What type of number is 5105105105105100?"
      }
    ]
  }' | jq

Example output showing the credit card masked as <CREDIT_CARD>:

{
  "choices": [
    {
      "message": {
        "content": "<CREDIT_CARD> is an even number."
      }
    }
  ]
}

Layer 2: External moderation endpoints

External moderation endpoints use cloud provider AI services to detect harmful content, hate speech, violence, and other policy violations. These services often use ML models trained specifically for content moderation.

OpenAI Moderation

The OpenAI Moderation API detects potentially harmful content across categories including hate, harassment, self-harm, sexual content, and violence.

  1. Configure the prompt guard to use OpenAI Moderation:

    binds:
    - port: 3000
      listeners:
      - routes:
        - backends:
          - ai:
              name: openai
              provider:
                openAI:
                  model: gpt-4o-mini
          policies:
            ai:
              promptGuard:
                request:
                - openaiModeration:
                    auth:
                      key:
                        file: $HOME/.secrets/openai
                    model: omni-moderation-latest
                  response:
                    message: "Content blocked by moderation policy"
  2. Test with content that triggers moderation:

    curl -i http://localhost:3000/v1/chat/completions \
      -H "content-type: application/json" \
      -d '{
        "model": "gpt-4o-mini",
        "messages": [
          {
            "role": "user",
            "content": "I want to harm myself"
          }
        ]
      }'

    Expected response:

    HTTP/1.1 403 Forbidden
    Content blocked by moderation policy

AWS Bedrock Guardrails

AWS Bedrock Guardrails provide content filtering, PII detection, topic restrictions, and word filters. You must first create a guardrail in the AWS Bedrock console.

ℹ️
For instructions on creating Bedrock Guardrails, see the AWS Bedrock Guardrails documentation.
  1. Get your guardrail identifier and version:

    aws bedrock list-guardrails
  2. Configure the prompt guard:

    binds:
    - port: 3000
      listeners:
      - routes:
        - backends:
          - ai:
              name: openai
              provider:
                openAI:
                  model: gpt-4o-mini
          policies:
            ai:
              promptGuard:
                request:
                - bedrockGuardrails:
                    guardrailIdentifier: your-guardrail-id
                    guardrailVersion: "1"  # or "DRAFT"
                    region: us-west-2
                    policies:
                      backendAuth:
                        aws: {}
                response:
                - bedrockGuardrails:
                    guardrailIdentifier: your-guardrail-id
                    guardrailVersion: "1"
                    region: us-west-2
                    policies:
                      backendAuth:
                        aws: {}
ℹ️
The aws: {} configuration uses the default AWS credential chain (IAM role, environment variables, or instance profile).

Layer 3: Custom webhook integration

For advanced content safety requirements beyond regex and cloud provider services, you can integrate custom webhook servers. This allows you to use specialized ML models, proprietary detection logic, or integrate with existing security tools.

Use cases for custom webhooks

  • Named Entity Recognition (NER) for detecting person names, organizations, locations
  • Industry-specific compliance rules (HIPAA, PCI-DSS, GDPR)
  • Integration with existing DLP or security tools
  • Custom ML models for domain-specific content detection
  • Multi-step validation workflows
  • Advanced contextual analysis

Webhook configuration

Configure a prompt guard to call your webhook service:

binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            request:
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000
            response:
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000

For details on the webhook protocol and implementing custom webhook servers, see the Guardrail Webhook API documentation.
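A custom webhook server could look like the following minimal sketch. The JSON request and response shapes shown here (`body`, `action`, `message`) are hypothetical placeholders; consult the Guardrail Webhook API documentation for the actual wire protocol, and swap in your own detection logic for the toy checks.

```python
import json
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

# Toy "NER" pattern standing in for a real model: matches "Dr. <Surname>".
NAME = re.compile(r"\bDr\. [A-Z][a-z]+\b")

def check(text: str) -> dict:
    """Decide pass/mask/reject for a piece of content (toy logic)."""
    if "do something harmful" in text:
        return {"action": "reject", "message": "Blocked by custom policy"}
    masked = NAME.sub("<PERSON>", text)
    if masked != text:
        return {"action": "mask", "body": masked}
    return {"action": "pass"}

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        result = check(payload.get("body", ""))
        data = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

# To run the server on port 8000:
#   HTTPServer(("0.0.0.0", 8000), Handler).serve_forever()
```

Deploy a server like this behind the `address` configured above, and keep it close to agentgateway to minimize the latency each webhook call adds.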

Combining multiple layers

You can configure multiple prompt guards that run in sequence, creating defense-in-depth protection. Guards are evaluated in the order they appear in the configuration.

Example configuration that uses all three layers:

binds:
- port: 3000
  listeners:
  - routes:
    - backends:
      - ai:
          name: openai
          provider:
            openAI:
              model: gpt-3.5-turbo
      policies:
        ai:
          promptGuard:
            request:
            # Layer 1: Fast regex check for known patterns
            - regex:
                builtins:
                - ssn
                - creditCard
                - email
                action: reject
                response:
                  message: "Request contains PII and cannot be processed"
            # Layer 2: OpenAI moderation for harmful content
            - openaiModeration:
                auth:
                  key:
                    file: $HOME/.secrets/openai
                model: omni-moderation-latest
              response:
                message: "Content blocked by moderation policy"
            # Layer 3: Custom webhook for domain-specific checks
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000
            response:
            # Response guards run in same order
            - regex:
                builtins:
                - ssn
                - creditCard
                action: mask
            - webhook:
                protocol: http
                address: content-safety-webhook.example.com:8000

Choosing the right approach

Use this table to decide which content safety layer to use for your requirements:

| Requirement | Recommended approach | Reason |
|---|---|---|
| Detect known PII formats (SSN, credit cards, emails) | Regex with builtins | Fast, deterministic, no external dependencies |
| Block hate speech, violence, harmful content | External moderation (OpenAI, Bedrock) | ML-based detection trained for content safety |
| Organization-specific restricted terms | Regex with custom patterns | Simple pattern matching for known strings |
| Named entity recognition (people, orgs, places) | Custom webhook | Requires NER models not available in built-in options |
| HIPAA, PCI-DSS, or other compliance requirements | Layered approach | Combine regex + external moderation + custom validation |
| Integration with existing DLP tools | Custom webhook | Allows reuse of existing security infrastructure |
| Fastest performance with minimal latency | Regex only | No external API calls |
| Most comprehensive protection | All three layers | Defense-in-depth with multiple detection methods |

Performance considerations

Each content safety layer adds latency to requests. Plan your configuration accordingly:

  • Regex guards: < 1ms per check, negligible latency impact
  • External moderation: 50-200ms depending on provider and network latency
  • Custom webhooks: Varies based on webhook implementation and location
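Using the rough per-layer figures above, you can estimate the worst-case overhead of a given guard configuration. The numbers below are illustrative assumptions (the webhook figure in particular depends entirely on your implementation), not measured values.

```python
# Rough per-layer latency estimates, in milliseconds (assumed values).
LATENCY_MS = {
    "regex": 1,                # fast, in-process
    "external_moderation": 200,  # upper end of the 50-200ms range
    "webhook": 50,             # assumed same-region webhook
}

def added_latency(request_layers, response_layers):
    """Worst-case guard latency added, assuming sequential execution."""
    return sum(LATENCY_MS[layer] for layer in request_layers + response_layers)

total = added_latency(
    ["regex", "external_moderation", "webhook"],  # request guards
    ["regex", "webhook"],                         # response guards
)
print(f"~{total} ms of guard overhead per round trip")
# ~302 ms of guard overhead per round trip
```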

To optimize performance:

  • Use regex for fast, deterministic checks before slower external checks
  • Deploy webhook servers in the same region as agentgateway
  • Configure appropriate timeouts for external moderation endpoints
  • Consider request size limits to avoid processing very large prompts
