Prompt guards

Prompt guards are security policies that inspect LLM requests and responses to detect and block harmful, policy-violating, or inappropriate content before it reaches the model or the user. You can apply prompt guards to the request phase, the response phase, or both.

Agentgateway supports the following prompt guard options:

  • Regex filters: Use custom regex patterns or built-in PII detectors to reject requests or mask responses that contain sensitive data such as SSNs, email addresses, or credentials.
  • AWS Bedrock Guardrails: Use AWS-managed guardrail policies to filter content based on topics, words, PII, and other safety criteria.
  • Google Model Armor: Use Google Cloud’s Model Armor service to sanitize user prompts and model responses against configurable safety templates.
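
As a rough illustration of how a regex prompt guard might be expressed, the sketch below applies a reject policy on the request phase and a masking policy on the response phase. This is an assumption-based example only: the field names and structure are illustrative, not the verified agentgateway schema, so check the configuration reference for the exact fields before using it.

```yaml
# Illustrative sketch only -- field and value names are assumptions,
# not the verified agentgateway configuration schema.
promptGuard:
  request:                  # inspected before the prompt reaches the model
    regex:
      action: REJECT        # reject the request if any rule matches
      rules:
        - builtin: CREDIT_CARD
        - pattern: "internal-project-[a-z]+"   # hypothetical custom pattern
  response:                 # inspected before the completion reaches the user
    regex:
      action: MASK          # mask matched content instead of rejecting
      rules:
        - builtin: SSN
        - builtin: EMAIL
```

In this style of policy, rejecting on the request phase stops sensitive data from ever leaving your network, while masking on the response phase lets the model answer but redacts anything the built-in PII detectors flag.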