# CEL reference
The CEL expression context provides the following objects, which you can use in CEL expressions throughout agentgateway configuration.
## apiKey

apiKey contains the claims from a verified API key. This is only present if the API key policy is enabled.

| Field | Type | Default | Description |
|---|---|---|---|
| key | string | | |
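For example, an authorization expression could match on the verified key (the literal value here is illustrative):

```cel
// Allow only requests presenting a specific verified key
apiKey.key == "team-a-key"
```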
## backend

backend contains information about the backend being used.

| Field | Type | Default | Description |
|---|---|---|---|
| name | string | | The name of the backend being used. For example, my-service or service/my-namespace/my-service:8080. |
| protocol | string | http | The protocol of the backend. |
| type | string | unknown | The type of the backend. |
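For example, an expression could branch on which backend was selected, using the name format shown above:

```cel
// Match a specific Kubernetes service backend
backend.name == "service/my-namespace/my-service:8080"
```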
## basicAuth

basicAuth contains the claims from verified basic authentication credentials. This is only present if the basic authentication policy is enabled.

| Field | Type | Default | Description |
|---|---|---|---|
| username | string | | |
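For example, a policy could check the authenticated username (the value is illustrative):

```cel
// Restrict an operation to a specific user
basicAuth.username == "admin"
```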
## env

env contains selected process environment attributes exposed to CEL.
This does NOT expose raw environment variables, but rather a subset of well-known variables.

| Field | Type | Default | Description |
|---|---|---|---|
| gateway | string | | The Gateway we are running as (when running on Kubernetes). |
| namespace | string | | The namespace of the pod (when running on Kubernetes). |
| podName | string | | The name of the pod (when running on Kubernetes). |
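These fields are handy for tagging logs or metadata with the running instance, for example:

```cel
// Build a namespace/pod identifier for log enrichment
env.namespace + "/" + env.podName
```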
## extauthz

extauthz contains dynamic metadata from ext_authz filters.
## extproc

extproc contains dynamic metadata from ext_proc filters.
## jwt

jwt contains the claims from a verified JWT token. This is only present if the JWT policy is enabled.
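For example, an authorization expression could check claims from the verified token (the claim names sub and groups are assumptions; use the claims your issuer actually emits):

```cel
// Allow only a specific subject who is in the admin group
jwt.sub == "user@example.com" && "admin" in jwt.groups
```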
## llm

llm contains attributes about an LLM request or response. This is only present when using an ai backend.

| Field | Type | Default | Description |
|---|---|---|---|
| cacheCreationInputTokens | integer | | Tokens written to the cache (a cost). Not present with OpenAI. |
| cachedInputTokens | integer | | The number of tokens in the input/prompt read from the cache (a savings). |
| completion | array | | The completion from the LLM. Warning: accessing this has some performance impact for large responses. |
| countTokens | integer | | The number of tokens in the request, when using the token counting endpoint. These are not counted as 'input tokens' since they do not consume input tokens. |
| inputAudioTokens | integer | | The number of audio tokens in the input/prompt. |
| inputImageTokens | integer | | The number of image tokens in the input/prompt. |
| inputTextTokens | integer | | The number of text tokens in the input/prompt. Note: this field is only set in multi-modal calls where the total token count is split out by text/image/audio; for standard all-text calls, this is unset. |
| inputTokens | integer | | The number of tokens in the input/prompt. |
| outputAudioTokens | integer | | The number of audio tokens in the output/completion. Note: this field is only set in multi-modal calls where the total token count is split out by text/image/audio; for standard all-text calls, this is unset. |
| outputImageTokens | integer | | The number of image tokens in the output/completion. |
| outputTextTokens | integer | | The number of text tokens in the output/completion. |
| outputTokens | integer | | The number of tokens in the output/completion. |
| params | object | | The parameters for the LLM request. |
| prompt | array | | The prompt sent to the LLM. Warning: accessing this has some performance impact for large prompts. |
| provider | string | | The provider of the LLM. |
| reasoningTokens | integer | | The number of reasoning tokens in the output/completion. |
| requestModel | string | | The model requested for the LLM request. This may differ from the actual model used. |
| responseModel | string | | The model that actually served the LLM response. |
| serviceTier | string | | The service tier the provider served the request under. |
| streaming | boolean | | Whether the LLM response is streamed. |
| totalTokens | integer | | The total number of tokens for the request. |
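For example, a logging or alerting expression could flag expensive calls (the threshold is illustrative):

```cel
// Flag large or streamed completions for review
llm.totalTokens > 100000 || llm.streaming
```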
## llmRequest

llmRequest contains the raw LLM request before processing. This is only present during LLM policies;
policies that run after the LLM policy, such as logs, will not have this field present even for LLM requests.
## mcp

mcp contains attributes about the MCP request.
Request-time CEL only includes identity fields such as tool, prompt, or resource.
Post-request CEL may also include fields like methodName, sessionId, and tool payloads.

| Field | Type | Default | Description |
|---|---|---|---|
| methodName | string | | |
| prompt | object | | |
| resource | object | | |
| sessionId | string | | |
| tool | object | | |
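For example, a request-time policy could match on the tool identity (the name field on the tool object is an assumption; check the actual payload shape for your deployment):

```cel
// Allow calls only to a specific tool
has(mcp.tool) && mcp.tool.name == "search"
```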
## metadata

metadata contains values set by transformation metadata expressions.
## request

request contains attributes about the incoming HTTP request.

| Field | Type | Default | Description |
|---|---|---|---|
| body | string | | The body of the request. Warning: accessing the body will cause the body to be buffered. |
| endTime | string | | The time the request completed. |
| headers | object | | The headers of the request. |
| host | string | | The hostname of the request. For example, example.com. |
| method | string | GET | The HTTP method of the request. For example, GET. |
| path | string | | The path of the request URI. For example, /path. |
| pathAndQuery | string | / | The path and query of the request URI. For example, /path?foo=bar. |
| scheme | string | | The scheme of the request. For example, https. |
| startTime | string | | The time the request started. |
| uri | string | / | The complete URI of the request. For example, http://example.com/path. |
| version | string | HTTP/1.1 | The version of the request. For example, HTTP/1.1. |
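For example, a routing or authorization condition could combine several of these fields (the path prefix is illustrative):

```cel
// Match mutating requests to an admin path
request.method == "POST" && request.path.startsWith("/admin")
```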
## response

response contains attributes about the HTTP response.

| Field | Type | Default | Description |
|---|---|---|---|
| body | string | | The body of the response. Warning: accessing the body will cause the body to be buffered. |
| code | integer | | The HTTP status code of the response. |
| headers | object | | The headers of the response. |
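For example, a logging filter could select only client errors:

```cel
// Match 4xx responses
response.code >= 400 && response.code < 500
```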
## source

source contains attributes about the source of the request.

| Field | Type | Default | Description |
|---|---|---|---|
| address | string | 0.0.0.0 | The IP address of the downstream connection. |
| identity | object | | The (Istio SPIFFE) identity of the downstream connection, if available. |
| issuer | string | | The issuer from the downstream certificate, if available. |
| port | integer | | The port of the downstream connection. |
| rawAddress | string | 0.0.0.0 | The original TCP peer IP address of the downstream connection. This can differ from the address when using tunneling protocols like PROXY. |
| rawPort | integer | | The original TCP peer port of the downstream connection. This can differ from the port when using tunneling protocols like PROXY. |
| subject | string | | The subject from the downstream certificate, if available. |
| subjectAltNames | array | | The subject alt names from the downstream certificate, if available. |
| subjectCn | string | | The CN of the subject from the downstream certificate, if available. |
| unverifiedWorkload | object | | The workload context of the downstream connection, resolved from the workload discovery store by source IP. Available when the source pod is known to the controller's workload discovery store. Fields are nested under unverified to signal that they are derived from the source IP (not cryptographically authenticated). Policy authors should prefer source.identity.* for trust-sensitive checks. |
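For example, a coarse network check could use a string-prefix match on the peer address (a crude sketch; the prefix is illustrative, and trust-sensitive checks should use source.identity instead, as noted above):

```cel
// Match traffic whose peer address falls in an internal range
source.address.startsWith("10.0.")
```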