# OpenAI

Configuration and setup for the OpenAI LLM provider.

Configure OpenAI as an LLM provider in agentgateway.
## Configuration
Review the following example configuration.

```yaml
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
llm:
  models:
  - name: "*"
    provider: openAI
    params:
      apiKey: "$OPENAI_API_KEY"
```

| Setting | Description |
|---|---|
| `name` | The model name to match in incoming requests. When a client sends `"model": "<name>"`, the request is routed to this provider. Use `*` to match any model name. |
| `provider` | The LLM provider, set to `openAI` for OpenAI models. |
| `params.model` | The specific OpenAI model to use. If set, this model is used for all requests. If not set, the request must include the model to use. |
| `params.apiKey` | The OpenAI API key for authentication. You can reference environment variables by using the `$VAR_NAME` syntax. |
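
To pin every request to one specific model regardless of what the client asks for, set `params.model`. A minimal sketch, assuming the same wildcard match as above; `gpt-4o-mini` is only a placeholder model name:

```yaml
llm:
  models:
  - name: "*"
    provider: openAI
    params:
      model: gpt-4o-mini        # all matched requests are sent to this model
      apiKey: "$OPENAI_API_KEY"
```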
You can also express this configuration in the binds/listeners/routes configuration format. See the Routing-based configuration guide for more information.
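
To sanity-check the route, you can send an OpenAI-style chat completion through the gateway. This is a sketch, not part of the reference configuration: the port (4000 here, matching the Codex example below) depends on your bind settings, and `gpt-4o-mini` is a placeholder model name. The gateway uses the configured `apiKey` for the upstream call, so the client does not send one here.

```sh
# The "model" field is matched against the "name" entries in the config;
# "*" matches any model name.
curl http://localhost:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```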
## Connect to Codex

Use agentgateway as a proxy to your OpenAI provider from the Codex client.
Create an agentgateway configuration without specifying a model, so the Codex client’s model choice is used.
```sh
cat > config.yaml << 'EOF'
# yaml-language-server: $schema=https://agentgateway.dev/schema/config
llm:
  models:
  - name: openai
    provider: openAI
    params:
      apiKey: "$OPENAI_API_KEY"
EOF
```
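
Start agentgateway with this file. A sketch, assuming the binary is on your PATH and accepts `-f` for the config file path, as the project's examples do; check `agentgateway --help` if your install differs:

```sh
# The single-quoted heredoc above wrote $OPENAI_API_KEY literally into
# config.yaml, so the variable must be set in the gateway's environment.
export OPENAI_API_KEY="sk-..."
agentgateway -f config.yaml
```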
Point Codex at agentgateway by setting the `OPENAI_BASE_URL` environment variable, which Codex uses to override the default OpenAI endpoint. Use a base URL that includes `/v1` so requests go to `/v1/responses` and OpenAI does not return a 404.

```sh
export OPENAI_BASE_URL="http://localhost:4000/v1"
codex
```
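
If Codex fails to connect, you can test the same endpoint directly. A sketch against the OpenAI Responses API shape that Codex uses; the model name is a placeholder:

```sh
# Hits /v1/responses through the gateway, the same path Codex calls.
curl "$OPENAI_BASE_URL/responses" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o-mini", "input": "Say hello"}'
```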