Azure AI Foundry
Route LLM requests to Azure AI Foundry (Azure OpenAI) through agentgateway on Kubernetes, using custom path routing and URL rewriting.
What you’ll build
In this tutorial, you will:
- Set up a local Kubernetes cluster with agentgateway
- Configure an Azure OpenAI backend with Azure AI Foundry credentials
- Set up path-based routing with URL rewriting
- Send chat completion requests through agentgateway to Azure OpenAI
Before you begin
Make sure you have the following tools installed: kind, kubectl, helm, curl, and jq.
You also need:
- An Azure account with access to Azure AI Foundry
- An Azure OpenAI deployment with an API key
- Your Azure OpenAI endpoint URL (e.g., `your-resource.services.ai.azure.com`)
- Your deployment name (e.g., `gpt-4o-mini`)
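Before starting, you can sanity-check that the CLIs used throughout this tutorial are on your PATH:

```shell
# Report any missing CLI tools used by this tutorial
for tool in kind kubectl helm curl jq; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```

If the loop prints nothing, all tools were found.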
For detailed tool installation instructions, see the LLM Gateway tutorial.
Step 1: Create a kind cluster
```shell
kind create cluster --name agentgateway
```

Step 2: Install agentgateway
```shell
# Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.4.0/standard-install.yaml

# agentgateway CRDs
helm upgrade -i --create-namespace \
  --namespace agentgateway-system \
  --version v2.3.0-main agentgateway-crds oci://ghcr.io/kgateway-dev/charts/agentgateway-crds

# Control plane
helm upgrade -i -n agentgateway-system agentgateway oci://ghcr.io/kgateway-dev/charts/agentgateway \
  --version v2.3.0-main
```

Step 3: Create a Gateway
```shell
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: agentgateway-proxy
  namespace: agentgateway-system
spec:
  gatewayClassName: agentgateway
  listeners:
  - protocol: HTTP
    port: 80
    name: http
    allowedRoutes:
      namespaces:
        from: All
EOF
```

Wait for the proxy:

```shell
kubectl get deployment agentgateway-proxy -n agentgateway-system
```

Step 4: Set your Azure credentials
Export your Azure AI Foundry API key:
```shell
export AZURE_FOUNDRY_API_KEY=<insert your Azure API key>
```

Step 5: Create the Kubernetes secret
Store your Azure credentials in a Kubernetes secret:
```shell
kubectl apply -f- <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: azureopenai-secret
  namespace: agentgateway-system
type: Opaque
stringData:
  Authorization: $AZURE_FOUNDRY_API_KEY
EOF
```

Step 6: Create the Azure OpenAI backend
Create an AgentgatewayBackend resource configured for Azure OpenAI. Replace `your-resource.services.ai.azure.com` with your actual Azure AI Foundry endpoint, and `gpt-4o-mini` with your deployment name.
```shell
kubectl apply -f- <<EOF
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: azureopenai
  namespace: agentgateway-system
spec:
  ai:
    provider:
      azureopenai:
        endpoint: your-resource.services.ai.azure.com
        deploymentName: gpt-4o-mini
        apiVersion: 2025-01-01-preview
  policies:
    auth:
      secretRef:
        name: azureopenai-secret
EOF
```

| Setting | Description |
|---|---|
| `azureopenai.endpoint` | Your Azure AI Foundry resource endpoint |
| `azureopenai.deploymentName` | The name of your Azure OpenAI model deployment |
| `azureopenai.apiVersion` | The Azure OpenAI API version to use |
| `policies.auth.secretRef` | Reference to the secret containing your API key |
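For context, these settings correspond to the pieces of Azure OpenAI's deployment-scoped request URL. agentgateway assembles the upstream request for you; the sketch below only illustrates how the settings combine, assuming the standard deployment-based URL scheme:

```shell
# Illustrative only: how the backend settings map onto an Azure OpenAI URL
ENDPOINT="your-resource.services.ai.azure.com"
DEPLOYMENT="gpt-4o-mini"
API_VERSION="2025-01-01-preview"
printf 'https://%s/openai/deployments/%s/chat/completions?api-version=%s\n' \
  "$ENDPOINT" "$DEPLOYMENT" "$API_VERSION"
```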
Verify the backend was created:
```shell
kubectl get agentgatewaybackend -n agentgateway-system
```

Step 7: Create the HTTPRoute with URL rewriting
Create an HTTPRoute that routes requests on the `/azureopenai` path and rewrites the URL to the chat completions endpoint.
```shell
kubectl apply -f- <<EOF
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: azureopenai
  namespace: agentgateway-system
spec:
  parentRefs:
  - name: agentgateway-proxy
    namespace: agentgateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /azureopenai
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplaceFullPath
          replaceFullPath: /v1/chat/completions
    backendRefs:
    - name: azureopenai
      namespace: agentgateway-system
      group: agentgateway.dev
      kind: AgentgatewayBackend
EOF
```

This route:
- Matches requests to `/azureopenai`
- Rewrites the path to `/v1/chat/completions` before forwarding to Azure
- Routes to the Azure OpenAI backend
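Note that `ReplaceFullPath` sends every matched request to the same upstream path. If you instead want client subpaths preserved, the Gateway API also defines a `ReplacePrefixMatch` modifier, which swaps only the matched prefix (a sketch of that variant; not used in this tutorial):

```yaml
filters:
- type: URLRewrite
  urlRewrite:
    path:
      type: ReplacePrefixMatch
      # e.g. /azureopenai/chat/completions -> /v1/chat/completions
      replacePrefixMatch: /v1
```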
Step 8: Test the API
Set up port-forwarding:
```shell
kubectl port-forward deployment/agentgateway-proxy -n agentgateway-system 8080:80 &
```

Send a request to Azure OpenAI through agentgateway:
```shell
curl "localhost:8080/azureopenai" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant."
      },
      {
        "role": "user",
        "content": "What is Azure AI Foundry in one sentence?"
      }
    ]
  }' | jq
```

Example output:
```json
{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "Azure AI Foundry is Microsoft's unified platform for building, deploying, and managing AI applications and models at scale."
    },
    "index": 0,
    "finish_reason": "stop"
  }]
}
```

Multiple Azure deployments
You can route to different Azure OpenAI deployments based on request paths. Create additional backends and routes for each deployment:
```yaml
# Backend for a different model
apiVersion: agentgateway.dev/v1alpha1
kind: AgentgatewayBackend
metadata:
  name: azureopenai-gpt4
  namespace: agentgateway-system
spec:
  ai:
    provider:
      azureopenai:
        endpoint: your-resource.services.ai.azure.com
        deploymentName: gpt-4
        apiVersion: 2025-01-01-preview
  policies:
    auth:
      secretRef:
        name: azureopenai-secret
```

Then add an HTTPRoute matching `/azureopenai-gpt4` to route to the new backend.
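That second route might look like the following sketch, mirroring the route from Step 7 with only the path and backend name changed:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: azureopenai-gpt4
  namespace: agentgateway-system
spec:
  parentRefs:
  - name: agentgateway-proxy
    namespace: agentgateway-system
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /azureopenai-gpt4
    filters:
    - type: URLRewrite
      urlRewrite:
        path:
          type: ReplaceFullPath
          replaceFullPath: /v1/chat/completions
    backendRefs:
    - name: azureopenai-gpt4
      namespace: agentgateway-system
      group: agentgateway.dev
      kind: AgentgatewayBackend
```

Requests to `/azureopenai-gpt4` would then be rewritten and forwarded to the `gpt-4` deployment, while `/azureopenai` continues to hit `gpt-4o-mini`.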
Cleanup
```shell
# Stop the port-forward and delete the cluster
kill %1 2>/dev/null
kind delete cluster --name agentgateway
```