Using Azure OpenAI with Open WebUI
This post covers how to use Azure OpenAI with Open WebUI and LiteLLM (without Ollama) to connect to an OpenAI model hosted in Azure AI Studio (now known as Azure AI Foundry).
Open WebUI Docker Compose
```yaml
version: '3.8'

services:
  openwebui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: lc-open-webui
    restart: unless-stopped
    ports:
      - "8090:8080"
    env_file: stack.env
    volumes:
      - open-webui:/app/backend/data
    networks:
      - lc_network

  litellm-proxy:
    image: ghcr.io/berriai/litellm:main-latest
    container_name: lc-litellm-proxy
    restart: unless-stopped
    ports:
      - "4000:4000"
    env_file: stack.env
    depends_on:
      - litellmproxy_db
    environment:
      DATABASE_URL: postgresql://${POSTGRES_USER}:${POSTGRES_PASSWORD}@litellmproxy_db:5432/${POSTGRES_USER}
    command: ["--config", "/app/config.yaml", "--detailed_debug"]
    volumes:
      - /opt/litellmproxy/litellm_config.yaml:/app/config.yaml
    networks:
      - lc_network

  litellmproxy_db:
    image: postgres:17.2-alpine3.21
    container_name: lc-postgresql
    restart: unless-stopped
    env_file: stack.env
    shm_size: 96mb
    volumes:
      - postgresdatalitellmproxy:/var/lib/postgresql/data
    networks:
      - lc_network

volumes:
  postgresdatalitellmproxy:
  open-webui:

networks:
  lc_network:
    driver: bridge
```
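If you are not deploying through Portainer, a quick way to bring the stack up from a shell looks like this (a minimal sketch, assuming the compose file and stack.env sit in the current directory):

```bash
# Bring the stack up in the background; --env-file makes the variables in
# stack.env available for ${...} substitution in the compose file itself.
docker compose --env-file stack.env up -d

# Tail the LiteLLM logs to confirm it started cleanly.
docker logs -f lc-litellm-proxy
```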
You will also need the following environment variables; since we are running Portainer, they go into the stack's environment (the compose file reads them via stack.env):

```
AZURE_API_BASE=https://xxx-aiservices.openai.azure.com
AZURE_API_KEY=<API KEY. SEE 3 BELOW>
AZURE_API_VERSION=<FROM URL. SEE 2 BELOW>
AZURE_MODEL=<SEE 1 BELOW>
POSTGRES_PASSWORD=<PASSWORD OF YOUR CHOOSING>
POSTGRES_USER=litellm
LITELLM_MASTER_KEY=<ANY RANDOM VALUE STARTING WITH sk->
LITELLM_SALT_KEY=never_change_this_key_123
WEBUI_SECRET_KEY=<PASSWORD OF YOUR CHOOSING>
```
AZURE_API_BASE
Your Azure OpenAI deployment URL.
AZURE_MODEL
LiteLLM requires the AZURE_MODEL variable to be in the format “azure/DEPLOYMENT_NAME” when using Azure OpenAI deployments. In our case the value is “azure/gpt-4o”, as can be seen at number 1 in the image.
AZURE_API_KEY
This is the API key, shown at number 3 in the image below.
AZURE_API_VERSION
If you look at number 2 in the image below, you will see that the URL contains an api-version query parameter. Use that value.
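To sanity-check these three values before wiring up LiteLLM, you can call the Azure deployment directly (a sketch, assuming you have exported AZURE_API_BASE, AZURE_API_KEY, and AZURE_API_VERSION in your shell; “gpt-4o” is our deployment name):

```bash
# Direct chat completion against the Azure OpenAI deployment.
curl "$AZURE_API_BASE/openai/deployments/gpt-4o/chat/completions?api-version=$AZURE_API_VERSION" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```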
LITELLM_MASTER_KEY
This must start with “sk-” and is the key you will use when logging in as admin and when calling the REST APIs.
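There is nothing special about the value beyond the prefix; you can generate a random one like this (a sketch, assuming openssl is available):

```bash
# Generate a random master key with the required "sk-" prefix.
echo "sk-$(openssl rand -hex 24)"
```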
LiteLLM Config
In the docker compose file you will see this line: /opt/litellmproxy/litellm_config.yaml:/app/config.yaml
This bind-mounts litellm_config.yaml from the Docker host into the container, so LiteLLM expects the file at /opt/litellmproxy/litellm_config.yaml on the host on which you have Docker installed. This is what that file must look like:
```yaml
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: os.environ/AZURE_MODEL
      api_base: os.environ/AZURE_API_BASE
      api_key: os.environ/AZURE_API_KEY
      api_version: os.environ/AZURE_API_VERSION
```
This file defines all your models. In our case the only model is gpt-4o, our Azure OpenAI deployment, and its parameters simply read the environment variables we defined in the docker compose stack.
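Once the stack is up, you can verify that LiteLLM resolves the model and reaches Azure by calling its OpenAI-compatible endpoint (a sketch, assuming LITELLM_MASTER_KEY is exported in your shell and using the host/port from the compose file):

```bash
# Chat completion routed through LiteLLM to the Azure deployment.
curl http://192.168.1.109:4000/v1/chat/completions \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'
```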
Endpoints
Several endpoints will be accessible after deploying this compose file (replace the IP and port with your host):
LiteLLM
| Endpoint | URL |
| --- | --- |
| Swagger Docs | http://192.168.1.109:4000 |
| User Interface | http://192.168.1.109:4000/ui (log in with the LITELLM_MASTER_KEY) |
| Model Cost Map (not local but important) | https://models.litellm.ai/ |
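A quick way to confirm the proxy is serving the model is to query its OpenAI-compatible models endpoint (same host, port, and master key as above):

```bash
# Should list gpt-4o in the "data" array.
curl -s http://192.168.1.109:4000/v1/models \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY"
```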
Model Pricing
In LiteLLM, under Models, you will see the gpt-4o model with pricing. This is LiteLLM's default pricing and can be a bit off from the actual Azure pricing.
Here is the Azure pricing for our model: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/openai-service/
The pricing that applies corresponds with the “Model Version” shown in the Azure AI Studio screenshot above.
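You can also inspect the pricing LiteLLM is actually using through its admin API (a sketch; /model/info is LiteLLM's endpoint for per-model metadata, including the token costs it uses for spend tracking):

```bash
# Returns model metadata, including input/output token costs.
curl -s http://192.168.1.109:4000/model/info \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY"
```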
Open WebUI

| Endpoint | URL |
| --- | --- |
| Open WebUI User Interface | http://192.168.1.109:8090/ |
Link Open WebUI with LiteLLM
We need to tell Open WebUI how to use the gpt-4o model we have defined in LiteLLM:

1. Click on Admin Panel.
2. Click on Settings, then Connections, then the plus icon to add a new connection.
3. The URL is the LiteLLM service name as defined in the docker compose file, plus its internal port. In our case that is http://litellm-proxy:4000.
4. The Key is the value you defined in the LITELLM_MASTER_KEY environment variable. Include the entire key, including the “sk-” prefix.
5. Click the refresh button, and you will see the green success message up top.
Open a new chat, and you will see the “gpt-4o” model listed in the dropdown. That little icon tells us it is an external model.
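If the connection fails, a useful check is whether the LiteLLM container is reachable on the Docker network at all (a sketch; depending on how the stack was deployed, the network name may be prefixed with the stack name, e.g. mystack_lc_network):

```bash
# Run a throwaway curl container on the stack's network and hit LiteLLM's
# liveliness endpoint via its container name.
docker run --rm --network lc_network curlimages/curl \
  -s http://lc-litellm-proxy:4000/health/liveliness
```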