This post will show you how to create a Google Gemini API key and integrate it with LiteLLM. You get quite a few free requests per day.
This is a continuation of this post:
You can go to https://aistudio.google.com/ to get an API key. Google offers free requests for nearly all of their models:
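Once you have a key, you can expose it (and the model name) to LiteLLM via environment variables. This is a minimal sketch; the key is a placeholder, and the `gemini/` model prefix is LiteLLM's convention for routing to Google AI Studio:

```shell
# Placeholder values -- substitute your own AI Studio key.
export GOOGLE_GEMINI_API_KEY="YOUR_AI_STUDIO_KEY"
# LiteLLM's gemini/ prefix routes the request to Google AI Studio.
export GOOGLE_GEMINI_MODEL="gemini/gemini-1.5-flash"
```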

Google AI Studio Bug
For some reason my project no longer showed up after a few months. I had to import it by clicking this button and selecting it from the list. I was then able to see my API key again, along with Usage and Billing.

LiteLLM Bug
There seems to be a bug in LiteLLM when you pass through the api_base environment variable when setting up your Gemini model.
My YAML config looked like this:
model_list:
  - model_name: os.environ/GOOGLE_GEMINI_MODEL
    litellm_params:
      model: os.environ/GOOGLE_GEMINI_MODEL
      api_base: os.environ/GOOGLE_GEMINI_API_BASE
      api_key: os.environ/GOOGLE_GEMINI_API_KEY
My API base URL is: https://generativelanguage.googleapis.com/
But I would constantly get errors. I simply removed the api_base line above, because LiteLLM is smart enough to pick up the API base URL from its own config (using the provided model value, os.environ/GOOGLE_GEMINI_MODEL) if you don't supply one.
Or perhaps, because it is a publicly hosted model, you should not provide an API base URL at all; that is how DeepSeek and OpenAI work: you don't provide an API base URL.
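Under that assumption, a working Gemini entry would simply drop the api_base line. A sketch (the environment variable names mirror the config above):

```yaml
model_list:
  - model_name: os.environ/GOOGLE_GEMINI_MODEL
    litellm_params:
      model: os.environ/GOOGLE_GEMINI_MODEL
      api_key: os.environ/GOOGLE_GEMINI_API_KEY
```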
Here is my config for DeepSeek and OpenAI; neither of them has an API base URL.
model_list:
  - model_name: openai-gpt-4o
    litellm_params:
      model: os.environ/OPENAI_MODEL
      api_key: os.environ/OPENAI_API_KEY
  - model_name: deepseek-reasoner
    litellm_params:
      model: os.environ/DEEPSEEK_MODEL_REASONER
      api_key: os.environ/DEEPSEEK_API_KEY
  - model_name: deepseek-chat
    litellm_params:
      model: os.environ/DEEPSEEK_MODEL_CHAT
      api_key: os.environ/DEEPSEEK_API_KEY
  - model_name: deepseek-coder
    litellm_params:
      model: os.environ/DEEPSEEK_MODEL_CODER
      api_key: os.environ/DEEPSEEK_API_KEY
  - model_name: cloudflare/llama-3-8b-instruct
So perhaps something to remember: if the model is served by a hosted public provider, don't provide a base URL.
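That rule of thumb is easy to enforce with a quick sanity check before starting the proxy. A sketch, assuming PyYAML is installed; the function name and the embedded sample config are my own, not part of LiteLLM:

```python
# Sanity-check a LiteLLM config: flag any model entries that explicitly
# set api_base, which (per the note above) hosted providers don't need.
import yaml

SAMPLE_CONFIG = """
model_list:
  - model_name: os.environ/GOOGLE_GEMINI_MODEL
    litellm_params:
      model: os.environ/GOOGLE_GEMINI_MODEL
      api_key: os.environ/GOOGLE_GEMINI_API_KEY
  - model_name: openai-gpt-4o
    litellm_params:
      model: os.environ/OPENAI_MODEL
      api_key: os.environ/OPENAI_API_KEY
"""

def models_with_api_base(config_text: str) -> list[str]:
    """Return the model names that explicitly set api_base."""
    config = yaml.safe_load(config_text)
    return [
        entry["model_name"]
        for entry in config.get("model_list", [])
        if "api_base" in entry.get("litellm_params", {})
    ]

print(models_with_api_base(SAMPLE_CONFIG))  # [] -> no entry overrides api_base
```

If the list comes back non-empty, double-check whether those providers actually need a custom base URL.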