
Google Gemini with Gradio
- By Bruce Nielson
- ML & AI Specialist
Gradio is an amazing little library that allows you to quickly build a UI for AI chatbots or other AI-related projects. Better yet, you can then either run that app locally or share it on the web with Gradio hosting it for you. It can also be used with Hugging Face Spaces, which will host your Gradio app for free. You’ll be able to show it off to clients! (You can find instructions for Hugging Face Spaces here and an overview here. But I’m planning to do a future blog post on this useful little tool.)
The Quick Start Guide
Getting Started with Gradio is pretty easy. This quick start guide is the right place to start. Let’s start with installing Gradio:
pip install --upgrade gradio
You are welcome to follow the short tutorial in the quick start guide. I actually found that useful. But let’s do something a bit more difficult for this post. We’re going to write a Gradio chatbot that uses Google Gemini’s API. Here is a well-done tutorial on how to integrate Gemini and Gradio.
It’s going to be a Dungeon Master chatbot that plays role-playing games with you. It will look like this when it is working:
Getting Started with Google Gemini
We’ve had blog posts in the past about how to get started with Google Gemini. (Part 1 and Part 2. Plus how to integrate with Haystack.) However, things change fast! So, I’m going to quickly go over the current way to do it, but you may want to look at the past blog posts for a more detailed look.
First, navigate to the Gemini Quick Start Guide for Python. Then install Gemini:
pip install -q -U google-genai
You will need an API key which you can find in Google AI Studio, or you can click here to get instructions.
Creating a Gradio Chatbot
There are some basic instructions for how to create a Gradio chatbot found here. But let me just go over the basics of how to use Gradio with Gemini.
First, we’ll need a way to load the Gemini secret. I like this approach which I’ve used in past posts:
def get_secret(secret_file: str) -> str:
    try:
        with open(secret_file, 'r') as file:
            secret_text: str = file.read().strip()
    except FileNotFoundError:
        print(f"The file '{secret_file}' does not exist.")
        secret_text = ""
    except Exception as e:
        print(f"An error occurred: {e}")
        raise
    return secret_text
Just use that function to read a secret out of a text file.
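If you’d rather not keep the key in a file at all, you can read it from an environment variable instead. This is my own sketch, not part of the original code, and the `GEMINI_API_KEY` variable name is my own choice:

```python
import os


def get_secret_from_env(var_name: str = "GEMINI_API_KEY") -> str:
    # Read the API key from an environment variable instead of a file.
    secret_text = os.environ.get(var_name, "").strip()
    if not secret_text:
        print(f"The environment variable '{var_name}' is not set.")
    return secret_text
```

Either way, the point is the same: the secret stays out of your source code and your repo.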
Next, we’ll need some imports:
import gradio as gr
from google import genai
from google.genai import types
Now we need to create the actual Gemini chatbot that we’re going to use. First, I get the Gemini API key (secret) out of my text file. (Never check a secret into your repo!) Then I use it to log in to Gemini and create a client.
google_secret: str = get_secret(r'D:\Documents\Secrets\gemini_secret.txt')
client = genai.Client(api_key=google_secret)
How to Make a Dungeon Master
The next line is interesting.
config = types.GenerateContentConfig(system_instruction="You are a Dungeon Master that will play a game with me.")
I create a ‘config’ variable that sets up a ‘system instruction’: instructions that get sent along with every chat exchange so that the chatbot doesn’t wander away from its intended purpose.
In this case, I instruct it to be a Dungeon Master. Since the system instruction accompanies every exchange, it is best to keep it short.
Finally, I create the actual chatbot using gemini-1.5-flash.
chat = client.chats.create(model="gemini-1.5-flash", config=config)
The Gradio Response Function
Now we’re finally ready to create the ‘response’ function that Gradio always uses. By default, it has two parameters: message and history. ‘message’ is the current message the user typed and ‘history’ is the chat history: a list of pairs, each holding a user message and the chatbot’s response.
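To make the shape of that history concrete, here is a throwaway helper of my own (purely illustrative; our chatbot ignores the history entirely) that flattens the pair-style history into a readable transcript:

```python
def history_to_transcript(history):
    # Flatten Gradio-style [user, bot] pairs into a readable transcript.
    lines = []
    for user_msg, bot_msg in history:
        lines.append(f"Player: {user_msg}")
        lines.append(f"DM: {bot_msg}")
    return "\n".join(lines)
```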
def response(message, history):
    global chat
    chat_response = chat.send_message(message)
    # Stream the answer by yielding ever-longer prefixes,
    # one character at a time
    for i in range(len(chat_response.text)):
        yield chat_response.text[: i + 1]
Notice that we don’t even use the history. Why? Because the Gemini ‘send_message’ method automatically keeps a history on the chat object. The loop that follows streams the response.
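The streaming pattern itself has nothing to do with Gemini: yielding progressively longer prefixes is just how a Gradio generator function updates a message in place. Here is the same idea as a standalone sketch you can run without an API key:

```python
def stream_text(text: str):
    # Yield progressively longer prefixes of the text, as the
    # response function above does, so the UI can repaint the
    # message as it "types".
    for i in range(len(text)):
        yield text[: i + 1]
```

Each yielded value replaces the previous one in the chat window, which is why the prefixes grow rather than each yield emitting a single new character.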
Finally, we need a way to launch the Gradio interface itself:
def main():
    demo = gr.ChatInterface(response,
                            title='RPG Chat',
                            textbox=gr.Textbox(placeholder="Chat to the Dungeon Master"),
                            )
    demo.launch(debug=True)


if __name__ == "__main__":
    main()
gr.ChatInterface creates the demo UI which we then ‘launch’. You can find the final copy of the code here. I added it to the Book Search Archive even though it isn’t integrated yet. I’ll do a post on that once I create a Gradio interface for the Book Search Archive.
Now run this program and you’ll see console output like this:
Open the localhost link it prints and you’ll see the interface:
Note that the output also mentions that if you call launch with share set to True:
demo.launch(share=True)
Gradio will instead host the demo for you and make it available publicly. Unfortunately, I couldn’t get that to work properly with my Gemini code because Microsoft Defender was not a fan of it. I decided it was beyond the scope of this blog post to work out those issues. But the basic idea is that you can run this Gradio chatbot hosted by Gradio so it’s available on the web.
Conclusions
We are just scratching the surface of how to utilize Gradio as a chatbot / AI interface. And we haven’t even gotten to how to use it with Hugging Face Spaces. But this is an exciting open-source library that lets you quickly put together an amazing-looking AI demo. In this post, we covered how to integrate Gradio’s UI with Google Gemini’s API.
Be sure to comment down below what you think of Gradio and how you could use it in your future projects.