Introduction
In this blog, we’ll walk through the process of building a chatbot application using Streamlit and the Google Generative AI model, Gemini-Pro. We’ll cover everything from setting up the environment to integrating the Gemini API and running the chatbot.
Project Setup
Environment Setup
First, we need to set up the project environment. We’ll use the environment.yml
file to create a Conda environment with all necessary dependencies.
Installing Dependencies
To create the Conda environment, run the following command:

```shell
conda env create -f environment.yml
```

This command creates a new environment named chatbot-env with the specified dependencies. Alternatively, you can use requirements.txt to install the dependencies via pip:

```shell
pip install -r requirements.txt
```
Here are the key dependencies required for the project:

- python-dotenv: for managing environment variables.
- google-generativeai: the API client for interacting with the Gemini API.
- streamlit: the framework for building the web application.
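Based on that dependency list, an environment.yml for the project might look like the sketch below. The environment name comes from the post; the Python version pin and channel choice are illustrative assumptions, not requirements:

```yaml
name: chatbot-env
channels:
  - defaults
dependencies:
  - python=3.10        # illustrative pin; any recent Python 3 should work
  - pip
  - pip:
      - python-dotenv
      - google-generativeai
      - streamlit
```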
Creating the Chatbot
Project Structure
The project consists of the following files:

- app.py: the main Streamlit application.
- functions.py: helper functions for interacting with the Gemini API.
- environment.yml and requirements.txt: used for setting up the environment.
- README.md: an overview and instructions for the project.
Main Application
Let’s start by looking at the main application file, app.py. This file sets up the Streamlit interface and handles user interactions.
```python
import os

import streamlit as st
from dotenv import load_dotenv
import google.generativeai as genai

from functions import map_role, fetch_gemini_response

# Load environment variables from the .env file
load_dotenv()

# Configure Streamlit page settings
st.set_page_config(
    page_title="Chat with Gemini-Pro!",
    page_icon=":robot_face:",  # Favicon emoji
    layout="wide",  # Page layout option
)

API_KEY = os.getenv("GOOGLE_API_KEY")

# Set up the Google Gemini-Pro AI model
genai.configure(api_key=API_KEY)
model = genai.GenerativeModel("gemini-pro")

# Initialize the chat session in Streamlit if not already present
if "chat_session" not in st.session_state:
    st.session_state.chat_session = model.start_chat(history=[])

# Display the chatbot's title on the page
st.title("🤖 Chat with Gemini-Pro")

# Display the chat history. Entries are Content objects with
# .role and .parts attributes, not dictionaries.
for msg in st.session_state.chat_session.history:
    with st.chat_message(map_role(msg.role)):
        st.markdown(msg.parts[0].text)

# Input field for the user's message
user_input = st.chat_input("Ask Gemini-Pro...")
if user_input:
    # Add the user's message to the chat and display it
    st.chat_message("user").markdown(user_input)

    # Send the user's message to Gemini and get the response.
    # The chat session records both turns in its history, so
    # no manual appending is needed here.
    gemini_response = fetch_gemini_response(user_input)

    # Display Gemini's response
    with st.chat_message("assistant"):
        st.markdown(gemini_response)
```
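Streamlit reruns the entire script on every interaction, which is why the chat object is stashed in st.session_state behind a membership check. A plain dict stand-in (hypothetical, for illustration only) shows why the guard keeps history alive across reruns:

```python
# Hypothetical stand-in for st.session_state, to show the init-guard pattern.
session_state = {}

def run_script():
    # On every simulated rerun, create the chat session only if it doesn't
    # exist yet, so conversation history survives across reruns.
    if "chat_session" not in session_state:
        session_state["chat_session"] = {"history": []}
    return session_state["chat_session"]

first = run_script()
first["history"].append("hello")

second = run_script()      # simulated rerun after a user interaction
print(second["history"])   # ['hello'] -- the earlier message is still there
```

Without the `not in` guard, every rerun would call start_chat again and wipe the conversation.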
This code initializes the Streamlit app, renders the existing chat history, and wires a chat input field to the Gemini model.
Helper Functions
The helper functions are defined in functions.py. These functions interact with the Gemini API to get responses based on user input.
```python
import streamlit as st

# Translate roles between Gemini and Streamlit terminology
def map_role(role):
    # Gemini labels its own turns "model"; Streamlit's chat_message
    # expects "assistant"
    if role == "model":
        return "assistant"
    return role

# Send the user's query through the ongoing chat session and return the reply
def fetch_gemini_response(user_query):
    # send_message() keeps the conversation context and appends both the
    # query and the reply to the session's history automatically
    response = st.session_state.chat_session.send_message(user_query)
    return response.text
```
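You can exercise the role mapping and the history-rendering logic without a live API call. The FakeContent class below is a hypothetical stand-in that mimics only the .role and .parts attributes the rendering loop in app.py reads, and map_role is copied inline so the sketch is self-contained:

```python
# Hypothetical stand-in for the Content objects google.generativeai
# keeps in a chat session's history.
class FakeContent:
    def __init__(self, role, text):
        self.role = role
        self.parts = [type("Part", (), {"text": text})()]

# Copied from functions.py
def map_role(role):
    return "assistant" if role == "model" else role

history = [FakeContent("user", "Hi there"), FakeContent("model", "Hello!")]

# Mirror the rendering loop from app.py, collecting pairs instead of drawing
rendered = [(map_role(m.role), m.parts[0].text) for m in history]
print(rendered)  # [('user', 'Hi there'), ('assistant', 'Hello!')]
```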
Integrating Gemini API
To integrate the Gemini API, ensure you have set your API key in a .env file at the project root, using the same variable name that app.py reads:

```
GOOGLE_API_KEY=your_api_key_here
```
You can get an API key from Google AI Studio: https://aistudio.google.com/app/apikey
Running the Application
To run the application, use the following command:

```shell
streamlit run app.py
```
This will start the Streamlit server and open the chatbot interface in your browser. You can now interact with the chatbot and see responses generated by the Gemini API.
Conclusion
In this blog, we covered the steps to build a chatbot using Streamlit and the Gemini API. We set up the environment, created the main application, defined helper functions, and integrated the Gemini API. By following these steps, you can create your own chatbot application with ease.
Feel free to explore and enhance the chatbot with additional features and improvements. Happy coding!