A Step-by-Step Guide to Building an AI Chatbot with LangChain

LangChain is an open-source framework designed to help developers build powerful applications with large language models (LLMs). One of its most compelling use cases is building AI-powered chatbots. In this guide, we’ll walk through how to create a simple yet functional AI chatbot using LangChain, OpenAI, and Streamlit.

Prerequisites

Before we begin, ensure you have the following:

  • Python 3.8+
  • OpenAI API key (or another LLM provider key)
  • Basic understanding of Python
  • Installed dependencies: langchain, openai, streamlit, python-dotenv

Install dependencies:

pip install langchain openai streamlit python-dotenv

Step 1: Project Setup

Create a new directory and set up your environment:

mkdir langchain-chatbot && cd langchain-chatbot
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

Create a .env file to store your API key:

OPENAI_API_KEY=your_openai_api_key_here
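python-dotenv handles the loading for you, but conceptually a .env file is just KEY=VALUE lines copied into the process environment. A minimal pure-Python sketch of roughly what load_dotenv() does (simplified; the real library also handles quoting, comments, and interpolation):

```python
import os

def load_env_file(path: str) -> None:
    """Naive .env loader: read KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            os.environ[key.strip()] = value.strip()

# Demo: write a throwaway .env file and load it
with open("demo.env", "w") as f:
    f.write("OPENAI_API_KEY=sk-demo-key\n")

load_env_file("demo.env")
print(os.environ["OPENAI_API_KEY"])  # sk-demo-key
os.remove("demo.env")
```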

Step 2: Initialize LangChain and OpenAI LLM

Create a file called chatbot.py:

import os
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI  # LangChain >= 0.1: from langchain_openai import ChatOpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory

load_dotenv()  # pulls OPENAI_API_KEY from .env into the environment

llm = ChatOpenAI(temperature=0.7)        # reads OPENAI_API_KEY automatically
memory = ConversationBufferMemory()      # keeps the full conversation transcript
conversation = ConversationChain(llm=llm, memory=memory)
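ConversationBufferMemory works by replaying the entire transcript into every prompt, which is how the chain "remembers" earlier turns. This is not LangChain's actual code, just a minimal pure-Python sketch of the idea:

```python
class BufferMemory:
    """Toy stand-in for ConversationBufferMemory: store every turn verbatim."""

    def __init__(self):
        self.turns = []  # list of (human, ai) pairs

    def save_context(self, human: str, ai: str) -> None:
        self.turns.append((human, ai))

    def buffer(self) -> str:
        # Render the full history the way it gets spliced into each prompt
        return "\n".join(f"Human: {h}\nAI: {a}" for h, a in self.turns)

toy_memory = BufferMemory()
toy_memory.save_context("Hi, I'm Sam.", "Hello Sam! How can I help?")
toy_memory.save_context("What's my name?", "Your name is Sam.")
print(toy_memory.buffer())
```

Because the whole buffer is re-sent on every call, long conversations grow the prompt (and token cost) linearly; LangChain offers windowed and summarizing memory variants for that reason.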

Step 3: Add Streamlit Interface

Streamlit makes it easy to create a web-based interface. Add this to chatbot.py:

import streamlit as st

st.title("LangChain Chatbot")

if "history" not in st.session_state:
    st.session_state.history = []

# Keep the chain in session state so its memory survives Streamlit's reruns;
# otherwise a fresh ConversationChain (with empty memory) is built on every interaction
if "conversation" not in st.session_state:
    st.session_state.conversation = conversation

user_input = st.text_input("You:", key="input")

if user_input:
    response = st.session_state.conversation.predict(input=user_input)
    st.session_state.history.append((user_input, response))

for user_msg, bot_msg in reversed(st.session_state.history):
    st.markdown(f"**You:** {user_msg}")
    st.markdown(f"**Bot:** {bot_msg}")

Run the app:

streamlit run chatbot.py

Step 4: Customize Behavior (Optional)

Want to add a system prompt or change the model behavior? Modify the ChatOpenAI initialization:

llm = ChatOpenAI(temperature=0.5, model_name="gpt-4")

You can also shape the model’s behavior with a custom prompt using LangChain’s PromptTemplate:

from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

prompt = PromptTemplate(
    input_variables=["input"],
    template="You are a helpful assistant. Answer the following: {input}",
)
chain = LLMChain(llm=llm, prompt=prompt)
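Under the hood, PromptTemplate is essentially named substitution, much like Python’s built-in str.format. A rough pure-Python equivalent, for illustration only (this is not LangChain’s implementation):

```python
template = "You are a helpful assistant. Answer the following: {input}"

def format_prompt(template: str, **variables: str) -> str:
    # PromptTemplate additionally validates that input_variables match;
    # the actual splicing is just named substitution
    return template.format(**variables)

prompt_text = format_prompt(template, input="What is LangChain?")
print(prompt_text)
```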

Step 5: Testing & Iteration

  • Try changing the temperature to adjust the creativity of responses
  • Add session-based memory for context
  • Experiment with storing conversation history in a database
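For the last bullet, Python’s built-in sqlite3 module is enough to persist turns between sessions. A minimal sketch (the table name and schema here are my own choices, not from the tutorial):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path, e.g. "chat.db", to persist across runs
conn.execute(
    "CREATE TABLE IF NOT EXISTS history (id INTEGER PRIMARY KEY, user_msg TEXT, bot_msg TEXT)"
)

def save_turn(user_msg: str, bot_msg: str) -> None:
    conn.execute("INSERT INTO history (user_msg, bot_msg) VALUES (?, ?)", (user_msg, bot_msg))
    conn.commit()

def load_history():
    return conn.execute("SELECT user_msg, bot_msg FROM history ORDER BY id").fetchall()

save_turn("Hello", "Hi there!")
save_turn("What can you do?", "I can chat with you.")
print(load_history())
```

You could call save_turn right after conversation.predict in the Streamlit handler, and seed st.session_state.history from load_history on startup.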

What’s Next?

You’ve just created a fully functional chatbot using LangChain and OpenAI! To extend it:

  • Integrate with tools like Pinecone for memory/vector search
  • Add support for voice input using Whisper
  • Deploy your app using platforms like Streamlit Cloud or Heroku

Conclusion

LangChain simplifies the process of building intelligent applications by combining LLMs with memory, prompts, and context management. With just a few lines of code, you’ve built a chatbot capable of handling dynamic, conversational tasks. Keep experimenting and enhancing its capabilities!
