Last Saturday in Indore, we conducted an exciting hands-on workshop titled "The Art of LLMs", attended by 30+ enthusiastic learners. In this blog, we'll walk you through the same project we built live, using open-source tools and just three files: backend.py, app.py, and a simple text file as a knowledge base.
Project Structure
Here’s what our LLM app looked like:
📂 llm_app/
├── backend.py
├── app.py
└── codersdaily_courses.txt
We created a basic QA system where users could ask questions, and the model would respond using content from a custom text file.
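Before diving in, here is the idea behind the retrieval step in miniature: both your documents and the user's question are turned into vectors, and the closest document is handed to the model. A toy sketch with made-up 3-dimensional vectors (real embeddings come from HuggingFaceEmbeddings and have hundreds of dimensions):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of the norms
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Made-up embeddings for two snippets of our knowledge base
docs = {
    "courses":  [0.9, 0.1, 0.0],
    "duration": [0.1, 0.9, 0.1],
}

# Made-up embedding for a query about course offerings
query_vec = [0.8, 0.2, 0.1]

# Pick the document whose vector points most in the same direction
best = max(docs, key=lambda name: cosine(docs[name], query_vec))
print(best)  # "courses" is the closest match
```

FAISS does exactly this kind of nearest-neighbour lookup, just much faster and at scale.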
Step 1: Create Your Knowledge Base
Create a file called codersdaily_courses.txt and add your own custom content. For example:
CodersDaily offers AI, ML, and Web Development courses in Indore.
Each course includes live mentorship, real-world projects, and placement support.
The duration of the AI course is 4 months.
We use Python, TensorFlow, and Scikit-learn in our ML curriculum.
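If you'd rather script it, the same file can be written from Python (a small convenience sketch; the filename matches what backend.py will expect):

```python
# Write the knowledge base file that backend.py reads at startup
content = """CodersDaily offers AI, ML, and Web Development courses in Indore.
Each course includes live mentorship, real-world projects, and placement support.
The duration of the AI course is 4 months.
We use Python, TensorFlow, and Scikit-learn in our ML curriculum.
"""

with open("codersdaily_courses.txt", "w") as f:
    f.write(content)
```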
Step 2: backend.py — Load and Prepare the LLM
from langchain.llms import HuggingFacePipeline
from langchain.chains.question_answering import load_qa_chain
from langchain.prompts import PromptTemplate
from langchain.vectorstores import FAISS
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.docstore.document import Document
from transformers import pipeline

def load_documents(file_path):
    with open(file_path, "r") as f:
        content = f.read()
    return [Document(page_content=content)]

def create_chain():
    # Load the LLM pipeline (max_new_tokens caps the answer length
    # without also limiting the prompt, unlike max_length)
    pipe = pipeline("text-generation", model="gpt2", max_new_tokens=100)
    llm = HuggingFacePipeline(pipeline=pipe)

    # Load documents
    docs = load_documents("codersdaily_courses.txt")

    # Embed documents and index them for similarity search
    embeddings = HuggingFaceEmbeddings()
    vectorstore = FAISS.from_documents(docs, embeddings)

    # Set up the QA chain with a custom prompt
    prompt_template = """Use the following information to answer the question.
Info: {context}
Question: {question}
Answer:"""
    prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
    chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt)

    return chain, vectorstore
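To see what the "stuff" chain actually sends to the model, here is a sketch of the prompt assembly: the retrieved documents are joined into {context} and the template is filled in. This is plain string formatting standing in for what LangChain does internally, not its real code:

```python
# The same template backend.py uses
prompt_template = """Use the following information to answer the question.
Info: {context}
Question: {question}
Answer:"""

# Pretend these snippets were returned by vectorstore.similarity_search(query)
retrieved = [
    "CodersDaily offers AI, ML, and Web Development courses in Indore.",
    "The duration of the AI course is 4 months.",
]

# The "stuff" strategy simply stuffs all retrieved text into one prompt
filled = prompt_template.format(
    context="\n".join(retrieved),
    question="How long is the AI course?",
)
print(filled)
```

Because everything is stuffed into one prompt, this strategy only works while the retrieved text fits in the model's context window; that's why GPT-2 is fine for a small .txt file but larger corpora need chunking.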
Step 3: app.py — Build the Streamlit Interface
import streamlit as st
from backend import create_chain

@st.cache_resource
def get_chain():
    # Cache the chain and vector store so the model loads only once
    return create_chain()

st.title("🧠 Ask CodersDaily's AI")

query = st.text_input("Ask a question about our courses:")

if query:
    chain, vectorstore = get_chain()
    docs = vectorstore.similarity_search(query)
    response = chain.run(input_documents=docs, question=query)
    st.write("💬", response)
Step 4: Run Your App
To launch the app, run this in your terminal:
streamlit run app.py
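If you haven't installed the dependencies yet, something like the following should cover them (package names are our best guess for this stack; faiss-cpu backs the vector store and sentence-transformers backs HuggingFaceEmbeddings):

```shell
pip install streamlit langchain transformers sentence-transformers faiss-cpu
streamlit run app.py
```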
Ask questions like:
- "What courses does CodersDaily offer?"
- "What tech stacks are used?"
- "Is there placement support?"
And your custom LLM will answer using the contents of your .txt file!
What You Just Built
- A basic LLM-powered chatbot that answers from your own data
- Vector search implemented with FAISS
- An interactive app deployed with Streamlit
- HuggingFace's GPT-2 model (easy to upgrade later)
This project is perfect for getting started with context-based LLM applications, and you can scale it up by plugging in better models and bigger datasets.
Final Thoughts
The energy at the workshop was incredible, and every participant left with a working app and real-world understanding of how to build AI systems using open-source tools.
If you missed this workshop — don't worry. You can follow this tutorial and build it at home!
Learn AI and ML with 100% Placement Support
Whether you're a student or working professional — if you’re serious about building a career in AI, CodersDaily is here to guide you.
CodersDaily is the best AI & ML training institute in Indore, offering:
- Hands-on training in LLMs, NLP, and Deep Learning
- Industry-relevant projects
- 100% placement support
- Experienced mentors from top tech companies
Visit codersdaily.in or follow us on Instagram to join our upcoming batches!