Setting Up the Environment
Overview
Before building any LLM-powered application, you need to set up your development environment with all the necessary tools. This lesson walks you through installing Python packages, setting up LangChain, generating OpenAI API keys, and configuring your IDE (VS Code) for a smooth development experience.
Learning Objectives
- Install LangChain and its dependencies
- Generate and configure OpenAI API keys
- Set up a clean Python development environment
- Use VS Code effectively with LangChain projects
Step 1: Create a Python Environment
It's best practice to work in a virtual environment to avoid dependency conflicts.
a. Create a project folder
mkdir langchain-project
cd langchain-project
b. Create and activate a virtual environment
# Windows
python -m venv venv
venv\Scripts\activate
# macOS/Linux
python3 -m venv venv
source venv/bin/activate
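To confirm that the virtual environment is active, you can ask Python where it is running from; after activation, the path should point inside the venv folder you just created:
python -c "import sys; print(sys.executable)"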
Step 2: Install Required Packages
Run the following command to install the core dependencies for LangChain development:
pip install langchain openai python-dotenv
Optional (for advanced projects):
pip install faiss-cpu chromadb tiktoken streamlit
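If you want to verify that the packages landed in the active environment, one quick check (assuming Python 3.8 or newer) is to print each package's installed version:
from importlib.metadata import version
# A PackageNotFoundError here means the package is missing from this environment;
# re-run the pip install command above with the virtual environment activated.
for package in ("langchain", "openai", "python-dotenv"):
    print(package, version(package))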
Step 3: Get OpenAI API Key
- Visit platform.openai.com and sign in or create an account
- Click “Create new secret key” on the API keys page
- Copy and store your key securely (it is shown only once)
Step 4: Store API Key in a .env File
Create a .env file in your project root and add:
OPENAI_API_KEY=your-openai-api-key-here
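If the project is under version control, add .env to your .gitignore so the key is never committed.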
Install python-dotenv if not already installed:
pip install python-dotenv
Then load the key in your Python code:
import os
from dotenv import load_dotenv
load_dotenv()  # reads variables from the .env file into the process environment
api_key = os.getenv("OPENAI_API_KEY")  # None if the key was not found
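As a quick sanity check, you can fail fast when the key did not load (for example, if the .env file is misnamed or sits in a different folder):
if not api_key:
    raise RuntimeError("OPENAI_API_KEY not found; check your .env file")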
Step 5: Set Up VS Code for LangChain
- Open the project folder in VS Code:
  code .
- Install recommended extensions:
  Python (by Microsoft)
  Pylance (for autocomplete)
  DotEnv (for .env file support)
  Jupyter (for running notebooks)
- Configure the Python interpreter:
  Press Ctrl + Shift + P
  Select: Python: Select Interpreter
  Choose the interpreter from your virtual environment
- Create a file (app.py or notebook.ipynb) and begin coding
Bonus Tips
- Use Jupyter notebooks for step-by-step LangChain experiments.
- Use requirements.txt to freeze your environment:
  pip freeze > requirements.txt
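To recreate the same environment later (for example, on another machine), install from that file inside a fresh virtual environment:
pip install -r requirements.txt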
Mini Task
Create a Python file called test_llm.py and use this code to test your setup:
from dotenv import load_dotenv
from langchain.chat_models import ChatOpenAI
load_dotenv()  # makes OPENAI_API_KEY available to the client
llm = ChatOpenAI(temperature=0)  # temperature=0 keeps responses as deterministic as possible
response = llm.predict("What is LangChain?")
print(response)
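Run it with python test_llm.py from the activated environment. If the setup is correct, a short description of LangChain is printed; an authentication error usually means the key in your .env file is missing or invalid.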