
LangChain (Core Stack)

Framework for developing applications powered by language models

Version: 0.1.0
Last Updated: 2024-01-10
Difficulty: Intermediate
Reading Time: 3 min


LangChain is a framework for developing applications powered by language models. It enables applications that are context-aware (connecting a model to sources of context such as prompts, examples, and retrieved data) and that can reason (using the model to decide which actions to take).

Key Features

  • Comprehensive LLM Integration: Support for multiple LLM providers
  • Chain and Agent Abstractions: Build complex workflows with simple components
  • Memory Management: Maintain context across conversations
  • RAG Support: Built-in retrieval-augmented generation capabilities
  • Large Ecosystem: Extensive integrations with external tools and services

Installation

pip install langchain
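
The examples below use OpenAI models, which in the 0.1.x releases additionally require the openai package and an API key exposed through the OPENAI_API_KEY environment variable:

pip install openai
export OPENAI_API_KEY="your-key-here"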

Quick Start

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Initialize LLM
llm = OpenAI(temperature=0.7)

# Create a prompt template
prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a short poem about {topic}."
)

# Create a chain
chain = LLMChain(llm=llm, prompt=prompt)

# Run the chain
result = chain.run("artificial intelligence")
print(result)

Core Concepts

Chains

Chains are the building blocks of LangChain applications:

from langchain.chains import LLMChain, SimpleSequentialChain
from langchain.prompts import PromptTemplate

# `llm` is the model initialized in the Quick Start example

# First chain: Generate a topic
first_prompt = PromptTemplate(
    input_variables=["subject"],
    template="Give me a specific topic about {subject}"
)
first_chain = LLMChain(llm=llm, prompt=first_prompt)

# Second chain: Write about the topic
second_prompt = PromptTemplate(
    input_variables=["topic"],
    template="Write a detailed explanation about {topic}"
)
second_chain = LLMChain(llm=llm, prompt=second_prompt)

# Combine chains
overall_chain = SimpleSequentialChain(
    chains=[first_chain, second_chain],
    verbose=True
)

result = overall_chain.run("machine learning")
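
SimpleSequentialChain pipes the single output of each step into the single input of the next. When steps need multiple named inputs or outputs, LangChain's SequentialChain accepts explicit input_variables and output_variables mappings instead.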

Memory

Maintain conversation context:

from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

memory = ConversationBufferMemory()
conversation = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)

conversation.predict(input="Hi there! My name is Alice.")
conversation.predict(input="What's my name?")  # memory lets the model recall "Alice"
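
ConversationBufferMemory stores the full transcript, so prompts grow with every turn. LangChain also ships windowed and summarizing variants; a brief sketch (the k value is illustrative):

from langchain.memory import ConversationBufferWindowMemory, ConversationSummaryMemory

# Keep only the last k exchanges in the prompt
window_memory = ConversationBufferWindowMemory(k=3)

# Summarize older turns with the LLM instead of storing them verbatim
summary_memory = ConversationSummaryMemory(llm=llm)

conversation = ConversationChain(llm=llm, memory=window_memory)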

Agents

Create autonomous agents that can use tools:

from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType

def search_tool(query):
    # Your search implementation
    return f"Search results for: {query}"

tools = [
    Tool(
        name="Search",
        func=search_tool,
        description="Useful for searching information"
    )
]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

agent.run("What's the weather like today?")
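
ZERO_SHOT_REACT_DESCRIPTION follows the ReAct pattern: at each step the model picks a tool based solely on the tool descriptions, runs it, observes the result, and repeats until it can answer. Clear, specific description strings therefore matter as much as the tool code itself.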

Use Cases

  • Chatbots and Conversational AI: Build sophisticated chat interfaces
  • Document Q&A Systems: Query documents with natural language
  • AI Agents and Workflows: Create autonomous AI systems
  • RAG Applications: Combine retrieval with generation for better answers

Best Practices

  1. Start Simple: Begin with basic chains before building complex workflows
  2. Use Memory Wisely: Choose the right memory type for your use case
  3. Monitor Token Usage: Keep track of API costs and token consumption (see the sketch after this list)
  4. Error Handling: Implement robust error handling for LLM calls
  5. Testing: Test your chains with various inputs and edge cases
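
For OpenAI models, token counts and cost can be measured with LangChain's get_openai_callback context manager; a minimal sketch, reusing the chain from the Quick Start:

from langchain.callbacks import get_openai_callback

# All LLM calls made inside the context manager are metered
with get_openai_callback() as cb:
    chain.run("artificial intelligence")

print(f"Total tokens: {cb.total_tokens}")
print(f"Estimated cost (USD): {cb.total_cost:.4f}")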

Common Patterns

RAG (Retrieval-Augmented Generation)

from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader

# Load documents
loader = TextLoader("documents.txt")
documents = loader.load()

# Create embeddings and vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_documents(documents, embeddings)

# Create QA chain
qa_chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever()
)

result = qa_chain.run("What is the main topic of the documents?")
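
Real documents usually exceed a model's context window, so they are normally split into chunks before embedding. A minimal sketch using CharacterTextSplitter (chunk sizes are illustrative):

from langchain.text_splitter import CharacterTextSplitter

# Split into overlapping chunks so each fits comfortably in the context window
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
split_docs = text_splitter.split_documents(documents)

vectorstore = FAISS.from_documents(split_docs, embeddings)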

Custom Tools

from langchain.tools import BaseTool

class CustomCalculatorTool(BaseTool):
    name: str = "Calculator"
    description: str = "Useful for mathematical calculations"

    def _run(self, query: str) -> str:
        # NOTE: eval on untrusted input is unsafe; demo-only
        try:
            return str(eval(query))
        except Exception:
            return "Invalid calculation"

    async def _arun(self, query: str) -> str:
        raise NotImplementedError("Async not implemented")
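
The tool can then be handed to an agent like any built-in tool; a brief sketch continuing the agent setup from above:

calculator = CustomCalculatorTool()

agent = initialize_agent(
    [calculator],
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True
)

agent.run("What is 37 * 43?")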


Alternatives

LlamaIndex

Data framework for LLM applications

Key Strengths:
• Excellent for RAG applications
• Strong indexing capabilities
Best For:
• Document Q&A systems
• Knowledge base applications
Difficulty: Intermediate

Haystack

End-to-end NLP framework

Key Strengths:
• Production-ready pipelines
• Strong search capabilities
Best For:
• Enterprise search
• Production NLP pipelines
Difficulty: Advanced

Quick Decision Guide

Choose LangChain for the recommended stack with proven patterns and comprehensive ecosystem support.
Choose LlamaIndex if your primary need is document Q&A or knowledge-base applications.
Choose Haystack if you need enterprise search or production NLP pipelines.