
Tutorial Info

Difficulty: Beginner
Duration: 45 minutes
Reading Time: 8 min
Last Updated: 2024-01-15

Building Your First AI API with FastAPI and LangChain


Learn how to create a simple AI-powered API using FastAPI and LangChain with step-by-step instructions

What You'll Learn

  • Set up a FastAPI application with proper project structure
  • Integrate LangChain for AI text processing
  • Create API endpoints with Pydantic models
  • Handle errors and validation properly
  • Test your API endpoints

Prerequisites

  • Basic Python knowledge
  • Understanding of REST APIs
  • Python 3.8+ installed

What You'll Build

A working AI-powered API that can process text using LangChain and serve responses through FastAPI endpoints

Overview

In this tutorial, you’ll learn how to build a simple but powerful AI-powered API using FastAPI and LangChain. We’ll create an API that can process text, answer questions, and demonstrate the core concepts of the Pragmatic AI Stack.

By the end of this tutorial, you’ll have a working API that showcases the integration between FastAPI’s high-performance web framework and LangChain’s AI capabilities.

Step 1: Project Setup and Dependencies

Let’s start by setting up our project structure and installing the necessary dependencies.

Create Project Directory

First, create a new directory for your project:

mkdir fastapi-langchain-tutorial
cd fastapi-langchain-tutorial

Set Up Virtual Environment

Create and activate a virtual environment:

python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

Install Dependencies

Install the required packages:

pip install fastapi uvicorn langchain pydantic python-dotenv

Create Requirements File

Save your dependencies:

pip freeze > requirements.txt

Expected Output

Your requirements.txt should look similar to this:

fastapi==0.104.1
uvicorn==0.24.0
langchain==0.0.350
pydantic==2.5.0
python-dotenv==1.0.0

Step 2: Create FastAPI Application Structure

Now let’s create the basic structure for our FastAPI application.

Create Main Application File

Create main.py:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Optional
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Initialize FastAPI app
app = FastAPI(
    title="AI Text Processor API",
    description="A simple API that processes text using LangChain and FastAPI",
    version="1.0.0"
)

# Pydantic models for request/response
class TextRequest(BaseModel):
    text: str
    max_length: Optional[int] = 100

class TextResponse(BaseModel):
    original_text: str
    processed_text: str
    word_count: int
    character_count: int

# Health check endpoint
@app.get("/")
async def root():
    return {"message": "AI Text Processor API is running!"}

@app.get("/health")
async def health_check():
    return {"status": "healthy", "service": "AI Text Processor API"}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
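
The Pydantic models above are what FastAPI uses to validate incoming request bodies. A standalone sketch of that behavior, runnable without starting the server:

```python
from typing import Optional

from pydantic import BaseModel, ValidationError


class TextRequest(BaseModel):
    text: str
    max_length: Optional[int] = 100


# A valid body: the optional field falls back to its default
req = TextRequest(text="hello world")
print(req.max_length)  # 100

# A missing required field raises ValidationError;
# FastAPI turns this into a 422 response automatically
try:
    TextRequest(max_length=10)
    rejected = False
except ValidationError:
    rejected = True
print(rejected)  # True
```

This is why the endpoints themselves contain no manual input checks: malformed requests never reach your handler code.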

Create Environment File

Create .env file for configuration:

# API Configuration
API_HOST=0.0.0.0
API_PORT=8000
DEBUG=True

# LangChain Configuration (we'll use this in the next step)
OPENAI_API_KEY=your_openai_api_key_here
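
main.py reads these values with os.getenv after load_dotenv() has copied the .env entries into the environment. A stdlib-only sketch (the fallback defaults mirror the .env file above):

```python
import os

# load_dotenv() copies .env entries into os.environ; after that,
# os.getenv reads them, with a fallback if a key is absent
API_HOST = os.getenv("API_HOST", "0.0.0.0")
API_PORT = int(os.getenv("API_PORT", "8000"))
DEBUG = os.getenv("DEBUG", "True").lower() == "true"

print(API_HOST, API_PORT, DEBUG)
```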

Test Basic Setup

Run the application:

python main.py

Expected Output

You should see output similar to:

INFO:     Started server process [12345]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)

Visit http://localhost:8000 in your browser to see the API running.

Step 3: Integrate LangChain for Text Processing

Now let’s add LangChain functionality to process text intelligently.

Update Main Application

Replace the content of main.py with:

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import Optional
import os
from dotenv import load_dotenv
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Load environment variables
load_dotenv()

# Initialize FastAPI app
app = FastAPI(
    title="AI Text Processor API",
    description="A simple API that processes text using LangChain and FastAPI",
    version="1.0.0"
)

# Initialize LangChain components
llm = OpenAI(temperature=0.7, openai_api_key=os.getenv("OPENAI_API_KEY"))

# Create prompt template
summary_template = """
Please provide a concise summary of the following text in {max_length} words or less:

Text: {text}

Summary:
"""

summary_prompt = PromptTemplate(
    input_variables=["text", "max_length"],
    template=summary_template
)

summary_chain = LLMChain(llm=llm, prompt=summary_prompt)

# Pydantic models
class TextRequest(BaseModel):
    text: str
    max_length: Optional[int] = 50

class SummaryResponse(BaseModel):
    original_text: str
    summary: str
    original_word_count: int
    summary_word_count: int

class QuestionRequest(BaseModel):
    context: str
    question: str

class QuestionResponse(BaseModel):
    context: str
    question: str
    answer: str

# Health check endpoints
@app.get("/")
async def root():
    return {"message": "AI Text Processor API is running!"}

@app.get("/health")
async def health_check():
    return {"status": "healthy", "service": "AI Text Processor API"}

# Text processing endpoints
@app.post("/summarize", response_model=SummaryResponse)
async def summarize_text(request: TextRequest):
    try:
        # Generate summary using LangChain
        summary = summary_chain.run(
            text=request.text,
            max_length=request.max_length
        )
        
        return SummaryResponse(
            original_text=request.text,
            summary=summary.strip(),
            original_word_count=len(request.text.split()),
            summary_word_count=len(summary.strip().split())
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error processing text: {str(e)}")

@app.post("/question", response_model=QuestionResponse)
async def answer_question(request: QuestionRequest):
    try:
        # Create QA prompt
        qa_template = """
        Based on the following context, please answer the question:
        
        Context: {context}
        
        Question: {question}
        
        Answer:
        """
        
        qa_prompt = PromptTemplate(
            input_variables=["context", "question"],
            template=qa_template
        )
        
        qa_chain = LLMChain(llm=llm, prompt=qa_prompt)
        
        answer = qa_chain.run(
            context=request.context,
            question=request.question
        )
        
        return QuestionResponse(
            context=request.context,
            question=request.question,
            answer=answer.strip()
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Error answering question: {str(e)}")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
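
Behind the scenes, PromptTemplate fills in its input_variables much like Python's str.format. A stdlib-only sketch of the prompt text that summary_prompt produces for a sample request:

```python
# The same template string used for summary_prompt above
summary_template = """
Please provide a concise summary of the following text in {max_length} words or less:

Text: {text}

Summary:
"""

# PromptTemplate.format does essentially this substitution
prompt = summary_template.format(
    text="FastAPI is a modern Python web framework.",
    max_length=20,
)
print(prompt)
```

The filled-in string is what actually gets sent to the LLM, which is why every input variable must appear in both input_variables and the template body.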

Create Mock LangChain Service (for testing without API key)

Create mock_langchain.py so you can exercise the endpoints without an OpenAI API key. To use it, import MockLLM and MockLLMChain in main.py in place of OpenAI and LLMChain; their constructors and run() method mirror the real ones:

"""
Mock LangChain service for testing without API keys
"""

class MockLLM:
    def __init__(self, temperature=0.7, openai_api_key=None):
        self.temperature = temperature
        
    def __call__(self, prompt):
        # Simple mock responses based on prompt content
        if "summary" in prompt.lower():
            return "This is a mock summary of the provided text."
        elif "question" in prompt.lower():
            return "This is a mock answer to your question."
        else:
            return "This is a mock response from the language model."

class MockLLMChain:
    def __init__(self, llm, prompt):
        self.llm = llm
        self.prompt = prompt
        
    def run(self, **kwargs):
        # Generate mock response based on input
        if "text" in kwargs and "max_length" in kwargs:
            text = kwargs["text"]
            max_length = kwargs["max_length"]
            words = text.split()[:max_length//2]  # Rough approximation
            return f"Mock summary: {' '.join(words)}..."
        elif "context" in kwargs and "question" in kwargs:
            return f"Mock answer: Based on the context, the answer relates to the question about {kwargs['question'][:20]}..."
        else:
            return "Mock response generated successfully."

Expected Output

When you run the updated application, you should see the same startup messages, and two new endpoints will be available. Note that both accept POST requests only (opening them in a browser issues a GET and returns 405 Method Not Allowed):

  • POST http://localhost:8000/summarize
  • POST http://localhost:8000/question
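
Both endpoints take a JSON body. A stdlib-only sketch of calling /summarize (assumes the server from main.py is running on port 8000; the urlopen call is left commented out so the snippet also runs offline):

```python
import json
import urllib.request

# JSON body matching the TextRequest model
payload = {"text": "FastAPI is a modern Python web framework.", "max_length": 20}
body = json.dumps(payload).encode("utf-8")

# Supplying data= makes this a POST request
req = urllib.request.Request(
    "http://localhost:8000/summarize",
    data=body,
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```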

Step 4: Add API Endpoints and Testing

Let’s test our API endpoints and add some additional functionality.

Test the API

Create test_api.py:

import requests
import json

# Base URL for our API
BASE_URL = "http://localhost:8000"

def test_health_check():
    """Test the health check endpoint"""
    response = requests.get(f"{BASE_URL}/health")
    print("Health Check:", response.json())

def test_summarize():
    """Test the text summarization endpoint"""
    data = {
        "text": "Artificial Intelligence (AI) is a branch of computer science that aims to create intelligent machines that work and react like humans. Some of the activities computers with artificial intelligence are designed for include speech recognition, learning, planning, and problem solving. AI has been a subject of intense interest and research for decades, with significant breakthroughs in recent years.",
        "max_length": 30
    }
    
    response = requests.post(f"{BASE_URL}/summarize", json=data)
    print("Summarize Response:", json.dumps(response.json(), indent=2))

def test_question_answering():
    """Test the question answering endpoint"""
    data = {
        "context": "FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints. It was created by Sebastian Ramirez and first released in 2018.",
        "question": "Who created FastAPI?"
    }
    
    response = requests.post(f"{BASE_URL}/question", json=data)
    print("Question Answer Response:", json.dumps(response.json(), indent=2))

if __name__ == "__main__":
    print("Testing AI Text Processor API...")
    print("=" * 50)
    
    test_health_check()
    print()
    
    test_summarize()
    print()
    
    test_question_answering()

Run the Tests

In a new terminal (while your API is running):

python test_api.py

Expected Output

You should see output similar to the following. The responses shown come from the mock service; with a real OPENAI_API_KEY configured, the summary and answer will be genuine model output:

Testing AI Text Processor API...
==================================================
Health Check: {'status': 'healthy', 'service': 'AI Text Processor API'}

Summarize Response: {
  "original_text": "Artificial Intelligence (AI) is a branch...",
  "summary": "Mock summary: Artificial Intelligence (AI) is a branch of computer...",
  "original_word_count": 45,
  "summary_word_count": 12
}

Question Answer Response: {
  "context": "FastAPI is a modern, fast (high-performance)...",
  "question": "Who created FastAPI?",
  "answer": "Mock answer: Based on the context, the answer relates to the question about Who created FastAPI..."
}

Add Interactive API Documentation

FastAPI automatically generates interactive API documentation. Visit:

  • http://localhost:8000/docs - Swagger UI
  • http://localhost:8000/redoc - ReDoc

You can test your API endpoints directly from these interfaces!

Congratulations!

You’ve successfully built your first AI-powered API using FastAPI and LangChain! Your API can now:

  • Process and summarize text
  • Answer questions based on context
  • Provide interactive documentation
  • Handle errors gracefully
  • Follow REST API best practices

The combination of FastAPI’s performance and LangChain’s AI capabilities gives you a solid foundation for building more complex AI applications.

Next Steps

  • Add authentication to your API
  • Implement caching for better performance
  • Deploy your API to production
  • Explore advanced LangChain features
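
For the caching item, a lightweight starting point is functools.lru_cache around the summarization call, so repeated requests for identical text skip the LLM entirely. A sketch with a placeholder body standing in for summary_chain.run:

```python
from functools import lru_cache

calls = {"count": 0}


@lru_cache(maxsize=256)
def cached_summary(text: str, max_length: int = 50) -> str:
    """Placeholder for summary_chain.run(text=..., max_length=...)."""
    calls["count"] += 1
    return f"summary of {len(text.split())} words (max {max_length})"


cached_summary("FastAPI is fast", 20)
cached_summary("FastAPI is fast", 20)  # identical args: served from cache
print(calls["count"])  # 1 - the chain ran only once
```

Note that lru_cache is process-local and never expires entries; for production you would more likely reach for a TTL-based cache or an external store such as Redis.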

Troubleshooting

Import errors with LangChain

Make sure the pinned release from requirements.txt is installed: pip install langchain==0.0.350. The from langchain.llms import OpenAI style imports used in this tutorial match the 0.0.x series; newer LangChain releases moved these classes into separate packages (langchain-community, langchain-openai), so upgrading will break them.

FastAPI server won't start

Check that port 8000 is not already in use. To use a different port, change the port argument in the uvicorn.run call, or start the server directly with uvicorn main:app --port 8001.
