How to Use Pydantic AI: Building Type-Safe LLM Applications

4 min read

Pydantic AI is a powerful library that enables you to create AI agents that return structured responses from Large Language Models (LLMs). It’s particularly useful when you need consistent, type-safe outputs from your AI interactions.

Why Pydantic AI?

When working with LLMs, getting structured and validated outputs can be challenging. Pydantic AI solves this by:

  • Creating agents that return structured data using Pydantic models
  • Supporting multiple LLM providers (OpenAI, Google, Anthropic); see the model-string sketch after this list
  • Providing type safety and validation
  • Enabling tool creation and agent delegation
  • Handling retries and error cases automatically
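
For example, switching providers is mostly a matter of changing the model string; the agent code stays the same. A minimal sketch (the exact model names are assumptions and depend on what your installed version supports):

from pydantic_ai import Agent

# The provider prefix selects the backend; the part after the colon is the model.
# These names are assumptions -- check the Pydantic AI model docs for current identifiers.
openai_agent = Agent('openai:gpt-4o')
anthropic_agent = Agent('anthropic:claude-3-5-sonnet-latest')
gemini_agent = Agent('google-gla:gemini-1.5-flash')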

Installation

You can install Pydantic AI using your preferred package manager:

# Using pip
pip install pydantic-ai

# Using uv (a faster alternative)
uv add pydantic-ai

Basic Usage

Here’s a simple example of creating an agent that provides structured responses:

from pydantic_ai import Agent
from pydantic import BaseModel

class SupportOutput(BaseModel):
    support_advice: str
    block_card: bool
    risk: int

# Create an agent with specific model and output type
support_agent = Agent(
    'openai:gpt-4o',
    output_type=SupportOutput,
    system_prompt=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)

result = support_agent.run_sync('I just lost my card!')
print(result.output)
# Output: support_advice="I'm sorry to hear that. We are temporarily blocking your card..."
#         block_card=True
#         risk=8
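
Because the agent was created with output_type=SupportOutput, result.output is a validated SupportOutput instance rather than raw text, so its fields can be used directly and are visible to static type checkers:

# result.output is a SupportOutput instance, not a raw string
print(result.output.risk)        # e.g. 8
if result.output.block_card:
    print('Blocking the card for this customer')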

Advanced Features

1. Dependency Injection

Pydantic AI supports dependency injection, allowing you to pass context to your agents:

from dataclasses import dataclass
from pydantic_ai import Agent, RunContext

# DatabaseConn is your own database access class; it is not part of Pydantic AI.
# For RunContext[SupportDependencies] to be typed correctly, the agent must also
# be created with deps_type=SupportDependencies (see the full example below).
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn

@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies],
    include_pending: bool
) -> float:
    """Returns the customer's current account balance."""
    return await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
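
Dependencies are supplied when the agent is run. A minimal sketch, assuming DatabaseConn is your own connection class, the customer id is illustrative, and the agent was created with deps_type=SupportDependencies:

deps = SupportDependencies(customer_id=123, db=DatabaseConn())

result = support_agent.run_sync(
    'What is my current balance?',
    deps=deps,
)
print(result.output)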

2. Agent Delegation

You can create multi-agent systems where agents delegate tasks to other agents:

from pydantic_ai import Agent, RunContext
from pydantic_ai.usage import UsageLimits

# Create specialized agents
joke_selection_agent = Agent(
    'openai:gpt-4o',
    system_prompt=(
        'Use the `joke_factory` to generate some jokes, '
        'then choose the best. Return just a single joke.'
    ),
)

joke_generation_agent = Agent(
    'google-gla:gemini-1.5-flash',
    output_type=list[str]
)

# Set up delegation
@joke_selection_agent.tool
async def joke_factory(ctx: RunContext[None], count: int) -> list[str]:
    r = await joke_generation_agent.run(
        f'Please generate {count} jokes.',
        usage=ctx.usage,
    )
    return r.output

result = joke_selection_agent.run_sync(
    'Tell me a joke.',
    usage_limits=UsageLimits(
        request_limit=5,
        total_tokens_limit=300
    ),
)
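
The outer result carries both the chosen joke and the usage accumulated across the two agents, because ctx.usage was passed through to the delegated run. A short sketch of inspecting it (the printed joke is illustrative):

print(result.output)
# e.g. 'Why did the scarecrow win an award? Because he was outstanding in his field.'

print(result.usage())
# Requests and token counts include the delegated joke_generation_agent run.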

3. Error Handling and Retries

Pydantic AI provides robust error handling and retry mechanisms:

from pydantic_ai.exceptions import UnexpectedModelBehavior, UsageLimitExceeded
from pydantic_ai.usage import UsageLimits

# `agent` is any Agent instance, e.g. support_agent from above
try:
    result = agent.run_sync(
        'Begin task!',
        usage_limits=UsageLimits(request_limit=3),
    )
except UsageLimitExceeded as e:
    print(f"Usage limit exceeded: {e}")
except UnexpectedModelBehavior as e:
    print(f"Unexpected model behavior: {e}")

Best Practices

  1. Define Clear Output Types

    • Use Pydantic models to specify exact output structures
    • Add field descriptions and validations
    • Keep models focused and specific
  2. Manage Resources

    • Set appropriate usage limits
    • Use dependency injection for external resources
    • Implement proper error handling
  3. Agent Design

    • Break complex tasks into smaller, specialized agents
    • Use agent delegation for complex workflows
    • Provide clear system prompts
  4. Performance

    • Choose appropriate models for different tasks
    • Implement caching when possible
    • Use batch processing for multiple items (see the concurrency sketch after this list)
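
For batch processing, the async run method combines naturally with asyncio.gather to handle many inputs concurrently. A minimal sketch (the agent, prompt, and inputs are placeholders):

import asyncio

from pydantic_ai import Agent

summary_agent = Agent(
    'openai:gpt-4o',
    system_prompt='Summarise the given text in one sentence.',
)

async def summarise_all(texts: list[str]) -> list[str]:
    # Each agent.run call is an independent request; gather runs them concurrently.
    results = await asyncio.gather(*(summary_agent.run(text) for text in texts))
    return [r.output for r in results]

# asyncio.run(summarise_all(['first document ...', 'second document ...']))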

Example: Bank Support System

Here’s a complete example of a bank support system:

from pydantic import BaseModel, Field
from pydantic_ai import Agent, RunContext
from dataclasses import dataclass

# Dependencies (DatabaseConn is your own database access class)
@dataclass
class SupportDependencies:
    customer_id: int
    db: DatabaseConn

# Output structure
class SupportOutput(BaseModel):
    support_advice: str = Field(
        description='Advice returned to the customer'
    )
    block_card: bool = Field(
        description="Whether to block the customer's card"
    )
    risk: int = Field(
        description='Risk level of query',
        ge=0, le=10
    )

support_agent = Agent(
    'openai:gpt-4o',
    deps_type=SupportDependencies,
    output_type=SupportOutput,
    system_prompt=(
        'You are a support agent in our bank, give the '
        'customer support and judge the risk level of their query.'
    ),
)

# Add dynamic system prompt
@support_agent.system_prompt
async def add_customer_name(ctx: RunContext[SupportDependencies]) -> str:
    customer_name = await ctx.deps.db.customer_name(id=ctx.deps.customer_id)
    return f"The customer's name is {customer_name!r}"

@support_agent.tool
async def customer_balance(
    ctx: RunContext[SupportDependencies],
    include_pending: bool
) -> float:
    return await ctx.deps.db.customer_balance(
        id=ctx.deps.customer_id,
        include_pending=include_pending,
    )
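
To run the agent, build the dependencies and pass them via deps. A short sketch, assuming DatabaseConn is your own connection class and the customer id is illustrative:

deps = SupportDependencies(customer_id=123, db=DatabaseConn())

result = support_agent.run_sync('I just lost my card!', deps=deps)
print(result.output)
# e.g. support_advice="..." block_card=True risk=8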

Conclusion

Pydantic AI is a powerful tool for creating structured AI applications. It combines the flexibility of LLMs with the safety of type checking and validation, making it ideal for production applications.

Key takeaways:

  • Use structured outputs for consistent responses
  • Leverage dependency injection for context
  • Implement proper error handling
  • Use agent delegation for complex tasks
  • Set appropriate usage limits