Agent Track · Medium-Hard · 0.5-1 day

LangGraph Tool Loop

Build a stateful agent workflow with explicit graph structure. This is the same pattern Tenzai's Bonzai agent is built on.

🎯 The Mission

Tenzai's Bonzai exploiter agent is built on LangGraph. Understanding its core concepts (state graphs, nodes, edges, and conditional routing) is essential for working on the agent codebase.

Build a LangGraph agent with the same tool-calling loop pattern used in production. Add graceful handling when max iterations are reached.

LangGraph Core Concepts

BASIC TOOL LOOP GRAPH
    ┌─────────────┐
    │   START     │
    └──────┬──────┘
           │
           ▼
    ┌─────────────┐
    │ call_model  │◄────────────┐
    └──────┬──────┘             │
           │                    │
           ▼                    │
    ┌─────────────┐             │
    │  has_tools? │             │
    └──────┬──────┘             │
           │                    │
      ┌────┴────┐               │
      │         │               │
      ▼         ▼               │
    [YES]     [NO]              │
      │         │               │
      ▼         ▼               │
    ┌─────┐  ┌─────┐            │
    │tools│  │ END │            │
    └──┬──┘  └─────┘            │
       │                        │
       └────────────────────────┘
            
Key Insight: LangGraph makes the control flow explicit. Instead of implicit loops in your code, you define nodes (functions) and edges (transitions). This makes complex agent behaviors easier to understand, debug, and extend.
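
For intuition before the full build below, here is a minimal, self-contained sketch of a two-node graph. The state schema and node names (shout, punctuate) are illustrative and not part of the exercise; they only show that nodes are plain functions and edges are declared transitions.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    text: str

# Nodes are plain functions that receive the state and return a partial update.
def shout(state: State) -> dict:
    return {"text": state["text"].upper()}

def punctuate(state: State) -> dict:
    return {"text": state["text"] + "!"}

# Edges declare the transitions explicitly instead of hiding them in a loop.
builder = StateGraph(State)
builder.add_node("shout", shout)
builder.add_node("punctuate", punctuate)
builder.add_edge(START, "shout")
builder.add_edge("shout", "punctuate")
builder.add_edge("punctuate", END)

print(builder.compile().invoke({"text": "hello"}))  # {'text': 'HELLO!'}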

What You'll Learn

Implementation Guide

Step 1: Define State

from typing import Annotated
from langgraph.graph.message import add_messages
from pydantic import BaseModel
from langchain_core.messages import BaseMessage

class AgentState(BaseModel):
    # add_messages is a reducer that appends new messages to existing ones
    messages: Annotated[list[BaseMessage], add_messages]
    iteration: int = 0
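
To see what the reducer does, the sketch below calls add_messages directly (the message ids are illustrative). Because of this annotation, a node that returns {"messages": [response]} appends to the existing history instead of replacing it.

from langchain_core.messages import AIMessage, HumanMessage

existing = [HumanMessage(content="hi", id="1")]
new = [AIMessage(content="hello!", id="2")]

# add_messages appends new messages and matches/updates existing ones by id.
merged = add_messages(existing, new)
print([m.content for m in merged])  # ['hi', 'hello!']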

Step 2: Define Tools

from langchain_core.tools import tool

@tool
def calculator(expression: str) -> str:
    """Evaluate a math expression."""
    try:
        # NOTE: eval on arbitrary input is unsafe; acceptable only for this toy tool.
        return str(eval(expression))
    except Exception as e:
        return f"Error: {e}"

@tool
def read_file(path: str) -> str:
    """Read contents of a file."""
    try:
        with open(path, 'r') as f:
            return f.read()
    except FileNotFoundError:
        return f"File not found: {path}"

tools = [calculator, read_file]
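
The decorated tools can be exercised on their own before wiring them into the graph; @tool produces a structured tool whose invoke method takes a dict of arguments.

# Quick local check of the tools, independent of any LLM call.
print(calculator.invoke({"expression": "2 + 2"}))        # "4"
print(read_file.invoke({"path": "does_not_exist.txt"}))  # "File not found: ..."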

Step 3: Create Nodes

from langchain_openai import ChatOpenAI
from langchain_core.messages import AIMessage

model = ChatOpenAI(model="gpt-4").bind_tools(tools)

async def call_model(state: AgentState) -> dict:
    """Call the LLM with current messages."""
    response = await model.ainvoke(state.messages)
    return {
        "messages": [response],
        "iteration": state.iteration + 1
    }

def should_continue(state: AgentState) -> str:
    """Determine next step based on last message."""
    last_message = state.messages[-1]
    
    # If the LLM made tool calls, execute them
    if isinstance(last_message, AIMessage) and last_message.tool_calls:
        return "tools"
    
    # Otherwise, we're done
    return "end"

Step 4: Build the Graph

from langgraph.graph import StateGraph, END
from langgraph.prebuilt import ToolNode

def create_workflow():
    # Create tool execution node
    tool_node = ToolNode(tools)
    
    # Build graph
    workflow = StateGraph(AgentState)
    
    # Add nodes
    workflow.add_node("call_model", call_model)
    workflow.add_node("tools", tool_node)
    
    # Set entry point
    workflow.set_entry_point("call_model")
    
    # Add conditional edge from call_model
    workflow.add_conditional_edges(
        "call_model",
        should_continue,
        {
            "tools": "tools",
            "end": END
        }
    )
    
    # After tools, always go back to model
    workflow.add_edge("tools", "call_model")
    
    return workflow.compile()

graph = create_workflow()
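
Once compiled, the topology can be inspected to confirm it matches the diagram above; whether the drawing helpers are available depends on your langgraph / langchain-core versions.

# Prints a Mermaid description of the compiled graph (call_model, tools, END).
print(graph.get_graph().draw_mermaid())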

Step 5: Run with Recursion Limit

from langchain_core.messages import HumanMessage
from langgraph.errors import GraphRecursionError

async def run_agent(user_input: str, max_steps: int = 10):
    initial_state = {
        "messages": [HumanMessage(content=user_input)],
        "iteration": 0
    }
    
    config = {"recursion_limit": max_steps}
    final_messages = initial_state["messages"]
    
    try:
        # Stream node-by-node updates and keep the latest messages, so the
        # graph does not have to be invoked a second time for the final answer.
        async for event in graph.astream(initial_state, config=config):
            for node_name, state_update in event.items():
                print(f"[{node_name}] {state_update}")
                if state_update and "messages" in state_update:
                    final_messages = state_update["messages"]
        
        return final_messages[-1].content
        
    except GraphRecursionError:
        return "Reached maximum iterations - task incomplete"

Step 6: Add Graceful Exit (Like Bonzai)

async def graceful_exit_on_max_steps(state: AgentState) -> str:
    """When max iterations hit, ask the model to summarize progress."""
    summary_prompt = """You've reached the maximum number of steps.
    Please summarize:
    1. What you accomplished
    2. What remains incomplete
    3. Any recommendations for next steps"""
    
    messages = state.messages + [HumanMessage(content=summary_prompt)]
    response = await model.ainvoke(messages)
    return response.content
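
One way to wire this in, as a sketch rather than the only option: track the messages accumulated while streaming, and on GraphRecursionError hand the partial transcript to the summarizer instead of returning a bare error string.

async def run_agent_graceful(user_input: str, max_steps: int = 10):
    messages: list[BaseMessage] = [HumanMessage(content=user_input)]
    config = {"recursion_limit": max_steps}
    
    try:
        async for event in graph.astream({"messages": messages, "iteration": 0}, config=config):
            for _node, update in event.items():
                if update and "messages" in update:
                    # Mirror the graph's reducer so we keep a full local transcript.
                    messages = add_messages(messages, update["messages"])
        return messages[-1].content
        
    except GraphRecursionError:
        # Summarize whatever progress was made before the limit was hit.
        return await graceful_exit_on_max_steps(AgentState(messages=messages))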

Resources

📖 LangGraph Documentation

Official docs with concepts and tutorials

🎓 LangGraph Tutorial

Step-by-step introduction to building agents

💻 Example Code

Official examples from the LangGraph repo

📖 Anthropic Agent Patterns

Understanding when to use graph-based agents

✓ Success Criteria

📋 Progress Checklist

Stretch Goals