
A ready-to-run example is included at the end of this page.

Overview

Agent delegation allows a main agent to spawn multiple sub-agents and delegate tasks to them for parallel processing. Each sub-agent runs independently with its own conversation context and returns results that the main agent can consolidate and process further. This pattern is useful when:
  • Breaking down complex problems into independent subtasks
  • Processing multiple related tasks in parallel
  • Separating concerns between different specialized sub-agents
  • Improving throughput for parallelizable work

How It Works

The delegation system consists of two main operations:

1. Spawning Sub-Agents

Before delegating work, the agent must first spawn sub-agents with meaningful identifiers:
# Agent uses the delegate tool to spawn sub-agents
{
    "command": "spawn",
    "ids": ["lodging", "activities"]
}
Each spawned sub-agent:
  • Gets a unique identifier that the agent specifies (e.g., “lodging”, “activities”)
  • Inherits the same LLM configuration as the parent agent
  • Operates in the same workspace as the main agent
  • Maintains its own independent conversation context

2. Delegating Tasks

Once sub-agents are spawned, the agent can delegate tasks to them:
# Agent uses the delegate tool to assign tasks
{
    "command": "delegate",
    "tasks": {
        "lodging": "Find the best budget-friendly areas to stay in London",
        "activities": "List top 5 must-see attractions and hidden gems in London"
    }
}
The delegate operation:
  • Runs all sub-agent tasks in parallel using threads
  • Blocks until all sub-agents complete their work
  • Returns a single consolidated observation with all results
  • Handles errors gracefully and reports them per sub-agent
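The fan-out/consolidate behavior described above can be sketched in plain Python. This is not the SDK's actual implementation; `run_task` is a hypothetical stand-in for invoking one sub-agent:

```python
from concurrent.futures import ThreadPoolExecutor

def delegate(tasks: dict, run_task) -> dict:
    """Run each task in a separate thread and consolidate results per ID.

    Errors are captured per sub-agent instead of aborting the whole batch.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=max(len(tasks), 1)) as pool:
        futures = {
            agent_id: pool.submit(run_task, agent_id, task)
            for agent_id, task in tasks.items()
        }
        for agent_id, future in futures.items():
            try:
                # Blocks until this sub-agent finishes
                results[agent_id] = {"status": "ok", "result": future.result()}
            except Exception as exc:
                results[agent_id] = {"status": "error", "error": str(exc)}
    return results
```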

Setting Up the DelegateTool

1. Register the Tool

from openhands.sdk.tool import register_tool
from openhands.tools.delegate import DelegateTool

register_tool("DelegateTool", DelegateTool)
2. Add to Agent Tools

from openhands.sdk import Agent, Tool
from openhands.tools.preset.default import get_default_tools

tools = get_default_tools(enable_browser=False)
tools.append(Tool(name="DelegateTool"))

agent = Agent(llm=llm, tools=tools)
3. Configure Maximum Sub-Agents (Optional)

You can limit the maximum number of concurrent sub-agents by subclassing the tool:
from openhands.tools.delegate import DelegateTool

class CustomDelegateTool(DelegateTool):
    @classmethod
    def create(cls, conv_state, max_children: int = 3):
        # Only allow up to 3 sub-agents
        return super().create(conv_state, max_children=max_children)

register_tool("DelegateTool", CustomDelegateTool)

Tool Commands

spawn

Initialize sub-agents with meaningful identifiers. Parameters:
  • command: "spawn"
  • ids: List of string identifiers (e.g., ["research", "implementation", "testing"])
Returns: A message indicating the sub-agents were successfully spawned. Example:
{
    "command": "spawn",
    "ids": ["research", "implementation", "testing"]
}

delegate

Send tasks to specific sub-agents and wait for results. Parameters:
  • command: "delegate"
  • tasks: Dictionary mapping sub-agent IDs to task descriptions
Returns: A consolidated message containing all results from the sub-agents. Example:
{
    "command": "delegate",
    "tasks": {
        "research": "Find best practices for async code",
        "implementation": "Refactor the MyClass class",
        "testing": "Write unit tests for the refactored code"
    }
}
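A delegate call can only target sub-agent IDs that were previously spawned. A minimal sketch of that precondition check (illustrative only, not the SDK's code):

```python
def validate_tasks(spawned_ids, tasks: dict) -> bool:
    """Ensure every task in a delegate call targets a spawned sub-agent."""
    unknown = set(tasks) - set(spawned_ids)
    if unknown:
        raise ValueError(f"unknown sub-agent ids: {sorted(unknown)}")
    return True
```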

Ready-to-run Example

This example is available on GitHub: examples/01_standalone_sdk/25_agent_delegation.py
"""
Agent Delegation Example

This example demonstrates the agent delegation feature where a main agent
delegates tasks to sub-agents for parallel processing.
Each sub-agent runs independently and returns its results to the main agent,
which then merges both analyses into a single consolidated report.
"""

import os

from openhands.sdk import (
    LLM,
    Agent,
    AgentContext,
    Conversation,
    Tool,
    get_logger,
)
from openhands.sdk.context import Skill
from openhands.sdk.subagent import register_agent
from openhands.sdk.tool import register_tool
from openhands.tools import register_builtins_agents
from openhands.tools.delegate import (
    DelegateTool,
    DelegationVisualizer,
)


logger = get_logger(__name__)

# Configure LLM and agent
llm = LLM(
    model=os.getenv("LLM_MODEL", "anthropic/claude-sonnet-4-5-20250929"),
    api_key=os.getenv("LLM_API_KEY"),
    base_url=os.environ.get("LLM_BASE_URL", None),
    usage_id="agent",
)


def create_lodging_planner(llm: LLM) -> Agent:
    """Create a lodging planner focused on London stays."""
    skills = [
        Skill(
            name="lodging_planning",
            content=(
                "You specialize in finding great places to stay in London. "
                "Provide 3-4 hotel recommendations with neighborhoods, quick "
                "pros/cons, "
                "and notes on transit convenience. Keep options varied by budget."
            ),
            trigger=None,
        )
    ]
    return Agent(
        llm=llm,
        tools=[],
        agent_context=AgentContext(
            skills=skills,
            system_message_suffix="Focus only on London lodging recommendations.",
        ),
    )


def create_activities_planner(llm: LLM) -> Agent:
    """Create an activities planner focused on London itineraries."""
    skills = [
        Skill(
            name="activities_planning",
            content=(
                "You design concise London itineraries. Suggest 2-3 daily "
                "highlights, grouped by proximity to minimize travel time. "
                "Include food/coffee stops "
                "and note required tickets/reservations."
            ),
            trigger=None,
        )
    ]
    return Agent(
        llm=llm,
        tools=[],
        agent_context=AgentContext(
            skills=skills,
            system_message_suffix="Plan practical, time-efficient days in London.",
        ),
    )


# Register user-defined agent types (default agent type is always available)
register_agent(
    name="lodging_planner",
    factory_func=create_lodging_planner,
    description="Finds London lodging options with transit-friendly picks.",
)
register_agent(
    name="activities_planner",
    factory_func=create_activities_planner,
    description="Creates time-efficient London activity itineraries.",
)
register_builtins_agents()

# Make the delegation tool available to the main agent
register_tool("DelegateTool", DelegateTool)

main_agent = Agent(
    llm=llm,
    tools=[Tool(name="DelegateTool")],
)
conversation = Conversation(
    agent=main_agent,
    workspace=os.getcwd(),
    visualizer=DelegationVisualizer(name="Delegator"),
)

print("=" * 100)
print("Demonstrating London trip delegation (lodging + activities)...")
print("=" * 100)

conversation.send_message("""
Let's plan a trip to London. I have two specific areas to address:

Lodging: What are the best areas to stay in while keeping a budget in mind?
Activities: What are the top five must-see attractions and hidden gems?

Please use delegation tools to handle these two tasks in parallel.
Ensure the sub-agents use their own internal knowledge and do not
rely on internet access. Keep the responses concise.
Once you have the results, use the bash sub-agent to write a file
named london_trip_report.txt containing the findings in the working directory.
""")
conversation.run()

conversation.send_message(
    "Ask the lodging sub-agent what it thinks about Covent Garden."
)
conversation.run()

# Report cost for user-defined agent types example
cost_user_defined = (
    conversation.conversation_stats.get_combined_metrics().accumulated_cost
)
print(f"EXAMPLE_COST: {cost_user_defined}")

print("All done!")
You can run the example code as-is.
The model name should follow the LiteLLM convention: provider/model_name (e.g., anthropic/claude-sonnet-4-5-20250929, openai/gpt-4o). The LLM_API_KEY should be the API key for your chosen provider.
ChatGPT Plus/Pro subscribers: You can use LLM.subscription_login() to authenticate with your ChatGPT account and access Codex models without consuming API credits. See the LLM Subscriptions guide for details.
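To run the example from a checkout of the repository, set the environment variables the script reads and invoke it with Python (the key and base URL values below are placeholders):

```shell
# Provider credentials and model (values are placeholders)
export LLM_API_KEY="your-api-key"
export LLM_MODEL="anthropic/claude-sonnet-4-5-20250929"
# Optional: only needed when routing through a custom endpoint
# export LLM_BASE_URL="https://your-proxy.example.com"

python examples/01_standalone_sdk/25_agent_delegation.py
```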