Over the past year, AI developers have relied on the ReAct (Reasoning + Acting) pattern – a simple loop in which a model reasons, chooses a tool, and executes it. But as any software engineer who has tried to move these agents into production knows, simple loops are brittle. They hallucinate, they lose track of complex goals, and they struggle with ‘tool noise’ when faced with too many APIs.
Composio’s team is moving the goalposts by open-sourcing Agent Orchestrator. The framework is designed to move the industry from ‘agentic loops’ to ‘agentic workflows’ – structured, stateful, and verifiable systems that make AI agents behave like trusted software modules rather than unpredictable chatbots.

Architecture: Planner vs. Executor
The main philosophy behind Agent Orchestrator is a strict separation of concerns. In the traditional setup, a single LLM is expected to both plan the strategy and execute the technical details. This often leads to ‘greedy’ decision-making in which the model skips important steps.
Composio’s Orchestrator introduces a dual-layered architecture:
- Planner: This layer is responsible for functional decomposition. It takes a high-level objective – like ‘Find all high-priority GitHub issues and summarize them in a Notion page’ – and breaks it down into a sequence of verifiable subtasks.
- Executor: This layer handles the actual tool interactions. By separating execution, the system can use dedicated prompts, or even different models, to do the heavy lifting of API calls without cluttering the global planning logic.
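The planner/executor split described above can be sketched in a few lines of Python. This is a hypothetical illustration of the architecture, not the actual Agent Orchestrator API: the `Planner`, `Executor`, and `Subtask` names, and the stubbed tool functions, are all assumptions made for the example.

```python
# Hypothetical sketch of a planner/executor split; names are illustrative,
# not the real Agent Orchestrator API.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Subtask:
    name: str                 # a verifiable unit of work, e.g. "fetch_issues"
    params: dict = field(default_factory=dict)

class Planner:
    """Decomposes a high-level objective into ordered, verifiable subtasks."""
    def plan(self, objective: str) -> list[Subtask]:
        # In practice an LLM produces this decomposition; here it is stubbed.
        return [
            Subtask("fetch_issues", {"label": "high-priority"}),
            Subtask("summarize"),
            Subtask("write_notion_page"),
        ]

class Executor:
    """Runs one subtask at a time against a registry of tool callables."""
    def __init__(self, tools: dict[str, Callable[[dict], str]]):
        self.tools = tools

    def run(self, task: Subtask) -> str:
        return self.tools[task.name](task.params)

# Stubbed tools standing in for real API integrations:
tools = {
    "fetch_issues": lambda p: f"3 issues labeled {p['label']}",
    "summarize": lambda p: "summary of 3 issues",
    "write_notion_page": lambda p: "notion page created",
}

executor = Executor(tools)
plan = Planner().plan("Summarize high-priority GitHub issues in Notion")
results = [executor.run(task) for task in plan]
```

The key design point is that the `Planner` never touches an API and the `Executor` never reasons about the overall goal, so each layer can be tested, swapped, or backed by a different model independently.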
Solving the ‘tool noise’ problem
The most significant constraint on agent performance is often the context window. If you give an agent access to 100 tools, the documentation of those tools consumes thousands of tokens, confusing the model and increasing the likelihood of hallucinating parameters.
Agent Orchestrator solves this with a Managed Toolset. Instead of exposing every capability at once, the orchestrator dynamically delivers only the required tool definitions to the agent based on the current step in the workflow. This ‘just-in-time’ context management keeps the LLM’s signal-to-noise ratio high, leading to a significantly higher success rate in function calling.
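The idea of just-in-time tool delivery can be shown with a simple filter over a tool registry. The registry entries, tag scheme, and function below are assumptions for illustration, not Composio’s actual mechanism:

```python
# Illustrative just-in-time tool routing: only the definitions relevant to the
# current workflow step enter the model's context. Names are assumptions.
TOOL_REGISTRY = {
    "github_list_issues": {"doc": "List issues in a repo", "tags": {"github"}},
    "github_create_pr":   {"doc": "Open a pull request",   "tags": {"github"}},
    "notion_create_page": {"doc": "Create a Notion page",  "tags": {"notion"}},
    "slack_post_message": {"doc": "Post to Slack",         "tags": {"slack"}},
}

def tools_for_step(step_tags: set[str]) -> dict:
    """Return only the tool definitions whose tags intersect the current step."""
    return {
        name: spec
        for name, spec in TOOL_REGISTRY.items()
        if spec["tags"] & step_tags
    }

# A 'write summary to Notion' step never sees the Slack or GitHub tools,
# so their documentation never consumes context tokens:
context_tools = tools_for_step({"notion"})
```

With 100+ tools, the payoff is that the prompt for any single step carries only a handful of definitions instead of the full catalog.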
State management and observability
One of the most frustrating aspects of early AI engineering is the ‘black box’ nature of agents. When an agent fails, it is often difficult to tell whether the failure was caused by poor planning, a failed API call, or lost context.
Agent Orchestrator introduces Stateful Orchestration. Unlike stateless loops, which effectively ‘restart’ on every iteration or rely on messy chat history, Orchestrator maintains a structured state machine.
- Resilience: If a tool call fails (for example, a 500 error from a third-party API), the orchestrator can trigger a specific error-handling branch without crashing the entire workflow.
- Traceability: Every decision point is logged from initial planning to final execution. This provides the level of observability needed to debug production-grade software.
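Both properties fall out of the same structure: an explicit state machine plus an append-only trace. Here is a minimal sketch under assumed names (`State`, `run_workflow`) that are not from the actual framework:

```python
# Minimal stateful orchestration loop with an audit trail; illustrative only.
from enum import Enum, auto

class State(Enum):
    PLANNING = auto()
    EXECUTING = auto()
    RECOVERING = auto()
    DONE = auto()

def run_workflow(steps, trace):
    """Run steps in order, logging every transition and outcome to `trace`."""
    state = State.PLANNING
    trace.append(("enter", state.name))
    state = State.EXECUTING
    for step in steps:
        try:
            trace.append(("ok", step()))
        except RuntimeError as exc:        # e.g. a 500 from a third-party API
            state = State.RECOVERING
            trace.append(("error", str(exc)))
            state = State.EXECUTING        # error branch handled; keep going
    state = State.DONE
    trace.append(("enter", state.name))
    return state

def flaky_step():
    raise RuntimeError("HTTP 500")

trace = []
final = run_workflow([lambda: "fetched", flaky_step, lambda: "written"], trace)
```

After a run, `trace` answers exactly the debugging question raised above: which step failed, with what error, and what completed before and after it.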
Key takeaways
- Decoupling planning from execution: The framework moves away from the simple ‘reason + act’ loop by separating the Planner (which decomposes goals into subtasks) from the Executor (which handles the API calls). This reduces ‘greedy’ decision-making and improves task accuracy.
- Dynamic Tool Routing (Context Management): To prevent LLM ‘noise’ and hallucinations, the orchestrator only feeds tool definitions relevant to the current task into the model. This ‘just-in-time’ context management ensures high signal-to-noise ratio even when managing 100+ APIs.
- Centralized stateful orchestration: Unlike stateless agents, which rely on unstructured chat history, Orchestrator keeps a structured state machine. This enables resume-on-failure capabilities and provides a clear audit trail for debugging production-grade AI.
- Built-in error recovery and resilience: The framework introduces structured ‘correction loops’. If a tool call fails or returns an error (such as a 404 or 500), the orchestrator can trigger specific recovery logic without losing the progress of the entire workflow.
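The correction-loop idea in the last takeaway can be sketched as a retry wrapper that distinguishes retryable errors from fatal ones. The status codes, `ToolError` class, and `call_with_correction` helper are assumptions for illustration:

```python
# Sketch of a structured correction loop: retryable HTTP errors are retried,
# everything else escalates to a separate recovery branch. Illustrative names.
RETRYABLE = {500, 502, 503}

class ToolError(Exception):
    def __init__(self, status: int):
        self.status = status
        super().__init__(f"HTTP {status}")

def call_with_correction(tool, max_attempts: int = 3):
    """Retry a tool call on transient errors; re-raise anything else."""
    for attempt in range(1, max_attempts + 1):
        try:
            return tool()
        except ToolError as exc:
            if exc.status not in RETRYABLE or attempt == max_attempts:
                raise  # hand off to a non-retry recovery branch (e.g. 404)

# A tool that fails once with a transient 500, then succeeds:
calls = {"n": 0}
def flaky_tool():
    calls["n"] += 1
    if calls["n"] == 1:
        raise ToolError(500)
    return "ok"

result = call_with_correction(flaky_tool)
```

Because only the failing step is retried, any subtasks completed earlier in the workflow keep their results, which is the ‘without losing progress’ property described above.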
Check out the GitHub repo and technical details.

