LangChain Releases Deep Agents: A Structured Runtime for Planning, Memory, and Context Isolation in Multi-Step AI Agents

by ai-intensify

Most LLM agents work well for small tool-calling loops, but they start to break down when the task becomes multi-step, stateful, and artifact-heavy. LangChain's Deep Agents is designed for that gap. LangChain describes the project as an "agent harness": a standalone library built on top of LangChain's agent building blocks and powered by the LangGraph runtime for durable execution, streaming, and human-in-the-loop workflows.

Importantly, Deep Agents does not introduce any new reasoning model or any runtime beyond LangGraph. Instead, it packages a set of defaults and built-in tools around a standard tool-calling loop. The LangChain team positions it as an easy starting point for developers who need agents that can plan, manage larger contexts, delegate sub-tasks, and carry information across conversations, with the option to move to simpler LangChain agents or custom LangGraph workflows if needed.

What is included by default in Deep Agents

The Deep Agents GitHub repository lists the main components directly. These include a planning tool called write_todos; file system tools such as read_file, write_file, edit_file, ls, glob, and grep; shell access via execute with sandboxing; a task tool for spawning sub-agents; and built-in context-management features such as auto-summarization and saving large outputs to files.

That framing matters because many agent frameworks leave planning, intermediate storage, and subtask delegation to the application developer. Deep Agents moves those pieces into the default runtime.

Planning and work breakdown

Deep Agents includes a built-in write_todos tool for planning and task breakdown. The purpose is straightforward: the agent can break a complex task into discrete steps, track progress, and update the plan as new information emerges.

Without the planning layer, the model improvises each step from the current prompt. With write_todos, the workflow becomes more structured, which is useful for research tasks, coding sessions, or analyses that unfold in multiple stages.
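To make the pattern concrete, here is a minimal sketch of a write_todos-style planning tool. All names here (Todo, Planner, mark) are illustrative stand-ins, not the deepagents implementation; the point is only that the plan lives in explicit state the agent can update and re-read between turns.

```python
# Illustrative sketch of a write_todos-style planning tool: the agent
# records a step-by-step plan in state and updates it as work proceeds.
from dataclasses import dataclass, field

@dataclass
class Todo:
    description: str
    status: str = "pending"  # "pending" | "in_progress" | "done"

@dataclass
class Planner:
    todos: list[Todo] = field(default_factory=list)

    def write_todos(self, steps: list[str]) -> str:
        """Replace the current plan with a fresh list of steps."""
        self.todos = [Todo(s) for s in steps]
        return self.render()

    def mark(self, index: int, status: str) -> str:
        """Update the status of one step as work progresses."""
        self.todos[index].status = status
        return self.render()

    def render(self) -> str:
        """Serialize the plan so it can be fed back into the prompt."""
        return "\n".join(f"[{t.status}] {t.description}" for t in self.todos)

planner = Planner()
planner.write_todos(["search sources", "summarize findings", "draft report"])
progress = planner.mark(0, "done")
```

Because the rendered plan goes back into the context on each turn, the model reasons against a stable task list instead of re-deriving its goals from scratch.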

File system-based context management

The second main feature is the use of file system tools for context management. These tools allow the agent to offload large content to storage instead of keeping everything inside the active prompt window. The LangChain team explicitly notes that this helps prevent context-window overflow and supports variable-length tool results.

This is a more concrete design choice than vague claims about "memory". The agent can write notes, generated code, intermediate reports, or search output to files and retrieve them later. That makes the system better suited for longer tasks where the output itself becomes part of the working state.
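The offload pattern can be sketched in a few lines. This is not the deepagents code; the function name and threshold are assumptions chosen for illustration. The idea is that a large tool result is written to a file and only a short pointer plus preview enters the conversation:

```python
# Sketch of the context-offload pattern: instead of pasting a large
# tool result into the prompt, save it to a file and keep only a short
# reference in the conversation. Names and threshold are illustrative.
import os
import tempfile

def offload_if_large(tool_output: str, threshold: int = 2000) -> str:
    """Return small outputs directly; save large ones and return a pointer."""
    if len(tool_output) <= threshold:
        return tool_output
    fd, path = tempfile.mkstemp(suffix=".txt")
    with os.fdopen(fd, "w") as f:
        f.write(tool_output)
    preview = tool_output[:200]
    return f"[output saved to {path}; first 200 chars below]\n{preview}"

short_result = offload_if_large("ok")            # passes through unchanged
long_result = offload_if_large("x" * 10_000)     # becomes a file reference
```

The agent can later call a read_file-style tool on the saved path, so the full result is never lost, it just stops occupying prompt tokens.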

Deep Agents also supports multiple backend types for this virtual file system. The customization documentation lists StateBackend, FilesystemBackend, LocalShellBackend, StoreBackend, and CompositeBackend. By default, the system uses StateBackend, which stores an ephemeral file system in LangGraph state for a single thread.
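The backend split is easy to picture as a small interface with interchangeable storage layers. The classes below only mimic the roles of StateBackend (ephemeral, in-state) and FilesystemBackend (real disk); they are illustrative, not the deepagents classes:

```python
# Illustrative backend interface: the same file tools can target
# different storage layers. These mimic StateBackend/FilesystemBackend
# roles but are not the deepagents implementations.
import tempfile
from pathlib import Path
from typing import Protocol

class Backend(Protocol):
    def write_file(self, name: str, content: str) -> None: ...
    def read_file(self, name: str) -> str: ...

class DictStateBackend:
    """Ephemeral per-thread storage, like files kept in graph state."""
    def __init__(self) -> None:
        self.files: dict[str, str] = {}
    def write_file(self, name: str, content: str) -> None:
        self.files[name] = content
    def read_file(self, name: str) -> str:
        return self.files[name]

class DiskBackend:
    """Durable storage backed by a real directory on disk."""
    def __init__(self, root: str) -> None:
        self.root = Path(root)
    def write_file(self, name: str, content: str) -> None:
        (self.root / name).write_text(content)
    def read_file(self, name: str) -> str:
        return (self.root / name).read_text()

def save_notes(backend: Backend) -> str:
    # The agent-facing tool code is identical either way.
    backend.write_file("notes.md", "intermediate findings")
    return backend.read_file("notes.md")

state_result = save_notes(DictStateBackend())
disk_result = save_notes(DiskBackend(tempfile.mkdtemp()))
```

A composite backend, in this picture, would simply route different path prefixes to different implementations of the same interface.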

Sub-agents and context isolation

Deep Agents also includes a built-in task tool for spawning sub-agents. This tool allows the main agent to create specialized sub-agents for context isolation, keeping the main thread clean while letting the system drill down on specific subtasks.

This directly addresses a common failure mode in agent systems: once a thread accumulates too many objectives, tool outputs, and intermediate decisions, model quality often drops. Dividing work among sub-agents reduces that overload and makes the orchestration path easier to debug.
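The isolation mechanic reduces to one move: the child gets a fresh message history, and only its final answer flows back to the parent. A minimal sketch, with call_model as a hypothetical stand-in for a real LLM call:

```python
# Sketch of context isolation via sub-agents: the parent delegates a
# subtask to a child that starts with an *empty* history, and only the
# child's compact result re-enters the parent's context.

def call_model(messages: list[dict]) -> str:
    # Placeholder for a real LLM call; echoes a summary for illustration.
    return f"summary of: {messages[-1]['content']}"

def run_subagent(subtask: str) -> str:
    """Run the subtask in an isolated context (fresh message list)."""
    messages = [{"role": "user", "content": subtask}]
    # Any intermediate turns the child accumulates stay local to it.
    return call_model(messages)

parent_history = [{"role": "user", "content": "write a market report"}]
result = run_subagent("collect Q3 revenue figures")
# Only the compact result enters the parent's context:
parent_history.append({"role": "tool", "content": result})
```

However many tool calls the sub-agent makes internally, the parent's history grows by exactly one message, which is what keeps long orchestrations debuggable.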

Long-term memory and LangGraph integration

The Deep Agents GitHub repository also describes long-term memory as a built-in capability. Deep Agents can be extended with persistent memory across threads using LangGraph's memory store, allowing the agent to save and retrieve information from previous interactions.
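LangGraph's store addresses memories by a namespace tuple plus a key, which is what lets two separate threads share state. The class below only mimics that put/get shape in plain Python for illustration; it is not the LangGraph implementation:

```python
# Sketch of cross-thread long-term memory in the style of a
# (namespace, key) -> value store. Mimics the shape only; not the
# langgraph store implementation.
from typing import Optional

class MemoryStore:
    def __init__(self) -> None:
        self._data: dict[tuple, dict] = {}

    def put(self, namespace: tuple[str, ...], key: str, value: dict) -> None:
        """Persist a memory under (namespace, key), visible to all threads."""
        self._data[(namespace, key)] = value

    def get(self, namespace: tuple[str, ...], key: str) -> Optional[dict]:
        """Retrieve a memory, or None if nothing was saved."""
        return self._data.get((namespace, key))

store = MemoryStore()
# Conversation thread 1 saves a user preference...
store.put(("user_123", "prefs"), "tone", {"style": "concise"})
# ...and thread 2, a completely separate conversation, reads it back.
memory = store.get(("user_123", "prefs"), "tone")
```

Because the store is keyed by namespace rather than by thread, the agent's short-term file system can stay per-thread while durable facts outlive any single conversation.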

On the implementation side, Deep Agents lives entirely inside the LangGraph execution model. The customization documentation specifies that create_deep_agent(...) returns a CompiledStateGraph. The resulting graph can be used with standard LangGraph features such as streaming, Studio, and checkpointing.

Deep Agents is not a parallel abstraction layer that blocks access to runtime features; it is a prebuilt graph with sensible defaults.

Deployment details

For deployment, the official quickstart shows a minimal Python setup: install deepagents along with a search provider such as tavily-python, export your model API key and search API key, define a search tool, and then create the agent with create_deep_agent(...) using a tool-calling model. The docs note that Deep Agents requires tool-calling support, and the example workflow starts the agent with your tools and a system_prompt, then runs it with agent.invoke(...). The LangChain team also points developers toward LangGraph deployment options for production, which fits because Deep Agents runs on the LangGraph runtime and supports built-in streaming for observing execution.

# pip install -qU deepagents
from deepagents import create_deep_agent

def get_weather(city: str) -> str:
    """Get weather for a given city."""
    return f"It's always sunny in {city}!"

agent = create_deep_agent(
    tools=[get_weather],
    system_prompt="You are a helpful assistant",
)

# Run the agent
agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)

Key takeaways

  • Deep Agents is an agent harness built on LangChain and the LangGraph runtime.
  • It includes built-in planning through the write_todos tool for multi-step task decomposition.
  • It uses file system tools to manage large contexts and reduce prompt-window pressure.
  • It can spawn sub-agents with isolated context using the built-in task tool.
  • It supports persistent memory across threads through LangGraph's memory store.

Check out the repo and docs.


Michael Sutter is a data science professional and holds a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michael excels in transforming complex datasets into actionable insights.
