Deeper agent loops with planning, sub-agents, and persistent memory for longer-running tasks.
Deep Agents is an open source agent harness built by LangChain for constructing AI agents that handle long-running, complex, multi-step tasks. Unlike a basic AI agent that loops through tool calls reactively, a deep agent plans its work upfront, manages context across extended sessions, and delegates subtasks to specialized subagents. It is the architecture behind production systems like Claude Code and Google’s Deep Research, and it is the most practical way for developers today to build agents that behave more like a project-based colleague than a question-answering machine.
A standard AI agent runs a simple loop: receive input, pick a tool, call the tool, return the result. This works fine for short tasks. It fails when the task is long, context grows too large, or different parts of the problem need different types of expertise. Deep Agents solves each of these failure points through four core mechanisms.
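The reactive loop above can be sketched in a few lines of plain Python. Everything here (call_llm, TOOLS, the message shapes) is an illustrative stand-in, not a real model or framework API:

```python
# Minimal sketch of the "standard" reactive agent loop: receive input,
# pick a tool, call it, feed the result back, repeat until done.

def call_llm(messages):
    # Stand-in for a real model call: request one tool, then finish.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "search", "args": {"query": "deep agents"}}
    return {"answer": "done"}

TOOLS = {"search": lambda query: f"results for {query!r}"}

def run_agent(user_input):
    messages = [{"role": "user", "content": user_input}]
    while True:
        step = call_llm(messages)                     # pick a tool (or finish)
        if "answer" in step:
            return step["answer"]                     # return the result
        result = TOOLS[step["tool"]](**step["args"])  # call the tool
        messages.append({"role": "tool", "content": result})
```

Note that the single `messages` list is the loop's only memory: every tool result lands in the same context, which is exactly why this design breaks down on long tasks.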
- Planning: a write_todos tool lets the agent decompose the goal into discrete steps. This is a living to-do list the agent can update as it learns more. Planning before acting is the single biggest reason deep agents outperform shallow ones on complex work.
- Context offloading: the agent writes large intermediate results to a filesystem instead of holding everything in its context window, and you can swap filesystem backends depending on your environment.
- Subagents: the agent delegates focused subtasks through a task tool. Each subagent gets its own clean, isolated context so it can go deep on a specific subtask without polluting the parent's context. Subagents can also run in parallel, cutting total runtime on complex jobs.
- Durable execution: the whole system is built on LangGraph, which means streaming, checkpointing, and human-in-the-loop approval all work out of the box. You can also swap model providers and plug in custom tools.
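Two of these mechanisms are easy to see in miniature. The sketch below is illustrative plain Python, not the deepagents API: a living to-do list the agent rewrites as it learns, and a subagent that runs with its own isolated message history and returns only a compact result to the parent:

```python
# Illustrative sketch of planning state and subagent context isolation.

def write_todos(state, todos):
    """Replace the agent's plan in shared state; the agent calls this
    again whenever new information changes the plan."""
    state["todos"] = todos
    return state

def run_subagent(task, parent_state):
    # The subagent starts from a clean context: it sees only its task,
    # not the parent's full history, so a long subtask can't pollute it.
    sub_messages = [{"role": "user", "content": task}]
    result = f"summary of work on: {task}"  # stand-in for a real agent run
    # Only the compact result flows back into the parent's context.
    parent_state["messages"].append({"role": "tool", "content": result})
    return result

state = {"messages": [], "todos": []}
write_todos(state, ["research sources", "draft report", "edit report"])
run_subagent(state["todos"][0], state)
```

The key property is asymmetry: the subagent may burn thousands of tokens internally, but the parent's context grows by one short summary message.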
Deep Agents is for builders who need agents to complete work that feels more like a project than a single question.
LangGraph is the underlying runtime that Deep Agents is built on. LangGraph gives you low-level control to define agent workflows as explicit state graphs. Deep Agents is an opinionated layer on top of it: you get a working agent out of the box with planning, file access, subagents, and context management already wired up. Use LangGraph when you need precise control over deterministic workflows; use Deep Agents when you want to build autonomous agents quickly without building the infrastructure yourself.
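The "explicit state graph" idea can be shown in a toy form. This is plain Python, not the LangGraph API: nodes are functions that transform shared state, and a fixed edge table makes the control flow deterministic:

```python
# Toy state graph: deterministic workflow as nodes plus an edge table.

def plan(state):
    state["plan"] = ["gather", "write"]
    return state

def execute(state):
    state["output"] = " then ".join(state["plan"])
    return state

NODES = {"plan": plan, "execute": execute}
EDGES = {"plan": "execute", "execute": None}  # None marks the end

def run_graph(entry, state):
    node = entry
    while node is not None:
        state = NODES[node](state)
        node = EDGES[node]
    return state

result = run_graph("plan", {})  # → {"plan": [...], "output": "gather then write"}
```

Deep Agents sits one level up: instead of wiring nodes and edges yourself, you get a prebuilt graph where planning, file access, and subagent delegation are already the nodes.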
Deep Agents is model agnostic and supports any LLM that handles tool calling, including GPT-4o, Gemini, Llama, Qwen, and others via LangChain's init_chat_model interface. That said, more capable frontier models plan and execute better on complex tasks. For prototyping, fast inference providers like Groq work well. For production, models like Claude Sonnet or GPT-4o tend to produce more reliable multi-step behavior.
Use Deep Agents when your task has three or more of these properties: it requires planning and decomposition rather than a single tool call; the context it generates is too large for one context window; different parts of the work need different types of expertise; and the result needs to persist across sessions. For simple Q&A or single-tool tasks, a basic LangChain agent is fine. Deep Agents carry real overhead, using 10 to 15 times more compute than a simple agent, so they are best reserved for work that genuinely warrants it.
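The rule of thumb above (three or more of the four properties) can be encoded as a tiny, purely illustrative helper:

```python
# Decision sketch: reach for a deep agent when a task has at least
# three of the four properties listed above.

def should_use_deep_agent(needs_planning, context_exceeds_window,
                          needs_varied_expertise, must_persist):
    score = sum([needs_planning, context_exceeds_window,
                 needs_varied_expertise, must_persist])
    return score >= 3

# A long research report: plans, overflows context, mixes skills, persists.
should_use_deep_agent(True, True, True, True)      # → True
# A one-shot weather lookup: a basic agent is the cheaper choice.
should_use_deep_agent(False, False, False, False)  # → False
```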