In this tutorial, we demonstrate how a semi-centralized, Anemoi-style multi-agent system works by letting two peer agents interact directly without a manager or supervisor. We show how a drafter and a critic refine an output through peer-to-peer feedback, reducing coordination overhead while preserving quality. We implement this pattern end to end in Colab using LangGraph, focusing on clarity, control flow, and practical execution rather than abstract orchestration theory.
!pip -q install -U langgraph langchain-openai langchain-core
import os
import json
from getpass import getpass
from typing import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, END
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("Enter OPENAI_API_KEY (hidden): ")
MODEL = os.environ.get("OPENAI_MODEL", "gpt-4o-mini")
llm = ChatOpenAI(model=MODEL, temperature=0.2)
We set up the Colab environment by installing the required LangGraph and LangChain packages and securely collecting the OpenAI API key as hidden input. We initialize the language model that will be shared by all agents while keeping the configuration minimal and reproducible.
class AnemoiState(TypedDict):
    task: str
    max_rounds: int
    round: int
    draft: str
    critique: str
    agreed: bool
    final: str
    trace: bool
We define a typed state that serves as the shared communication surface between the agents during an interaction. We explicitly track the task, draft, critique, agreement flag, and iteration count to keep the flow transparent and debuggable. This eliminates the need for a central manager or built-in memory.
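Because each node returns a merged copy of the state rather than mutating it, updates compose cleanly without shared mutable memory. A minimal sketch in plain Python (no LangGraph required; the field values are illustrative) of how those merge semantics behave:

```python
from typing import TypedDict

class AnemoiState(TypedDict):
    task: str
    max_rounds: int
    round: int
    draft: str
    critique: str
    agreed: bool
    final: str
    trace: bool

# An initial state, mirroring the fields defined above
state: AnemoiState = {
    "task": "demo", "max_rounds": 3, "round": 0,
    "draft": "", "critique": "", "agreed": False,
    "final": "", "trace": False,
}

# A node returns {**state, ...}: a new dict with only its fields updated
updated = {**state, "round": 1, "draft": "first draft"}
print(updated["round"], updated["draft"])  # untouched keys carry over
print(state["round"])  # the original state is not mutated
```

This copy-on-update convention is what keeps each agent's writes visible to its peer without either one owning the state.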
DRAFTER_SYSTEM = """You are Agent A (Drafter) in a peer-to-peer loop.
You write a high-quality solution to the user's task.
If you receive critique, you revise decisively and incorporate it.
Return only the improved draft text."""
def drafter_node(state: AnemoiState) -> AnemoiState:
    task = state["task"]
    critique = state.get("critique", "").strip()
    r = state.get("round", 0) + 1  # bump the round counter on every pass
    if critique:
        user_msg = f"""TASK:
{task}
CRITIQUE:
{critique}
Revise the draft."""
    else:
        user_msg = f"""TASK:
{task}
Write the first draft."""
    # Messages are passed as a list of role/content dicts
    draft = llm.invoke(
        [
            {"role": "system", "content": DRAFTER_SYSTEM},
            {"role": "user", "content": user_msg},
        ]
    ).content.strip()
    if state.get("trace", False):
        print(f"\n--- Drafter Round {r} ---\n{draft}\n")
    return {**state, "round": r, "draft": draft, "agreed": False}
We implement the drafter agent, which generates the initial response and revises it whenever peer feedback is available. We keep the drafter focused on improving the user-facing draft, with no awareness of control logic or termination conditions. This reflects the Anemoi idea of agents optimizing locally given peer signals.
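To see the revise-on-critique behavior in isolation, here is a hypothetical offline stand-in for the drafter: it makes no API calls, but follows the same control flow as drafter_node (first draft when no critique exists, revision when one does). The `stub_drafter` name and the string-concatenation "revision" are illustrative assumptions, not part of the tutorial's code.

```python
# Hypothetical offline stand-in: same control flow as drafter_node, no LLM.
def stub_drafter(state: dict) -> dict:
    r = state.get("round", 0) + 1
    critique = state.get("critique", "").strip()
    if critique:
        # Revision pass: incorporate the peer's critique into the draft
        draft = state["draft"] + f" [revised per: {critique}]"
    else:
        # First pass: produce an initial draft for the task
        draft = f"Draft for: {state['task']}"
    return {**state, "round": r, "draft": draft, "agreed": False}

s = {"task": "explain Anemoi", "round": 0, "draft": "", "critique": ""}
s = stub_drafter(s)                                    # first draft
s = stub_drafter({**s, "critique": "add an example"})  # revision pass
print(s["round"], s["draft"])
```

Running this prints round 2 with the critique folded into the draft, mirroring how the real node converges under peer feedback.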
CRITIC_SYSTEM = """You are Agent B (Critic).
Return strict JSON:
{"agree": true/false, "critique": "..."}"""
def critic_node(state: AnemoiState) -> AnemoiState:
    task = state["task"]
    draft = state.get("draft", "")
    raw = llm.invoke(
        [
            {"role": "system", "content": CRITIC_SYSTEM},
            {
                "role": "user",
                "content": f"TASK:\n{task}\n\nDRAFT:\n{draft}",
            },
        ]
    ).content.strip()
    # Strip a possible ```json ... ``` fence around the reply
    cleaned = raw.strip("`").strip()
    if cleaned.startswith("json"):
        cleaned = cleaned[len("json"):].strip()
    try:
        data = json.loads(cleaned)
        agree = bool(data.get("agree", False))
        critique = str(data.get("critique", "")).strip()
    except Exception:
        agree = False
        critique = raw
    if state.get("trace", False):
        print(f"--- Critic Decision ---\nAGREE: {agree}\n{critique}\n")
    final = draft if agree else state.get("final", "")
    return {**state, "agreed": agree, "critique": critique, "final": final}
We implement the critic agent, which evaluates the draft and decides whether it is ready to ship or needs revision. We enforce a strict agree-or-revise decision to avoid ambiguous feedback and ensure fast convergence. This peer-review step provides quality control without introducing a supervisory agent.
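Models frequently wrap the requested JSON in a Markdown fence, which is why the critic must clean the reply before parsing. A sketch of a more defensive alternative, assuming a helper name `parse_critic_reply` of our own invention: extract the first `{...}` span with a regex and fall back to "disagree" on any parse failure, so an unparseable reply simply triggers another revision round.

```python
import json
import re

def parse_critic_reply(raw: str) -> tuple[bool, str]:
    """Extract {"agree": ..., "critique": ...} from a reply that may be
    wrapped in a ```json fence; treat unparseable replies as 'revise'."""
    match = re.search(r"\{.*\}", raw, re.DOTALL)  # first-to-last brace span
    if match:
        try:
            data = json.loads(match.group(0))
            return bool(data.get("agree", False)), str(data.get("critique", "")).strip()
        except json.JSONDecodeError:
            pass
    return False, raw  # fail closed: no agreement, raw text as critique

print(parse_critic_reply('```json\n{"agree": true, "critique": "ship it"}\n```'))
# → (True, 'ship it')
```

Failing closed is deliberate: in a peer loop, a garbled critic reply should cost one extra round, never a premature ship.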
def continue_or_end(state: AnemoiState) -> str:
    if state.get("agreed", False):
        return "end"
    if state.get("round", 0) >= state.get("max_rounds", 3):
        return "force_ship"
    return "loop"

def force_ship_node(state: AnemoiState) -> AnemoiState:
    return {**state, "final": state.get("final") or state.get("draft", "")}
graph = StateGraph(AnemoiState)
graph.add_node("drafter", drafter_node)
graph.add_node("critic", critic_node)
graph.add_node("force_ship", force_ship_node)
graph.set_entry_point("drafter")
graph.add_edge("drafter", "critic")
graph.add_conditional_edges(
    "critic",
    continue_or_end,
    {"loop": "drafter", "force_ship": "force_ship", "end": END},
)
graph.add_edge("force_ship", END)
anemoi_critic_loop = graph.compile()
demo_task = """Explain the Anemoi semi-centralized agent pattern and why peer-to-peer critic loops reduce bottlenecks."""
result = anemoi_critic_loop.invoke(
    {
        "task": demo_task,
        "max_rounds": 3,
        "round": 0,
        "draft": "",
        "critique": "",
        "agreed": False,
        "final": "",
        "trace": False,
    }
)
print("\n====================")
print("✅ FINAL OUTPUT")
print("====================\n")
print(result["final"])
We assemble the LangGraph workflow, which alternates control between the drafter and the critic until agreement is reached or the maximum round limit is hit. We rely on simple conditional routing rather than centralized planning, thereby preserving the semi-centralized nature of the system. Finally, we execute the graph and return the best available output to the user.
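The routing logic above is pure and easy to exercise offline. A quick sketch that redefines `continue_or_end` standalone and checks each of its three outcomes against sample states (the state values here are illustrative):

```python
# Same routing rules as in the graph: agreement ends the loop,
# hitting the round cap forces a ship, anything else loops back.
def continue_or_end(state: dict) -> str:
    if state.get("agreed", False):
        return "end"
    if state.get("round", 0) >= state.get("max_rounds", 3):
        return "force_ship"
    return "loop"

print(continue_or_end({"agreed": True, "round": 1, "max_rounds": 3}))   # end
print(continue_or_end({"agreed": False, "round": 3, "max_rounds": 3}))  # force_ship
print(continue_or_end({"agreed": False, "round": 1, "max_rounds": 3}))  # loop
```

Checking the agreement flag before the round cap matters: a draft the critic approves on the final round ships as an agreed result, not a forced one.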
In conclusion, we demonstrated that Anemoi-style peer interaction is a practical alternative to manager-worker architectures, offering lower latency, less context bloat, and simpler agent coordination. By letting agents directly monitor and correct each other, we achieved convergence with fewer tokens and less orchestration complexity. This tutorial provides a reusable blueprint for building scalable, semi-centralized agent systems, and it lays the foundation for extending the pattern to multi-peer meshes, red-team loops, or protocol-based agent interoperability.