Author(s): michaelczarnecki
Originally published on Towards AI.
Hello! In this part I will build and run a publisher agent: a system that automatically gathers news from the web, summarizes the most important points, and then produces an article based on that content.
We will build it with LangGraph as a state graph: with nodes, edges, state, and a loop. You’ll see how to combine planning, tool calls, and critic feedback into a coherent workflow.
What will our “publisher” actually do?
Think of it as a simplified editorial team.
The agent will first gather material by calling a search tool, and once it decides it has enough reliable information, it will proceed to write the article. Finally, a critic reviews the draft; based on that feedback, the article is improved and finalized.
Planning and quality criteria are key
It is worth highlighting planning and quality criteria, as this is at the heart of agentic applications.
The agent does not go from A to Z blindly. It plans: chooses a mode, evaluates intermediate results, and adapts its strategy. For example, if the first query returns only a couple of mediocre results, it will try a different phrase or search a different source.
Specific challenges
Some challenges arise almost every time:
1) Freshness
Not every tool returns the latest content, so it’s smart to specify a date range in prompts and search parameters.
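One lightweight pattern is to bake an explicit recency constraint into the query string itself (the helper name below is illustrative, not part of any library; langchain-tavily's TavilySearch also accepts filtering parameters such as time_range, but check the current docs):

```python
from datetime import date, timedelta

def freshness_query(topic: str, days: int = 7) -> str:
    """Append an explicit recency constraint to a search query."""
    since = date.today() - timedelta(days=days)
    return f"{topic} after:{since.isoformat()}"

# freshness_query("LangGraph release notes") yields something like
# "LangGraph release notes after:2025-01-01"
```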
2) Stability
Sometimes a tool fails or returns an empty list. The agent should fall back and retry.
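One way to get that behavior is a generic retry-with-fallback wrapper around any tool call; this is an illustrative helper, not a LangGraph API:

```python
import time

def invoke_with_retry(tool_call, fallback, attempts=3, delay=0.0):
    """Call `tool_call`; on exception or empty result, retry, then fall back."""
    for _ in range(attempts):
        try:
            result = tool_call()
            if result:  # reject empty lists/dicts/strings
                return result
        except Exception:
            pass
        time.sleep(delay)
    return fallback()

# Example: a flaky tool that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return ["result"]

print(invoke_with_retry(flaky, fallback=lambda: []))  # ['result']
```

In the researcher node, `tool_call` would wrap `web_search.invoke(...)` and `fallback` could return a canned "no results" note.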
3) Hallucinations
That’s why the first step should not be to invent content, but to summarize the actual search results and cite the sources.
4) Costs and limitations
In long sessions it is advisable to limit conversation history and use sensible model settings – so that the bill does not explode.
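A minimal sketch of history trimming, assuming plain role/content dicts (LangChain also ships a trim_messages utility for token-aware trimming):

```python
def trim_history(messages, keep_last=4):
    """Keep the first message (the original request) plus the last N turns."""
    if len(messages) <= keep_last + 1:
        return messages
    return [messages[0]] + messages[-keep_last:]

history = [{"role": "user", "content": "topic"}] + [
    {"role": "assistant", "content": f"turn {i}"} for i in range(10)
]
trimmed = trim_history(history, keep_last=4)
print(len(trimmed))  # 5
```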
What will we build
A three-stage publisher graph:
- Researcher: queries Tavily and returns grounded notes.
- Author: writes (or rewrites) a draft of the article.
- Critic: evaluates the draft and either approves it or asks the author to revise (looping back once).
- A recursion limit caps the total steps to avoid a runaway loop.
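In LangGraph itself you pass the cap via the run config, e.g. `graph.invoke(inputs, config={"recursion_limit": 10})`. Conceptually it is just a step counter that aborts a runaway loop; the helper below is an illustrative sketch, not a LangGraph API:

```python
def run_with_limit(step_fn, state, limit=10):
    """Advance `state` with `step_fn` until it signals 'end' or the limit hits."""
    for _ in range(limit):
        state = step_fn(state)
        if state.get("next") == "end":
            return state
    raise RuntimeError(f"recursion limit of {limit} exceeded")

# A toy critic loop that approves on the third pass.
def step(state):
    state["rounds"] += 1
    state["next"] = "end" if state["rounds"] >= 3 else "author_agent"
    return state

final = run_with_limit(step, {"rounds": 0}, limit=10)
print(final["rounds"])  # 3
```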
Hands-on: Publisher Workflow
1) Install the library
%%capture --no-stderr
%pip install -U langchain_core langchain_openai langgraph langgraph-supervisor langchain-tavily "langchain[openai]"
!apt install libgraphviz-dev
!pip install pygraphviz
- This installs langchain_core, langgraph (the graph engine), langgraph-supervisor, and langchain-tavily for research.
- "langchain[openai]" pulls in the OpenAI chat client extras (swap providers as needed).
- We use this low-level stack for maximum flexibility (custom state, nodes, and edges).
2) Load environment variables
from dotenv import load_dotenv
load_dotenv()
Load API keys from .env (e.g., OPENAI_API_KEY, TAVILY_API_KEY). Keep secrets out of source control.
3) Connect to the Tavily Search API
from langchain_tavily import TavilySearch

web_search = TavilySearch(max_results=3)
web_search_results = web_search.invoke("Who is the prime minister of Poland?")
print(web_search_results["results"][0]["content"])
Output:
The incumbent and eighteenth prime minister is Donald Tusk of the Civic Platform party who replaced Mateusz Morawiecki following the 2023 Polish parliamentary
We create a TavilySearch tool and run a query; it returns concise, structured JSON whose content field is ideal for LLM consumption. This is the official LangChain integration for Tavily.
4) Pretty-Print Assistant
(You can skip this section unless you find it interesting somehow 😉)
from langchain_core.messages import BaseMessage, convert_to_messages

def pretty_print_message(message, indent=False):
    pretty_message = message.pretty_repr(html=True)
    if not indent:
        print(pretty_message)
        return
    indented = "\n".join("\t" + c for c in pretty_message.split("\n"))
    print(indented)

def pretty_print_messages(update, last_message=False):
    is_subgraph = False
    if isinstance(update, tuple):
        ns, update = update
        # skip parent graph updates in the printouts
        if len(ns) == 0:
            return
        graph_id = ns[-1].split(":")[0]
        print(f"Update from subgraph {graph_id}:")
        print("\n")
        is_subgraph = True
    for node_name, node_update in update.items():
        update_label = f"Update from node {node_name}:"
        if is_subgraph:
            update_label = "\t" + update_label
        print(update_label)
        print("\n")
        messages = convert_to_messages(node_update["messages"])
        if last_message:
            messages = messages[-1:]
        for m in messages:
            pretty_print_message(m, indent=is_subgraph)
        print("\n")

def _name_and_content(msg):
    if isinstance(msg, dict):
        return msg.get("name", msg.get("role", "user")), msg.get("content", str(msg))
    if isinstance(msg, BaseMessage):
        name = getattr(msg, "name", None) or getattr(msg, "type", "user")
        return name, msg.content
    return "user", str(msg)
These convenience printers let you view the conversation per node while streaming.
5) Researcher Agent Node
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

def research_agent_node(state):
    messages = state["messages"]
    user_request = messages[0].content  # BaseMessage OK
    search_results = web_search.invoke(user_request)
    research_content = []
    for result in search_results["results"][:3]:
        research_content.append(f"Source: {result['url']}\nContent: {result['content']}\n")
    research_text = "\n---\n".join(research_content)
    research_prompt = ChatPromptTemplate.from_messages([
        ("system", """You're the Researcher. You have access to fragments from the 'search engine' (RESEARCH).
Your goal is to provide factual, concise facts and data supporting the topic.
Refer to the fragments and do not invent. Write concretely and briefly.
After you finish your task, respond directly to the moderator.
Reply using only the results of your work, do not include the text."""),
        ("user", "Topic: {topic}\n\nSearch results:\n{research_data}")
    ])
    model = ChatOpenAI(model="gpt-4", temperature=0)
    chain = research_prompt | model | StrOutputParser()
    response = chain.invoke({"topic": user_request, "research_data": research_text})
    return {"messages": [{"role": "assistant", "content": response, "name": "research_agent"}]}
- Queries Tavily, composes grounded snippets, and prompts a researcher LLM to synthesize concise notes (no hallucinations, no filler).
- Returns a single assistant message named "research_agent"; LangGraph passes it on to the next node.
6) Writer Agent Node
from langchain_core.messages import BaseMessage

def _name_and_content(msg):
    if isinstance(msg, dict):
        return msg.get("name", msg.get("role", "user")), msg.get("content", str(msg))
    if isinstance(msg, BaseMessage):
        name = getattr(msg, "name", None) or getattr(msg, "type", "user")
        return name, msg.content
    # fallback
    return "user", str(msg)

def author_agent_node(state):
    messages = state["messages"]
    has_critic_already_intervened = state.get("critic_marked", False)
    if has_critic_already_intervened:
        author_prompt = ChatPromptTemplate.from_messages([
            ("system", """You're the Expert-Editor. You have already received feedback from the Critic and you must improve your text.
Analyze the critic's comments and provide a revised version of the article that addresses their concerns.
Include all suggestions and fill in the indicated gaps. Write concretely.
After you finish your task, respond directly to the moderator.
Reply using only the results of your work, do not include the text."""),
            ("user", "Conversation history:\n{conversation_history}\n\nRevise the article taking into account the critic's feedback.")
        ])
    else:
        author_prompt = ChatPromptTemplate.from_messages([
            ("system", """You're the Expert. From the available facts, propose a practical solution/plan.
Include steps, requirements, and a minimal set of decisions. Write concretely.
After you finish your task, respond directly to the moderator.
Reply using only the results of your work, do not include the text."""),
            ("user", "Based on the research, create an article:\n{conversation_history}")
        ])
    conversation_history = "\n".join(
        f"{_name_and_content(m)[0]}: {_name_and_content(m)[1]}"
        for m in messages
    )
    model = ChatOpenAI(model="gpt-4", temperature=0)
    chain = author_prompt | model | StrOutputParser()
    response = chain.invoke({"conversation_history": conversation_history})
    return {"messages": [{"role": "assistant", "content": response, "name": "author_agent"}]}
- If the critic has already weighed in, the author acts as an expert-editor, revising the text based on the feedback.
- Otherwise, the writer drafts a first pass from the research.
- The complete conversation history is packaged so the author can reference prior turns.
7) Critic Agent Node + Stop/Loop Signal
def critic_agent_node(state):
    messages = state["messages"]
    conversation_history = "\n".join(
        f"{_name_and_content(m)[0]}: {_name_and_content(m)[1]}"
        for m in messages
    )
    critic_prompt = ChatPromptTemplate.from_messages([
        ("system", """You're the Critic. Your role is to find gaps, risks, and ambiguities in the proposal.
Ask probing questions and point out missing elements. Be constructive.
After analyzing the author's work, if you see significant gaps or problems, end your response with the words: "needs improvement"
If the work is satisfactory and does not require significant corrections, end with the words: "approved"
After you finish your task, respond directly to the moderator.
Reply using only the results of your work, do not include the text."""),
        ("user", "Analyze the author's work:\n{conversation_history}")
    ])
    model = ChatOpenAI(model="gpt-4", temperature=0)
    chain = critic_prompt | model | StrOutputParser()
    response = chain.invoke({"conversation_history": conversation_history})
    updates = {
        "messages": [{"role": "assistant", "content": response, "name": "critic_agent"}]
    }
    if not state.get("critic_marked", False):
        updates["critic_marked"] = True
    return updates

def should_continue(state):
    last = state["messages"][-1]
    _, content = _name_and_content(last)
    content = content.lower()
    return "author_agent" if "needs improvement" in content else "end"
- The critic audits the writer’s output and must end its response with either “approved” (stop) or “needs improvement” (loop).
- The should_continue function reads the last message and decides the next edge.
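Since the router keys off a plain substring, a quick sanity check is cheap; the helper and router are redefined inline here so the snippet runs standalone:

```python
def _name_and_content(msg):
    """Simplified version of the notebook helper, dict messages only."""
    if isinstance(msg, dict):
        return msg.get("name", msg.get("role", "user")), msg.get("content", str(msg))
    return "user", str(msg)

def should_continue(state):
    _, content = _name_and_content(state["messages"][-1])
    return "author_agent" if "needs improvement" in content.lower() else "end"

print(should_continue({"messages": [{"role": "assistant", "content": "... Needs Improvement"}]}))  # author_agent
print(should_continue({"messages": [{"role": "assistant", "content": "... approved"}]}))           # end
```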
8) Build the Graph
from typing import TypedDict, Annotated
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class WorkflowState(TypedDict):
    messages: Annotated[list, add_messages]
    critic_marked: bool

supervisor = (
    StateGraph(WorkflowState)
    .add_node("research_agent", research_agent_node)
    .add_node("author_agent", author_agent_node)
    .add_node("critic_agent", critic_agent_node)
    .add_edge(START, "research_agent")
    .add_edge("research_agent", "author_agent")
    .add_edge("author_agent", "critic_agent")
    .add_conditional_edges("critic_agent", should_continue, {
        "author_agent": "author_agent",
        "end": END
    })
    .compile(debug=True)
)
- We declare the shared state and wire a linear sequence with a conditional loop after the critic.
- compile(debug=True) enables rich step traces for troubleshooting. See the LangGraph Graph API documentation for more on state, nodes, and conditional edges.
9) Display the Graph
Render a PNG of your graph – useful for documentation and flow analysis.
from IPython.display import display, Image

display(Image(supervisor.get_graph().draw_png()))

10) Run the Graph and Generate the Article
Now it’s time to run the application and generate the article. I am attaching the full generated text at the end of this article. The topic is very similar, so it might be interesting for you too 🙂
for chunk in supervisor.stream(
    {
        "messages": [{
            "role": "user",
            "content": "Generate an article on the topic: How to build a 'Publisher' agent application based on the LangGraph library",
        }],
        "critic_marked": False,
    },
    subgraphs=True,
):
    pretty_print_messages(chunk, last_message=True)
Output:
(values) {'messages': (HumanMessage(content="Generate an article on the topic: How to build a 'Publisher' agent application based on the LangGraph library", additional_kwargs={}, response_metadata={}, id='233e0228-4746-43e6-a5b7-08c82e480700')), 'critic_marked': False}
(updates) {'research_agent': {'messages': ({'role': 'assistant', 'content': "Building a 'Publisher' agent application based on the LangGraph library involves several steps. LangGraph, an extension of the LangChain framework, is a powerful tool that simplifies the creation of sophisticated, stateful AI agents capable of handling complex tasks. It is designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications.nnLangGraph was launched as a low-level agent framework and has been used by companies like LinkedIn, Uber, and Klarna to build production-ready agents. It builds upon feedback from the super popular LangChain framework and rethinks how agent frameworks should work for production.nnThe process of building an agent with LangGraph involves defining what makes an agent different (or similar) to previous software. This is achieved by understanding the key differences that have been identified through building many agents and working with teams from various companies.nnLangGraph is also used to create stateful, multi-actor AI agents with cyclic workflows. This makes it an ideal tool for building a 'Publisher' agent application. The process involves building a Data Analyst Agent with Streamlit and Pydantic-AI and creating an Agentic AI application using LangGraph.", 'name': 'research_agent'})}}
(values) {'messages': (HumanMessage(content="Generate an article on the topic: How to build a 'Publisher' agent application based on the LangGraph library", additional_kwargs={}, response_metadata={}, id='233e0228-4746-43e6-a5b7-08c82e480700'), AIMessage(content="Building a 'Publisher' agent application based on the LangGraph library involves several steps. LangGraph, an extension of the LangChain framework, is a powerful tool that simplifies the creation of sophisticated, stateful AI agents capable of handling complex tasks. It is designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications.nnLangGraph was launched as a low-level agent framework and has been used by companies like LinkedIn, Uber, and Klarna to build production-ready agents. It builds upon feedback from the super popular LangChain framework and rethinks how agent frameworks should work for production.nnThe process of building an agent with LangGraph involves defining what makes an agent different (or similar) to previous software. This is achieved by understanding the key differences that have been identified through building many agents and working with teams from various companies.nnLangGraph is also used to create stateful, multi-actor AI agents with cyclic workflows. This makes it an ideal tool for building a 'Publisher' agent application. The process involves building a Data Analyst Agent with Streamlit and Pydantic-AI and creating an Agentic AI application using LangGraph.", additional_kwargs={}, response_metadata={}, name='research_agent', id='d110f597-bae6-4087-ac28-51eeadc3dd3e')), 'critic_marked': False}
(updates) {'author_agent': {'messages': ({'role': 'assistant', 'content': "Building a 'Publisher' agent application based on the LangGraph library involves the following steps:nn1. **Understand the LangGraph Library**: Before starting the development process, it's crucial to understand the LangGraph library and its functionalities. LangGraph is an extension of the LangChain framework, designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications. It's used by companies like LinkedIn, Uber, and Klarna to build production-ready agents.nn2. **Define the Agent's Characteristics**: The next step is to define what makes your 'Publisher' agent different or similar to previous software. This can be achieved by understanding the key differences identified through building many agents and working with teams from various companies.nn3. **Design the Agent's Workflow**: LangGraph is used to create stateful, multi-actor AI agents with cyclic workflows. Therefore, you need to design a cyclic workflow for your 'Publisher' agent. This involves defining the tasks the agent will perform, the order in which they will be performed, and how the agent will transition from one task to another.nn4. **Build a Data Analyst Agent with Streamlit and Pydantic-AI**: Streamlit and Pydantic-AI are tools that can be used to build a Data Analyst Agent. Streamlit is an open-source app framework for Machine Learning and Data Science teams. Pydantic-AI is a data validation and settings management using python type annotations.nn5. **Create an Agentic AI Application Using LangGraph**: The final step is to use LangGraph to create an Agentic AI application. 
This involves integrating the Data Analyst Agent built with Streamlit and Pydantic-AI into the LangGraph framework.nnRequirements:n- Knowledge of the LangGraph library and its functionalitiesn- Understanding of the LangChain frameworkn- Familiarity with Streamlit and Pydantic-AIn- Ability to design and implement cyclic workflowsn- Skills in building AI applicationsnnDecisions:n- The specific characteristics and functionalities of the 'Publisher' agentn- The design of the agent's cyclic workflown- The use of Streamlit and Pydantic-AI in building the Data Analyst Agentn- The integration of the Data Analyst Agent into the LangGraph framework to create an Agentic AI application.", 'name': 'author_agent'})}}
(values) {'messages': (HumanMessage(content="Generate an article on the topic: How to build a 'Publisher' agent application based on the LangGraph library", additional_kwargs={}, response_metadata={}, id='233e0228-4746-43e6-a5b7-08c82e480700'), AIMessage(content="Building a 'Publisher' agent application based on the LangGraph library involves several steps. LangGraph, an extension of the LangChain framework, is a powerful tool that simplifies the creation of sophisticated, stateful AI agents capable of handling complex tasks. It is designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications.nnLangGraph was launched as a low-level agent framework and has been used by companies like LinkedIn, Uber, and Klarna to build production-ready agents. It builds upon feedback from the super popular LangChain framework and rethinks how agent frameworks should work for production.nnThe process of building an agent with LangGraph involves defining what makes an agent different (or similar) to previous software. This is achieved by understanding the key differences that have been identified through building many agents and working with teams from various companies.nnLangGraph is also used to create stateful, multi-actor AI agents with cyclic workflows. This makes it an ideal tool for building a 'Publisher' agent application. The process involves building a Data Analyst Agent with Streamlit and Pydantic-AI and creating an Agentic AI application using LangGraph.", additional_kwargs={}, response_metadata={}, name='research_agent', id='d110f597-bae6-4087-ac28-51eeadc3dd3e'), AIMessage(content="Building a 'Publisher' agent application based on the LangGraph library involves the following steps:nn1. **Understand the LangGraph Library**: Before starting the development process, it's crucial to understand the LangGraph library and its functionalities. 
LangGraph is an extension of the LangChain framework, designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications. It's used by companies like LinkedIn, Uber, and Klarna to build production-ready agents.nn2. **Define the Agent's Characteristics**: The next step is to define what makes your 'Publisher' agent different or similar to previous software. This can be achieved by understanding the key differences identified through building many agents and working with teams from various companies.nn3. **Design the Agent's Workflow**: LangGraph is used to create stateful, multi-actor AI agents with cyclic workflows. Therefore, you need to design a cyclic workflow for your 'Publisher' agent. This involves defining the tasks the agent will perform, the order in which they will be performed, and how the agent will transition from one task to another.nn4. **Build a Data Analyst Agent with Streamlit and Pydantic-AI**: Streamlit and Pydantic-AI are tools that can be used to build a Data Analyst Agent. Streamlit is an open-source app framework for Machine Learning and Data Science teams. Pydantic-AI is a data validation and settings management using python type annotations.nn5. **Create an Agentic AI Application Using LangGraph**: The final step is to use LangGraph to create an Agentic AI application. 
This involves integrating the Data Analyst Agent built with Streamlit and Pydantic-AI into the LangGraph framework.nnRequirements:n- Knowledge of the LangGraph library and its functionalitiesn- Understanding of the LangChain frameworkn- Familiarity with Streamlit and Pydantic-AIn- Ability to design and implement cyclic workflowsn- Skills in building AI applicationsnnDecisions:n- The specific characteristics and functionalities of the 'Publisher' agentn- The design of the agent's cyclic workflown- The use of Streamlit and Pydantic-AI in building the Data Analyst Agentn- The integration of the Data Analyst Agent into the LangGraph framework to create an Agentic AI application.", additional_kwargs={}, response_metadata={}, name='author_agent', id='b38ab68b-6bf1-4636-99e0-b0396d41394a')), 'critic_marked': False}
(updates) {'critic_agent': {'messages': ({'role': 'assistant', 'content': "The author's work provides a general overview of the process of building a 'Publisher' agent application based on the LangGraph library. However, there are several gaps and ambiguities that need to be addressed:nn1. **Specificity of the 'Publisher' Agent**: The author mentions that the 'Publisher' agent's characteristics need to be defined, but does not provide any specific details about what these characteristics might be. What are the specific tasks that the 'Publisher' agent is expected to perform? What makes it different from other agents?nn2. **Understanding of the LangGraph Library**: The author states that understanding the LangGraph library is crucial, but does not provide any resources or methods for gaining this understanding. How can one learn about the LangGraph library? Are there any recommended resources or tutorials?nn3. **Designing the Agent's Workflow**: The author mentions that a cyclic workflow needs to be designed for the 'Publisher' agent, but does not provide any details about what this workflow might look like. What are the specific steps in the workflow? How does the agent transition from one task to another?nn4. **Building a Data Analyst Agent with Streamlit and Pydantic-AI**: The author mentions that Streamlit and Pydantic-AI are used to build a Data Analyst Agent, but does not provide any details about how this is done. What are the specific steps involved in building a Data Analyst Agent with these tools?nn5. **Creating an Agentic AI Application Using LangGraph**: The author mentions that an Agentic AI application is created using LangGraph, but does not provide any details about how this is done. How is the Data Analyst Agent integrated into the LangGraph framework? What are the specific steps involved in creating an Agentic AI application?nn6. 
**Requirements and Decisions**: The author lists several requirements and decisions, but does not provide any details about how these requirements can be met or how these decisions can be made. How can one gain the necessary knowledge and skills? How should one go about making these decisions?nnIn conclusion, the author's work provides a general overview of the process, but lacks specific details and guidance. It needs improvement.", 'name': 'critic_agent'}), 'critic_marked': True}}
(values) {'messages': (HumanMessage(content="Generate an article on the topic: How to build a 'Publisher' agent application based on the LangGraph library", additional_kwargs={}, response_metadata={}, id='233e0228-4746-43e6-a5b7-08c82e480700'), AIMessage(content="Building a 'Publisher' agent application based on the LangGraph library involves several steps. LangGraph, an extension of the LangChain framework, is a powerful tool that simplifies the creation of sophisticated, stateful AI agents capable of handling complex tasks. It is designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications.nnLangGraph was launched as a low-level agent framework and has been used by companies like LinkedIn, Uber, and Klarna to build production-ready agents. It builds upon feedback from the super popular LangChain framework and rethinks how agent frameworks should work for production.nnThe process of building an agent with LangGraph involves defining what makes an agent different (or similar) to previous software. This is achieved by understanding the key differences that have been identified through building many agents and working with teams from various companies.nnLangGraph is also used to create stateful, multi-actor AI agents with cyclic workflows. This makes it an ideal tool for building a 'Publisher' agent application. The process involves building a Data Analyst Agent with Streamlit and Pydantic-AI and creating an Agentic AI application using LangGraph.", additional_kwargs={}, response_metadata={}, name='research_agent', id='d110f597-bae6-4087-ac28-51eeadc3dd3e'), AIMessage(content="Building a 'Publisher' agent application based on the LangGraph library involves the following steps:nn1. **Understand the LangGraph Library**: Before starting the development process, it's crucial to understand the LangGraph library and its functionalities. 
LangGraph is an extension of the LangChain framework, designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications. It's used by companies like LinkedIn, Uber, and Klarna to build production-ready agents.nn2. **Define the Agent's Characteristics**: The next step is to define what makes your 'Publisher' agent different or similar to previous software. This can be achieved by understanding the key differences identified through building many agents and working with teams from various companies.nn3. **Design the Agent's Workflow**: LangGraph is used to create stateful, multi-actor AI agents with cyclic workflows. Therefore, you need to design a cyclic workflow for your 'Publisher' agent. This involves defining the tasks the agent will perform, the order in which they will be performed, and how the agent will transition from one task to another.nn4. **Build a Data Analyst Agent with Streamlit and Pydantic-AI**: Streamlit and Pydantic-AI are tools that can be used to build a Data Analyst Agent. Streamlit is an open-source app framework for Machine Learning and Data Science teams. Pydantic-AI is a data validation and settings management using python type annotations.nn5. **Create an Agentic AI Application Using LangGraph**: The final step is to use LangGraph to create an Agentic AI application. 
This involves integrating the Data Analyst Agent built with Streamlit and Pydantic-AI into the LangGraph framework.nnRequirements:n- Knowledge of the LangGraph library and its functionalitiesn- Understanding of the LangChain frameworkn- Familiarity with Streamlit and Pydantic-AIn- Ability to design and implement cyclic workflowsn- Skills in building AI applicationsnnDecisions:n- The specific characteristics and functionalities of the 'Publisher' agentn- The design of the agent's cyclic workflown- The use of Streamlit and Pydantic-AI in building the Data Analyst Agentn- The integration of the Data Analyst Agent into the LangGraph framework to create an Agentic AI application.", additional_kwargs={}, response_metadata={}, name='author_agent', id='b38ab68b-6bf1-4636-99e0-b0396d41394a'), AIMessage(content="The author's work provides a general overview of the process of building a 'Publisher' agent application based on the LangGraph library. However, there are several gaps and ambiguities that need to be addressed:nn1. **Specificity of the 'Publisher' Agent**: The author mentions that the 'Publisher' agent's characteristics need to be defined, but does not provide any specific details about what these characteristics might be. What are the specific tasks that the 'Publisher' agent is expected to perform? What makes it different from other agents?nn2. **Understanding of the LangGraph Library**: The author states that understanding the LangGraph library is crucial, but does not provide any resources or methods for gaining this understanding. How can one learn about the LangGraph library? Are there any recommended resources or tutorials?nn3. **Designing the Agent's Workflow**: The author mentions that a cyclic workflow needs to be designed for the 'Publisher' agent, but does not provide any details about what this workflow might look like. What are the specific steps in the workflow? How does the agent transition from one task to another?nn4. 
**Building a Data Analyst Agent with Streamlit and Pydantic-AI**: The author mentions that Streamlit and Pydantic-AI are used to build a Data Analyst Agent, but does not provide any details about how this is done. What are the specific steps involved in building a Data Analyst Agent with these tools?nn5. **Creating an Agentic AI Application Using LangGraph**: The author mentions that an Agentic AI application is created using LangGraph, but does not provide any details about how this is done. How is the Data Analyst Agent integrated into the LangGraph framework? What are the specific steps involved in creating an Agentic AI application?nn6. **Requirements and Decisions**: The author lists several requirements and decisions, but does not provide any details about how these requirements can be met or how these decisions can be made. How can one gain the necessary knowledge and skills? How should one go about making these decisions?nnIn conclusion, the author's work provides a general overview of the process, but lacks specific details and guidance. It needs improvement.", additional_kwargs={}, response_metadata={}, name='critic_agent', id='ffa9d5fd-f0ab-4758-8db5-0f685f5034e7')), 'critic_marked': True}
(updates) {'author_agent': {'messages': ({'role': 'assistant', 'content': "Building a 'Publisher' agent application based on the LangGraph library is a multi-step process that requires a deep understanding of the LangGraph library, the LangChain framework, and the tools Streamlit and Pydantic-AI. This article will provide a detailed guide on how to build such an application, including the specific tasks that the 'Publisher' agent is expected to perform, the design of the agent's workflow, and the steps involved in building a Data Analyst Agent and creating an Agentic AI application.\n\n1. **Understand the LangGraph Library**: LangGraph is an extension of the LangChain framework, designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications. It's used by companies like LinkedIn, Uber, and Klarna to build production-ready agents. To gain a deep understanding of the LangGraph library, you can start by exploring the official LangGraph documentation and tutorials available on their website. Participating in online forums and communities related to LangGraph can also be beneficial.\n\n2. **Define the 'Publisher' Agent's Characteristics**: The 'Publisher' agent is designed to automate the process of publishing content across various platforms. It should be capable of scheduling posts, managing content, and tracking analytics. The agent's characteristics should be defined based on these tasks. For instance, it should have the ability to interface with various publishing platforms, handle time-based tasks, and process analytical data.\n\n3. **Design the Agent's Workflow**: The 'Publisher' agent's workflow should be cyclic, meaning it repeats a series of tasks in a loop. A typical workflow might involve fetching content from a database, scheduling it for publication, publishing the content, and then tracking and analyzing the performance of the published content. The agent should be designed to transition smoothly from one task to another, with error handling mechanisms in place to manage any issues that arise.\n\n4. **Build a Data Analyst Agent with Streamlit and Pydantic-AI**: Streamlit is an open-source app framework for Machine Learning and Data Science teams, while Pydantic-AI is a data validation and settings management tool using python type annotations. To build a Data Analyst Agent, start by setting up a Streamlit application and integrating it with Pydantic-AI for data validation. Then, design the agent to fetch, process, and analyze data from the 'Publisher' agent's activities.\n\n5. **Create an Agentic AI Application Using LangGraph**: Once the Data Analyst Agent is built, it can be integrated into the LangGraph framework to create an Agentic AI application. This involves setting up a LangGraph application, defining the agent's state and behavior within the application, and integrating the Data Analyst Agent into the application's workflow.\n\nRequirements and Decisions:\n- **Requirements**: Gaining the necessary knowledge and skills can be achieved through self-study, online courses, and practical experience. Resources like the LangGraph documentation, online tutorials, and community forums can be invaluable.\n- **Decisions**: Decisions regarding the 'Publisher' agent's characteristics, the design of the workflow, and the use of tools should be made based on the specific needs of the project. Consider factors like the platforms the agent will interface with, the complexity of the tasks it will perform, and the resources available for the project.\n\nIn conclusion, building a 'Publisher' agent application based on the LangGraph library involves a deep understanding of the library and the tools involved, careful planning and design of the agent and its workflow, and the integration of various components to create a cohesive application.", 'name': 'author_agent'})}}
(updates) {'critic_agent': {'messages': ({'role': 'assistant', 'content': "The author's revised work provides a more detailed and comprehensive guide on building a 'Publisher' agent application based on the LangGraph library. The author has addressed the previous gaps and ambiguities by providing specific details about the 'Publisher' agent's tasks, the design of the agent's workflow, and the steps involved in building a Data Analyst Agent and creating an Agentic AI application. The author has also provided resources and methods for gaining the necessary knowledge and skills, and has given guidance on how to make the necessary decisions. The revised work is approved.", 'name': 'critic_agent'})}}
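The stream output above interleaves per-node `(updates)` chunks with full-state `(values)` snapshots. As a dependency-free sketch of the control flow that produces output of this shape, here plain Python functions stand in for the graph nodes; all names, state keys, and the single-revision cap are illustrative assumptions, not the article's actual code:

```python
# Simulation of the publisher loop: researcher -> author -> critic,
# looping back to the author once before the critic approves.
# Node names ('research_agent', etc.) mirror those in the logs above;
# the rest of the state shape is a simplified stand-in.

def run_publisher(topic, max_revisions=1):
    state = {"messages": [f"Topic: {topic}"], "critic_marked": False}

    def emit(node, update):
        # Mirrors streaming in both modes: the per-node update first,
        # then a snapshot of the full accumulated state.
        print(f"(updates) {{{node!r}: {update!r}}}")
        print(f"(values) {state!r}")

    # Researcher grounds the run in (simulated) search results.
    update = {"messages": ["grounded research notes"]}
    state["messages"] += update["messages"]
    emit("research_agent", update)

    revisions = 0
    while True:
        # Author writes (or rewrites) the draft.
        update = {"messages": ["draft"]}
        state["messages"] += update["messages"]
        emit("author_agent", update)

        # Critic approves, or sends the draft back up to the cap.
        verdict = "approved" if revisions >= max_revisions else "needs improvement"
        state["messages"].append(verdict)
        state["critic_marked"] = True
        emit("critic_agent", {"messages": [verdict]})
        if verdict == "approved":
            return state
        revisions += 1
```

Running `run_publisher("Publisher agent")` walks the loop exactly as in the transcript: one rejected draft, one revision, then approval.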
Final response:
Building a 'Publisher' agent application based on the LangGraph library is a multi-step process that requires a deep understanding of the LangGraph library, the LangChain framework, and the tools Streamlit and Pydantic-AI. This article will provide a detailed guide on how to build such an application, including the specific tasks that the 'Publisher' agent is expected to perform, the design of the agent's workflow, and the steps involved in building a Data Analyst Agent and creating an Agentic AI application.

1. **Understand the LangGraph Library**: LangGraph is an extension of the LangChain framework, designed for stateful, cyclic, and multi-actor Large Language Model (LLM) applications. It's used by companies like LinkedIn, Uber, and Klarna to build production-ready agents. To gain a deep understanding of the LangGraph library, you can start by exploring the official LangGraph documentation and tutorials available on their website. Participating in online forums and communities related to LangGraph can also be beneficial.

2. **Define the 'Publisher' Agent's Characteristics**: The 'Publisher' agent is designed to automate the process of publishing content across various platforms. It should be capable of scheduling posts, managing content, and tracking analytics. For instance, it should have the ability to interface with various publishing platforms, handle time-based tasks, and process analytical data.

3. **Design the Agent's Workflow**: The 'Publisher' agent's workflow should be cyclic, meaning it repeats a series of tasks in a loop. A typical workflow might involve fetching content from a database, scheduling it for publication, publishing the content, and then tracking and analyzing the performance of the published content, with error handling mechanisms in place to manage any issues that arise.

4. **Build a Data Analyst Agent with Streamlit and Pydantic-AI**: Streamlit is an open-source app framework for Machine Learning and Data Science teams, while Pydantic-AI is a data validation and settings management tool using python type annotations. To build a Data Analyst Agent, start by setting up a Streamlit application and integrating it with Pydantic-AI for data validation. Then, design the agent to fetch, process, and analyze data from the 'Publisher' agent's activities.

5. **Create an Agentic AI Application Using LangGraph**: Once the Data Analyst Agent is built, it can be integrated into the LangGraph framework to create an Agentic AI application. This involves setting up a LangGraph application, defining the agent's state and behavior within the application, and integrating the Data Analyst Agent into the application's workflow.

Requirements and Decisions:
- **Requirements**: Gaining the necessary knowledge and skills can be achieved through self-study, online courses, and practical experience. Resources like the LangGraph documentation, online tutorials, and community forums can be invaluable.
- **Decisions**: Decisions regarding the 'Publisher' agent's characteristics, the design of the workflow, and the use of tools should be made based on the specific needs of the project. Consider factors like the platforms the agent will interface with, the complexity of the tasks it will perform, and the resources available for the project.

In conclusion, building a 'Publisher' agent application based on the LangGraph library involves a deep understanding of the library and the tools involved, careful planning and design of the agent and its workflow, and the integration of various components to create a cohesive application.
And here is the critic node's final response:
The author's revised work provides a more detailed and comprehensive guide on
building a 'Publisher' agent application based on the LangGraph library.
The author has addressed the previous gaps and ambiguities by providing
specific details about the 'Publisher' agent's tasks, the design of the
agent's workflow, and the steps involved in building a Data Analyst Agent and
creating an Agentic AI application. The author has also provided resources
and methods for gaining the necessary knowledge and skills, and has given
guidance on how to make the necessary decisions. The revised work is approved.
Why does this design work for publishing?
- Grounded first pass: the researcher supplies the author with verifiable input from Tavily.
- Iterative refinement: the critic enforces a minimum quality bar, sending the draft back a single step only when it "needs improvement".
- Testable components: each node can be unit-tested with fixed inputs, and models can be swapped as needed.
- Extensible: add roles (e.g., an SEO optimizer, fact-checker, or style editor) or tools (citation fetchers, CMS uploaders).
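The reviewer's approve-or-revise decision maps naturally onto a conditional edge. Here is a hedged sketch of such a routing predicate; the `revisions` counter is a hypothetical state key introduced for illustration (the article's actual state uses a `critic_marked` flag whose exact semantics are not shown in the logs), and the real predicate may differ:

```python
# Hypothetical routing function for the critic's conditional edge.
# It inspects the critic's latest verdict and caps the rewrite loop
# at a single revision so the graph always terminates.

def route_after_critic(state: dict) -> str:
    verdict = state["messages"][-1]        # critic's latest message text
    approved = "approved" in verdict.lower()
    revisions = state.get("revisions", 0)  # hypothetical loop counter
    if approved or revisions >= 1:         # done, or loop cap reached
        return "__end__"                   # finish and publish the draft
    return "author_agent"                  # otherwise: one revision pass
```

In LangGraph such a predicate would typically be wired in with `add_conditional_edges` on the critic node, so the graph either ends or loops back to the author exactly as in the run shown above.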
That's all for this chapter on the Publisher agent. In the next chapter we will build a more fully featured application: an AI expert discussion panel.
See the next chapter
See the previous chapter
See the full code from this article in the GitHub repository
Published via Towards AI
