Author(s): michaelczarnecki
Originally published on Towards AI.
This article covers important features and syntax from new releases of the LangChain library since v1.0.0.
For more examples and explanations related to the LangChain and LangGraph libraries, see my dedicated article series. For more features related to LangChain v1.x, check out the official LangChain documentation.
One piece worth knowing about is the langchain_classic library – this is where the “classic” LangChain-era components were moved when the framework began to evolve into its new, more modular shape. I have used it frequently in the article series mentioned above.
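For example, if your older code imported legacy chains, the migration is mostly an import swap – a minimal sketch, assuming langchain_classic mirrors the old langchain namespace as its migration notes describe:
# Before (pre-1.0): from langchain.chains import LLMChain
# After – legacy components now live in the classic package:
from langchain_classic.chains import LLMChain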
AI development moves fast – the libraries change from month to month, literally! Furthermore, more and more applications are shifting from the “one prompt → one LLM call” pattern towards agent-based workflows, because agents can plan, call tools, recover from errors, and iterate.
The LangChain authors are also constantly improving the developer experience. That’s why in LangChain 1.0.0+ you can use new building blocks – one of them is create_agent, which I will demonstrate with the code snippets below.
There is another strong trend: MCP (Model Context Protocol) – a protocol that standardizes how models communicate with external tools. On one side you have the model, on the other you have APIs, databases, and utilities, and between them sits a piece of software that facilitates communication: an MCP server.
Let’s move on to the details and code samples.
Install required libraries and load environment variables
First, install LangChain, the environment-variable helper, and the MCP tooling.
!pip install -q langchain python-dotenv langchain_mcp_adapters fastmcp
If you keep your API keys in a .env file, load_dotenv() loads them into the runtime environment.
from dotenv import load_dotenv

load_dotenv()
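The snippets below assume the relevant provider key (for example OPENAI_API_KEY for the gpt-5-mini examples) is present in that file.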
create_agent (Agent + Tool)
A minimal agent with one tool. The model can decide when to call rate_city, then respond like a normal chat assistant.
from langchain.agents import create_agent

def rate_city(city: str) -> str:
    """Rate the city."""
    return f"{city} is the best place in the world!"

agent = create_agent(
    model="gpt-5-mini",
    tools=[rate_city],
    system_prompt="You are a helpful assistant",
)

result = agent.invoke({"messages": [{"role": "user", "content": "Is Poznań a nice city?"}]})
last_msg = result["messages"][-1]
print(last_msg.content)
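Note that the tool function’s docstring and type hints become the tool description and argument schema that the model sees, so it is worth keeping them accurate.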
Working with message objects
This shows the “message-first” style: explicit SystemMessage and HumanMessage objects, with invoke() returning an AIMessage.
from langchain.chat_models import init_chat_model
from langchain.messages import SystemMessage, HumanMessage

chat = init_chat_model("gpt-5-mini")

messages = [
    SystemMessage("You are a concise assistant."),
    HumanMessage("Write a 1-sentence summary of what LangChain is."),
]

ai_msg = chat.invoke(messages)  # -> AIMessage
print(ai_msg.content)
Structured output with response_format
Here the agent returns structured data validated by a Pydantic model. You get predictable fields instead of “whatever the model felt like typing today”.
from pydantic import BaseModel, Field
from langchain.agents import create_agent

class ContactInfo(BaseModel):
    """Contact information for a person."""
    name: str = Field(description="The name of the person")
    email: str = Field(description="The email address")
    phone: str = Field(description="The phone number")

agent = create_agent(
    model="gpt-5-mini",
    response_format=ContactInfo,
)

result = agent.invoke({
    "messages": [
        {"role": "user", "content": "Extract contact info from: John Doe, john@example.com, (555) 123-4567"}
    ]
})

structured = result["structured_response"]
print(structured)
print(type(structured))
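Because structured_response is a ContactInfo instance, its fields are regular attributes – for example structured.email gives you the extracted address directly.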
Short-term memory using a checkpointer
This demonstrates stateful conversation through a thread_id. In LangChain we can now use InMemorySaver, a checkpointer passed directly as a parameter to create_agent to support short-term memory.
The second question in the code snippet (“What is my name?”) depends on the previous message.
from langchain.agents import create_agent
from langgraph.checkpoint.memory import InMemorySaver

checkpointer = InMemorySaver()

agent = create_agent(
    model="gpt-5-mini",
    checkpointer=checkpointer,
)

config = {"configurable": {"thread_id": "demo-thread-1"}}

agent.invoke({"messages": [{"role": "user", "content": "Hi! My name is Michael."}]}, config=config)
result = agent.invoke({"messages": [{"role": "user", "content": "What is my name?"}]}, config=config)
print(result["messages"][-1].content)
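To see that the memory is scoped per conversation, you can repeat the question under a different thread_id – a small sketch, where the thread name itself is arbitrary:
# Same agent, different thread: the checkpointer keeps the states separate,
# so the model has no record of the name "Michael" here.
other_config = {"configurable": {"thread_id": "demo-thread-2"}}
fresh = agent.invoke(
    {"messages": [{"role": "user", "content": "What is my name?"}]},
    config=other_config,
)
print(fresh["messages"][-1].content)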
Human-in-the-loop middleware
Human reviewers and approvers are still important in many agentic workflows. This is the classic enterprise pattern: the agent can propose an action (like sending an email), but execution is interrupted until a human approves, edits, or rejects it.
from langchain.agents import create_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
from langgraph.checkpoint.memory import InMemorySaver
from langgraph.types import Command

def read_email(email_id: str) -> str:
    """Read email mock function"""
    return f"(mock) Email content for id={email_id}"

def send_email(recipient: str, subject: str, body: str) -> str:
    """Send email mock function"""
    return f"(mock) Sent email to {recipient} with subject={subject} and content={body}"

checkpointer = InMemorySaver()

agent = create_agent(
    model="gpt-5-mini",
    tools=[read_email, send_email],
    checkpointer=checkpointer,
    middleware=[
        HumanInTheLoopMiddleware(
            interrupt_on={
                "send_email": {"allowed_decisions": ["approve", "edit", "reject"]},
                "read_email": False,
            }
        ),
    ],
)
config = {"configurable": {"thread_id": "hitl-demo"}}
paused = agent.invoke(
    {"messages": [{"role": "user", "content": "Send an email to alice@example.com with subject 'Hi' and say hello."}]},
    config=config,
)
print("Paused state keys:", paused.keys())
Once the agent pauses, you can resume it by sending a Command(resume=...) with your decision.
resumed = agent.invoke(
    Command(resume={"decisions": [{"type": "approve"}]}),
    config=config,
)
print(resumed["messages"][-1].content)
Guardrail middleware for PII
We can also add practical safety layers to avoid leaking sensitive data, for example redacting emails, masking credit cards, and blocking API keys based on regex detectors.
from langchain.agents import create_agent
from langchain.agents.middleware import PIIMiddleware

def echo(text: str) -> str:
    """Print text."""
    return text

agent = create_agent(
    model="gpt-5-mini",
    tools=[echo],
    middleware=[
        PIIMiddleware("email", strategy="redact", apply_to_input=True),
        PIIMiddleware("credit_card", strategy="mask", apply_to_input=True),
        PIIMiddleware(
            "api_key",
            detector=r"sk-[a-zA-Z0-9]{32}",
            strategy="block",
            apply_to_input=True,
        ),
    ],
)
out = agent.invoke({
    "messages": [{
        "role": "user",
        "content": "Extract information from text: My email is john@example.com and card is 5105-1051-0510-5100"
    }]
})
print(out["messages"][-1].content)
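With these settings, the email address should come back redacted and the card number masked, while any string matching the API-key regex would block the request before it reaches the model (the exact placeholder formats depend on the middleware version).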
Streaming: watching the agent step-by-step
In modern agentic apps it is often important to deliver a high-quality user experience and not make users wait a long time for a response.
The code snippet below shows step streaming in “updates” mode – useful for UIs where you want answers to appear live.
from langchain.agents import create_agent

def rate_city(city: str) -> str:
    """Rate city mock tool."""
    return f"The best city is {city}!"

agent = create_agent(model="gpt-5", tools=[rate_city])

for chunk in agent.stream(
    {"messages": [{"role": "user", "content": "Is Poznań a nice city? Rate the city and afterwards plan a trip to Poznań in 5 stages."}]},
    stream_mode="updates",
):
    for step, data in chunk.items():
        last = data["messages"][-1]
        print(f"step: {step:>6} | type={type(last).__name__}")
        try:
            print("content_blocks:", last.content_blocks)
        except Exception:
            print("content:", getattr(last, "content", None))
MCP Server with FastMCP
This is a little “math” MCP server. It exposes its tools over stdio, so an agent can call them as external capabilities.
from fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
We can save the above code as math_server.py, an external tool module that will be accessed through MCP.
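Because the server uses the stdio transport, the client will launch math_server.py as a subprocess and exchange JSON-RPC messages over its stdin/stdout, so no ports or extra networking setup are needed.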
Connecting to MCP from LangChain and using the tools
This cell connects to the MCP server, imports its tools, and creates an agent that can solve math problems by calling the MCP toolset.
import asyncio
import nest_asyncio
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain.agents import create_agent

nest_asyncio.apply()

async def demo_mcp():
    client = MultiServerMCPClient(
        {
            "math": {
                "transport": "stdio",
                "command": "python",
                "args": ["math_server.py"],
            },
        }
    )
    tools = await client.get_tools()
    agent = create_agent("gpt-5-mini", tools)
    r1 = await agent.ainvoke({"messages": [{"role": "user", "content": "what's (3 + 5) x 12?"}]})
    print(r1["messages"][-1].content)

asyncio.run(demo_mcp())
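MultiServerMCPClient is designed for more than one server at a time. A hedged sketch – the second “weather” server and its URL are hypothetical, purely for illustration:
# Mixing a local stdio server with a remote HTTP one; the "weather"
# entry and its URL are assumptions, not a real service.
client = MultiServerMCPClient(
    {
        "math": {"transport": "stdio", "command": "python", "args": ["math_server.py"]},
        "weather": {"transport": "streamable_http", "url": "http://localhost:8000/mcp"},
    }
)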
If you look at the examples in this article, you can clearly see the direction:
- Agents become the default abstraction
- Tool-use becomes standardized (MCP)
- Quality and security move closer to the core runtime (structured output, memory, middleware, guardrails).
Thank you for reading.
For more examples and explanations related to the LangChain and LangGraph libraries, I invite you to visit my article series once again. For more features related to LangChain v1.x, check out the official LangChain documentation.
Published via Towards AI
