We start this tutorial by building a meta-reasoning agent that decides how to think before it thinks. Instead of applying the same reasoning process to every question, we design a system that evaluates complexity, chooses between fast inference, deep chain-of-thought reasoning, or tool-based computation, and then adapts its behavior in real time. By examining each component, we understand how an intelligent agent can control its cognitive effort, balance speed and accuracy, and follow a strategy suited to the nature of the problem. In doing so, we experience a shift from reactive responding to strategic reasoning.
import re
import time
import random
from typing import Dict, List, Tuple, Literal
from dataclasses import dataclass, field
@dataclass
class QueryAnalysis:
    query: str
    complexity: Literal["simple", "medium", "complex"]
    strategy: Literal["fast", "cot", "tool"]
    confidence: float
    reasoning: str
    execution_time: float = 0.0
    success: bool = True
class MetaReasoningController:
    def __init__(self):
        self.query_history: List[QueryAnalysis] = []
        # Lightweight regex signals used to route each query to a strategy.
        self.patterns = {
            'math': r'(\d+\s*[+\-*/]\s*\d+)|calculate|compute|sum|product',
            'search': r'current|latest|news|today|who is|what is.*now',
            'creative': r'write|poem|story|joke|imagine',
            'logical': r'if.*then|because|therefore|prove|explain why',
            'simple_fact': r'^(what|who|when|where) (is|are|was|were)',
        }

    def analyze_query(self, query: str) -> QueryAnalysis:
        """Classify the query's complexity and pick a reasoning strategy."""
        query_lower = query.lower()
        has_math = bool(re.search(self.patterns['math'], query_lower))
        needs_search = bool(re.search(self.patterns['search'], query_lower))
        is_creative = bool(re.search(self.patterns['creative'], query_lower))
        is_logical = bool(re.search(self.patterns['logical'], query_lower))
        is_simple = bool(re.search(self.patterns['simple_fact'], query_lower))
        word_count = len(query.split())
        # A question mark before the end, or a semicolon, suggests a multi-part query.
        has_multiple_parts = "?" in query[:-1] or ';' in query
        if has_math:
            complexity = "medium"
            strategy = "tool"
            reasoning = "Math detected - using calculator tool for accuracy"
            confidence = 0.9
        elif needs_search:
            complexity = "medium"
            strategy = "tool"
            reasoning = "Current/dynamic info - needs search tool"
            confidence = 0.85
        elif is_simple and word_count < 10:
            complexity = "simple"
            strategy = "fast"
            reasoning = "Simple factual query - fast retrieval sufficient"
            confidence = 0.95
        elif is_logical or has_multiple_parts or word_count > 30:
            complexity = "complex"
            strategy = "cot"
            reasoning = "Complex reasoning required - using chain-of-thought"
            confidence = 0.8
        elif is_creative:
            complexity = "medium"
            strategy = "cot"
            reasoning = "Creative task - chain-of-thought for idea generation"
            confidence = 0.75
        else:
            complexity = "medium"
            strategy = "cot"
            reasoning = "Unclear complexity - defaulting to chain-of-thought"
            confidence = 0.6
        return QueryAnalysis(query, complexity, strategy, confidence, reasoning)
We have established the core structures that allow our agent to analyze incoming queries. We define how we classify complexity, detect patterns, and decide on a reasoning strategy. As we build this foundation, we build the brain that determines how we think before we respond.
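To make the routing concrete, here is a minimal, hypothetical sketch (not part of the original notebook) that runs a few queries through the controller on its own and prints the chosen strategy:

# Hypothetical quick check of the controller defined above.
controller = MetaReasoningController()
for q in ["What is the capital of Spain?", "Calculate 12 * 7", "Why do seasons change?"]:
    a = controller.analyze_query(q)
    print(q, "->", a.strategy, a.complexity, f"{a.confidence:.0%}")
# Expected routing: the first query matches the simple-fact pattern (fast, 95%),
# the second matches the math pattern (tool, 90%), and the third falls through
# to the chain-of-thought default (cot, 60%).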
class FastHeuristicEngine:
    def __init__(self):
        self.knowledge_base = {
            'capital of france': 'Paris',
            'capital of spain': 'Madrid',
            'speed of light': '299,792,458 meters per second',
            'boiling point of water': '100°C or 212°F at sea level',
        }

    def answer(self, query: str) -> str:
        q = query.lower()
        for k, v in self.knowledge_base.items():
            if k in q:
                return f"Answer: {v}"
        if 'hello' in q or 'hi' in q:
            return "Hello! How can I help you?"
        return "Fast heuristic: No direct match found."


class ChainOfThoughtEngine:
    def answer(self, query: str) -> str:
        s = []
        s.append("Step 1: Understanding the question")
        s.append(f"   → The query asks about: {query[:50]}...")
        s.append("\nStep 2: Breaking down the problem")
        if 'why' in query.lower():
            s.append("   → This is a causal question requiring explanation")
            s.append("   → Need to identify causes and effects")
        elif 'how' in query.lower():
            s.append("   → This is a procedural question")
            s.append("   → Need to outline steps or mechanisms")
        else:
            s.append("   → Analyzing key concepts and relationships")
        s.append("\nStep 3: Synthesizing answer")
        s.append("   → Combining insights from reasoning steps")
        s.append("\nStep 4: Final answer")
        s.append("   → (Detailed response based on reasoning chain)")
        return "\n".join(s)


class ToolExecutor:
    def calculate(self, expression: str) -> float:
        m = re.search(r'(\d+\.?\d*)\s*([+\-*/])\s*(\d+\.?\d*)', expression)
        if m:
            a, op, b = m.groups()
            a, b = float(a), float(b)
            ops = {
                '+': lambda x, y: x + y,
                '-': lambda x, y: x - y,
                '*': lambda x, y: x * y,
                '/': lambda x, y: x / y if y != 0 else float('inf'),
            }
            return ops[op](a, b)
        return None

    def search(self, query: str) -> str:
        return f"(Simulated search results for: {query})"

    def execute(self, query: str, tool_type: str) -> str:
        if tool_type == "calculator":
            r = self.calculate(query)
            if r is not None:
                return f"Calculator result: {r}"
            return "Could not parse mathematical expression"
        elif tool_type == "search":
            return self.search(query)
        return "Tool execution completed"
We now develop the engines that actually do the thinking. We design a fast heuristic module for simple lookups, a chain-of-thought engine for deeper reasoning, and tool functions for calculation and search. As we implement these components, we prepare the agent to switch flexibly between different modes of intelligence.
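As a quick sanity check before wiring everything together, a small assumed sketch (not from the original notebook) exercises each engine in isolation:

# Hypothetical standalone check of the three engines defined above.
fast = FastHeuristicEngine()
cot = ChainOfThoughtEngine()
tools = ToolExecutor()

print(fast.answer("What is the capital of France?"))  # -> "Answer: Paris"
print(tools.calculate("156 * 23"))                    # -> 3588.0 (regex-parsed binary expression)
print(cot.answer("Why do birds migrate south?"))      # -> multi-step reasoning trace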
class MetaReasoningAgent:
    def __init__(self):
        self.controller = MetaReasoningController()
        self.fast_engine = FastHeuristicEngine()
        self.cot_engine = ChainOfThoughtEngine()
        self.tool_executor = ToolExecutor()
        # Per-strategy counters used for the performance report.
        self.stats = {
            'fast': {'count': 0, 'total_time': 0},
            'cot': {'count': 0, 'total_time': 0},
            'tool': {'count': 0, 'total_time': 0},
        }

    def process_query(self, query: str, verbose: bool = True) -> str:
        if verbose:
            print("\n" + "="*60)
            print(f"QUERY: {query}")
            print("="*60)
        t0 = time.time()
        # Meta-reasoning step: decide how to think before answering.
        analysis = self.controller.analyze_query(query)
        if verbose:
            print("\n🧠 META-REASONING:")
            print(f"   Complexity: {analysis.complexity}")
            print(f"   Strategy: {analysis.strategy.upper()}")
            print(f"   Confidence: {analysis.confidence:.2%}")
            print(f"   Reasoning: {analysis.reasoning}")
            print(f"\n⚡ EXECUTING {analysis.strategy.upper()} STRATEGY...\n")
        # Dispatch to the engine that matches the chosen strategy.
        if analysis.strategy == "fast":
            resp = self.fast_engine.answer(query)
        elif analysis.strategy == "cot":
            resp = self.cot_engine.answer(query)
        elif analysis.strategy == "tool":
            if re.search(self.controller.patterns['math'], query.lower()):
                resp = self.tool_executor.execute(query, "calculator")
            else:
                resp = self.tool_executor.execute(query, "search")
        dt = time.time() - t0
        analysis.execution_time = dt
        self.stats[analysis.strategy]['count'] += 1
        self.stats[analysis.strategy]['total_time'] += dt
        self.controller.query_history.append(analysis)
        if verbose:
            print(resp)
            print(f"\n⏱️ Execution time: {dt:.4f}s")
        return resp

    def show_stats(self):
        print("\n" + "="*60)
        print("AGENT PERFORMANCE STATISTICS")
        print("="*60)
        for s, d in self.stats.items():
            if d['count'] > 0:
                avg = d['total_time'] / d['count']
                print(f"\n{s.upper()} Strategy:")
                print(f"   Queries processed: {d['count']}")
                print(f"   Average time: {avg:.4f}s")
        print("\n" + "="*60)
We bring all the components together into one integrated agent. We streamline the flow from meta-reasoning to execution, track performance, and observe how each strategy behaves. As we run this system, we see our agent making decisions, reasoning, and adapting in real time.
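For use outside the tutorial runner, a minimal sketch (our assumption, not part of the original demo) shows how the agent can be called programmatically with verbose output switched off and its recorded analysis inspected afterwards:

# Hypothetical programmatic use of MetaReasoningAgent.
agent = MetaReasoningAgent()
response = agent.process_query("Calculate 42 / 6", verbose=False)
last = agent.controller.query_history[-1]
print(response)                                       # -> "Calculator result: 7.0"
print(last.strategy, f"{last.execution_time:.4f}s")   # -> "tool" plus the measured latency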
def run_tutorial():
    print("""
    META-REASONING AGENT TUTORIAL
    "When Should I Think Hard vs Answer Fast?"
    This agent demonstrates:
    1. Fast vs deep vs tool-based reasoning
    2. Choosing cognitive strategy
    3. Adaptive intelligence
    """)
    agent = MetaReasoningAgent()
    test_queries = [
        "What is the capital of France?",
        "Calculate 156 * 23",
        "Why do birds migrate south for winter?",
        "What is the latest news today?",
        "Hello!",
        "If all humans need oxygen and John is human, what can we conclude?",
    ]
    for q in test_queries:
        agent.process_query(q, verbose=True)
        time.sleep(0.5)
    agent.show_stats()
    print("\nTutorial complete!")
    print("• Meta-reasoning chooses how to think")
    print("• Different queries need different strategies")
    print("• Smart agents adapt reasoning dynamically\n")
We create a demo runner to demonstrate the agent's capabilities. We give it a variety of questions and watch how it chooses a strategy and generates responses. We experience the benefits of adaptive reasoning firsthand as we interact with it.
if __name__ == "__main__":
    run_tutorial()
We launch the entire tutorial from a simple main block. We run the demonstrations and observe the full meta-reasoning pipeline in action. With it in place, we complete the journey from design to a fully functioning adaptive agent.
In conclusion, we see how building meta-reasoning agents allows us to move beyond fixed-pattern responses and toward adaptive intelligence. The agent analyzes each query, selects the most appropriate reasoning mode, and executes it efficiently, all while tracking its own performance. By designing and experimenting with these components, we gain practical insight into how advanced agents can self-regulate their thinking, optimize effort, and produce better outcomes.
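As one possible next step (an idea of ours, not something implemented above), the query_history the agent already records could feed a simple self-monitoring report, hinting at how thresholds might be tuned over time:

# Hypothetical extension: summarize how often each strategy was chosen
# and with what average confidence, using data the agent already records.
from collections import defaultdict

def summarize_history(agent: MetaReasoningAgent) -> None:
    buckets = defaultdict(list)
    for a in agent.controller.query_history:
        buckets[a.strategy].append(a.confidence)
    for strategy, confs in buckets.items():
        print(f"{strategy}: {len(confs)} queries, mean confidence {sum(confs)/len(confs):.2f}")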
Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of Marktechpost, an Artificial Intelligence media platform known for its in-depth coverage of machine learning and deep learning news that is technically sound and easily understood by a wide audience. The platform boasts over 2 million monthly views, reflecting its popularity among readers.
