How to Build a Contract-First Agent Decision System with PydenticAI for Risk-Aware, Policy-Compliant Enterprise AI


In this tutorial, we demonstrate how to build a contract-first agentic decision system with PydanticAI, treating structured schemas as non-negotiable governance contracts rather than optional output formats. We show how we define a strict decision model that encodes policy compliance, risk assessment, confidence calibration, and actionable next steps directly into the agent's output schema. By combining Pydantic validators with PydanticAI's retry and self-correction mechanisms, we ensure that the agent cannot emit logically inconsistent or non-compliant decisions. Throughout the workflow, we focus on building an enterprise-grade decision agent that reasons under constraints, making it suitable for real-world risk, compliance, and governance scenarios rather than a toy prompt-based demo.

!pip -q install -U pydantic-ai pydantic openai nest_asyncio


import os
import time
import asyncio
import getpass
from dataclasses import dataclass
from typing import List, Literal


import nest_asyncio
nest_asyncio.apply()


from pydantic import BaseModel, Field, model_validator
from pydantic_ai import Agent, ModelRetry, RunContext
from pydantic_ai.models.openai import OpenAIChatModel
from pydantic_ai.providers.openai import OpenAIProvider


OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    try:
        from google.colab import userdata
        OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    except Exception:
        OPENAI_API_KEY = None
if not OPENAI_API_KEY:
    OPENAI_API_KEY = getpass.getpass("Enter OPENAI_API_KEY: ").strip()

We set up the execution environment by installing the required libraries and configuring asynchronous execution for Google Colab. We securely load the OpenAI API key and ensure the runtime is ready to handle async agent calls. This establishes a stable base for running a contract-first agent without environment issues.

class RiskItem(BaseModel):
    risk: str = Field(..., min_length=8)
    severity: Literal["low", "medium", "high"]
    mitigation: str = Field(..., min_length=12)


class DecisionOutput(BaseModel):
    decision: Literal["approve", "approve_with_conditions", "reject"]
    confidence: float = Field(..., ge=0.0, le=1.0)
    rationale: str = Field(..., min_length=80)
    identified_risks: List[RiskItem] = Field(..., min_length=2)
    compliance_passed: bool
    conditions: List[str] = Field(default_factory=list)
    next_steps: List[str] = Field(..., min_length=3)
    timestamp_unix: int = Field(default_factory=lambda: int(time.time()))

    # Cross-field rules need every field to be populated, so we use
    # after-model validators: a per-field validator only sees fields
    # declared before it in info.data, which would silently skip checks.
    @model_validator(mode="after")
    def confidence_vs_risk(self) -> "DecisionOutput":
        if any(r.severity == "high" for r in self.identified_risks) and self.confidence > 0.70:
            raise ValueError("confidence too high given high-severity risks")
        return self

    @model_validator(mode="after")
    def reject_if_non_compliant(self) -> "DecisionOutput":
        if self.compliance_passed is False and self.decision != "reject":
            raise ValueError("non-compliant decisions must be reject")
        return self

    @model_validator(mode="after")
    def conditions_match_decision(self) -> "DecisionOutput":
        if self.decision == "approve_with_conditions" and len(self.conditions) < 2:
            raise ValueError("approve_with_conditions requires at least 2 conditions")
        if self.decision == "approve" and self.conditions:
            raise ValueError("approve must not include conditions")
        return self

We define the main decision contract using a strict Pydantic model that precisely describes a valid decision. We encode logical constraints such as confidence-risk alignment, compliance-driven rejection, and conditional approval directly into the schema. This ensures that any agent output must satisfy business logic, not just syntactic structure.
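To see the contract enforce business rules before any agent is involved, we can try to construct an inconsistent decision by hand. The field values below are purely illustrative; the point is that Pydantic rejects the object at construction time:

from pydantic import ValidationError

try:
    DecisionOutput(
        decision="approve",  # approval is inconsistent with the flag below
        confidence=0.55,
        rationale=(
            "The vendor meets baseline requirements and deployment can "
            "proceed on the existing infrastructure without further review."
        ),
        identified_risks=[
            RiskItem(risk="Vendor lock-in exposure", severity="medium",
                     mitigation="Negotiate exit and data-export clauses up front."),
            RiskItem(risk="Unclear data residency", severity="low",
                     mitigation="Confirm hosting region in the vendor contract."),
        ],
        compliance_passed=False,  # non-compliant, so only "reject" is valid
        next_steps=["Review vendor contract", "Escalate to security", "Re-run audit"],
    )
except ValidationError as err:
    print(err)  # reports: "non-compliant decisions must be reject"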

@dataclass
class DecisionContext:
    company_policy: str
    risk_threshold: float = 0.6


model = OpenAIChatModel(
    "gpt-5",
    provider=OpenAIProvider(api_key=OPENAI_API_KEY),
)


agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    system_prompt="""
You are a corporate decision analysis agent.
You must evaluate risk, compliance, and uncertainty.
All outputs must strictly satisfy the DecisionOutput schema.
""",
)

We inject the enterprise context via a typed dependency object and initialize the OpenAI-backed PydanticAI agent. We configure the agent to emit only structured decision outputs that conform to the predefined contract. This step formalizes the separation between business context and model logic.
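If we also want the policy text itself to reach the model, PydanticAI lets us register a dynamic system prompt derived from the typed deps at run time. A minimal sketch (the add_policy_context name is ours):

@agent.system_prompt
def add_policy_context(ctx: RunContext[DecisionContext]) -> str:
    # Pulls the per-run policy out of the typed deps, keeping business
    # context out of the hard-coded static prompt above.
    return (
        f"Company policy: {ctx.deps.company_policy}\n"
        f"Risk threshold: {ctx.deps.risk_threshold}"
    )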

@agent.output_validator
def ensure_risk_quality(result: DecisionOutput) -> DecisionOutput:
    # ModelRetry feeds the error message back to the model so it can
    # self-correct, instead of failing the run outright.
    if len(result.identified_risks) < 2:
        raise ModelRetry("minimum two risks required")
    if not any(r.severity in ("medium", "high") for r in result.identified_risks):
        raise ModelRetry("at least one medium or high risk required")
    return result


@agent.output_validator
def enforce_policy_controls(ctx: RunContext[DecisionContext], result: DecisionOutput) -> DecisionOutput:
    # The run context gives validators typed access to the injected deps
    # (ctx.deps.company_policy), so no module-level global is needed.
    text = " ".join(
        [result.rationale, *result.next_steps, *result.conditions]
    ).lower()
    if result.compliance_passed:
        if not any(k in text for k in ("encryption", "audit", "logging", "access control", "key management")):
            raise ModelRetry("missing concrete security controls")
    return result

We add output validators that act as governance checkpoints after the model generates a response. We require the agent to identify meaningful risks and to reference concrete security controls whenever it claims compliance. If either constraint is violated, raising ModelRetry triggers an automatic retry so the agent can self-correct.
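The self-correction loop is bounded: each validation failure consumes one retry from the agent's budget. A minimal sketch of raising that budget, assuming PydanticAI's retries keyword on Agent (the default allows a single retry); this is a hypothetical variant of the agent above, which would need its validators registered the same way:

resilient_agent = Agent(
    model=model,
    deps_type=DecisionContext,
    output_type=DecisionOutput,
    retries=3,  # up to 3 validation/ModelRetry round-trips before erroring
    system_prompt="You are a corporate decision analysis agent.",
)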

async def run_decision():
    # Deps are passed to agent.run and reach validators via RunContext,
    # so no global state is required.
    deps = DecisionContext(
        company_policy=(
            "No deployment of systems handling personal data or transaction metadata "
            "without encryption, audit logging, and least-privilege access control."
        )
    )

    prompt = """
Decision request:
Deploy an AI-powered customer analytics dashboard using a third-party cloud vendor.
The system processes user behavior and transaction metadata.
Audit logging is not implemented and customer-managed keys are uncertain.
"""

    result = await agent.run(prompt, deps=deps)
    return result.output




decision = asyncio.run(run_decision())


from pprint import pprint
pprint(decision.model_dump())

We run the agent on a realistic decision request and capture the validated, structured output. The agent weighs risk, policy compliance, and confidence before committing to a final decision, completing an end-to-end contract-first decision workflow in a production-style setup.
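Because the result is a validated Pydantic model rather than free text, downstream systems can serialize and branch on it directly. A minimal sketch using the decision object from the run above:

# JSON serialization gives an audit-ready record with a stable schema.
audit_record = decision.model_dump_json(indent=2)

# Gate downstream actions on typed fields instead of parsing prose.
if decision.decision == "reject":
    print("Deployment blocked:", decision.rationale[:120], "...")
elif decision.decision == "approve_with_conditions":
    for condition in decision.conditions:
        print("Must satisfy:", condition)
else:
    print("Approved with confidence", decision.confidence)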

Finally, we demonstrate how to move from free-form LLM outputs to governed, trusted decision systems using PydanticAI. By enforcing hard contracts at the schema level, we automatically align decisions with policy requirements, risk severity, and confidence realism, without manual prompt tuning. This approach lets us create agents that fail safely, self-correct when constraints are violated, and produce auditable, structured outputs that downstream systems can rely on. Ultimately, contract-first agent design enables us to deploy agentic AI as a trusted decision layer within production and enterprise environments.




Asif Razzaq is the CEO of Marktechpost Media Inc. As a visionary entrepreneur and engineer, Asif is committed to harnessing the potential of Artificial Intelligence for social good. His most recent endeavor is the launch of Marktechpost, an Artificial Intelligence media platform known for its in-depth coverage of machine learning and deep learning news that is technically robust and easily understood by a wide audience. The platform boasts over 2 million monthly views, reflecting its popularity among readers.
