Five Skills I Actually Use Every Day as an AI PM (And How You Can Too)

This post first appeared in Aman Khan’s newsletter, AI Product Playbook. It is republished here with permission from the author.

Let’s start with something honest. When people ask me “Should I become an AI PM?” I tell them they’re asking the wrong question.

Here’s what I learned: Becoming an AI PM doesn’t mean chasing a trendy job title. It’s about developing solid skills that make you more effective at building products in a world where AI touches everything.

Every PM is becoming an AI PM, whether they realize it or not. Your payment flow will get fraud detection. Your search bar will get semantic search. Your customer support will get chatbots.

Don’t think of AI product management as an “or.” Think of it as one more “and.” For example: AI x Health Tech PM or AI x Fintech PM.

Five Skills I Actually Use Every Day

This post is drawn from a conversation on Akash Gupta’s Growth podcast. You can find the episode here.

After ~9 years of building AI products (the last three spent fully ramped up on LLMs and agents), here are the skills I use consistently: not the ones that sound good in a blog post, but the ones I used literally yesterday.

  • AI prototyping
  • Observability and tracing
  • Evals: the new PRD for AI PMs
  • RAG vs. fine-tuning vs. prompt engineering
  • Working with AI engineers

1. Prototyping: Why I Code Every Week

Last month, our design team spent two weeks creating beautiful mocks for the AI agent interface. It looked perfect. I then spent 30 minutes creating a functional prototype in Cursor, and we immediately discovered three fundamental UX problems that the mocks hadn’t revealed.

Skill: Using AI-powered coding tools to create rough prototypes.
Resource: Cursor. (It’s VS Code, but you can describe what you want in plain English.)
Why it matters: You can’t understand AI behavior from static mocks.

How to start this week:

  1. Download Cursor.
  2. Make something stupid simple. (I started with a personal website landing page.)
  3. Show it to an engineer and ask what you did wrong.
  4. Repeat.

You are not trying to become an engineer. You are trying to understand the constraints and the possibilities.
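Cursor writes most of the code for you, but to give a sense of the scale involved: a “stupid simple” first prototype can be as small as the sketch below, a terminal chat loop around an LLM API. (This assumes the openai Python package and an OPENAI_API_KEY in your environment; the model name is just an example.)

```python
# A throwaway chat prototype: a bare terminal loop around an LLM API.
# Assumes `pip install openai` and OPENAI_API_KEY set in your environment.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a concise support agent."}]

while True:
    user_input = input("you> ")
    if user_input.strip().lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name; use whatever your team uses
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print(f"agent> {reply}")
```

Even a loop this crude surfaces things static mocks never will: latency, tone drift, how the model handles vague or hostile input.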

2. Observability: Debugging the black box

Observability is how you actually peek under the hood and see how your agent is working.

Skill: Using traces to understand what your AI actually did.
Resource: Any APM that supports LLM tracing. (We use our own at Arize, but there are several.)
Why it matters: “AI is broken” is not actionable. “The retrieval step returned the wrong document” is.
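Tools differ, so rather than showing any particular vendor’s API, here is a hand-rolled sketch of the core idea: a trace is just a structured record of each pipeline step (retrieval, generation) with its inputs, outputs, and timing. The pipeline functions here are made-up stand-ins for your real retrieval and LLM calls.

```python
# Hand-rolled tracing sketch: real LLM observability tools do this for you,
# but underneath, a trace is just structured records per pipeline step.
import json
import time
import uuid

def traced(trace, step_name, fn, **inputs):
    """Run one pipeline step, recording its inputs, output, and latency."""
    start = time.time()
    output = fn(**inputs)
    trace["steps"].append({
        "step": step_name,
        "inputs": inputs,
        "output": output,
        "latency_ms": round((time.time() - start) * 1000, 1),
    })
    return output

# Hypothetical stubs standing in for real vector search and LLM calls.
def retrieve(query):
    return ["doc-42: refund policy"]

def generate(query, docs):
    return f"Based on {docs[0]}, refunds take 5 business days."

trace = {"trace_id": str(uuid.uuid4()), "steps": []}
docs = traced(trace, "retrieval", retrieve, query="Where is my refund?")
answer = traced(trace, "generation", generate, query="Where is my refund?", docs=docs)
print(json.dumps(trace, indent=2))
```

When a bad answer comes in, the trace tells you whether retrieval fetched the wrong document or generation mangled a good one. That is the difference between “AI is broken” and a fixable bug.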

Your first observability exercise:

  1. Pick any AI product you use every day.
  2. Try to trigger an edge case or error.
  3. Write down what you think went wrong internally.
  4. Building that mental model is 80% of the skill.

3. Evaluation: Your new definition of “done”

Vibe coding works when you’re shipping prototypes. It doesn’t work when you’re shipping production code.

Skill: Translating subjective quality into measurable metrics.
Resource: Start with spreadsheets; move up to a proper evaluation framework.
Why it matters: You can’t improve what you can’t measure.

Create your first eval:

  1. Choose a quality dimension (brevity, friendliness, accuracy).
  2. Create 20 examples of good and bad. Label each one (for example, “too verbose” or “just right”).
  3. Score your current system and set a goal: 85% of responses should be “just right.” (A minimal scoring sketch follows this list.)
  4. That number is now your North Star. Keep iterating until you hit it.
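Before you adopt a framework, step 3 can literally be a few lines of code (or a spreadsheet formula). A minimal sketch, with made-up data and the labels and 85% target from the steps above:

```python
# Minimal eval: score labeled examples against a target pass rate.
labeled_examples = [
    {"response": "Your refund is on the way!", "label": "just right"},
    {"response": "Per policy section 4.2.1, subsection b, ...", "label": "too verbose"},
    {"response": "Refund sent. Allow 5 business days.", "label": "just right"},
    # ...grow this to your 20+ examples
]

TARGET = 0.85
passes = sum(1 for ex in labeled_examples if ex["label"] == "just right")
pass_rate = passes / len(labeled_examples)

print(f"pass rate: {pass_rate:.0%} (target: {TARGET:.0%})")
print("ship it" if pass_rate >= TARGET else "keep iterating")
```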

4. Technical Intuition: Knowing Your Options

Suppose we want our support responses to sound more on-brand. Here are the options:

Prompt engineering (1 day): Add brand voice guidelines to the system prompt.

Few-shot examples (3 days): Include examples of on-brand responses.

RAG with a style guide (1 week): Retrieve guidance from our actual brand document.

Fine-tuning (1 month): Train a model on our support transcripts.

Each has different costs, timelines, and trade-offs. My job is to know which one to recommend when.
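To make the first three options concrete, here is a rough sketch of how they differ in shape. Everything here is illustrative; search_brand_doc is a placeholder for whatever retrieval you already have, not a real library call.

```python
# Option 1 -- prompt engineering: bake voice guidance into the system prompt.
SYSTEM_PROMPT = "You are our support agent. Be warm, brief, and on-brand."

# Option 2 -- few-shot: prepend curated examples of on-brand responses.
FEW_SHOT = [
    {"role": "user", "content": "Where's my order?"},
    {"role": "assistant", "content": "It's on the way! Here's your tracking link: ..."},
]

# Option 3 -- RAG: retrieve the relevant slice of the brand doc per query.
def build_messages(query, search_brand_doc):
    """search_brand_doc is a placeholder for your retrieval function."""
    guidance = search_brand_doc(query)  # e.g., top passage from the style guide
    system = f"{SYSTEM_PROMPT}\nStyle guide excerpt:\n{guidance}"
    return [{"role": "system", "content": system}] + FEW_SHOT + [
        {"role": "user", "content": query}
    ]
```

Fine-tuning (option 4) doesn’t change the prompt at all; you change the model itself, which is why it costs a month instead of a day.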

Building intuition without building models:

  1. When you see an AI feature you like, write down three ways the team might have built it.
  2. Ask an AI engineer if you’re right.
  3. Your wrong guesses teach you more than your right ones.

5. New PM-Engineer partnership

The biggest change? How I work with engineers.

The old way: I write the requirements. They build it. We test it. We ship.

The new way: We label the training data together. We define success metrics together. We debug failures together. We ship together.

Last month, I spent two hours with an engineer debating whether responses were “helpful” or not. We disagreed on a lot of them. That taught me I needed to start collaborating on development with my AI engineers, not just handing off requirements.
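One simple way to turn that disagreement into a number is a raw agreement rate over a shared label set. A sketch, assuming you and the engineer both labeled the same responses:

```python
# Measure PM/engineer agreement on "helpful" labels for the same responses.
pm_labels  = ["helpful", "helpful", "not helpful", "helpful", "not helpful"]
eng_labels = ["helpful", "not helpful", "not helpful", "helpful", "helpful"]

agreements = sum(p == e for p, e in zip(pm_labels, eng_labels))
rate = agreements / len(pm_labels)
print(f"agreement: {rate:.0%}")  # low agreement means ambiguous eval criteria
```

If agreement is low, the fix is usually a tighter rubric, not more labels.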

Start collaborating differently:

  • Next feature: Ask to join a model evaluation session.
  • Offer assistance in labeling test data.
  • Share customer feedback in the context of eval metrics.
  • Celebrate eval improvements like you would celebrate feature launches.

Your five-week transition plan

Week 1: Tool Setup

  • Install Cursor.
  • Get access to your company’s LLM playground.
  • Find out where your AI logs/traces live.
  • Build a small prototype (it took me three hours to build my first prototype).

Week 2: Observability

  • Find five AI interactions in the products you use.
  • Document what you think happened versus what actually happened.
  • Share findings with an AI engineer for feedback.

Week 3: Evals

  • Create your first 20-example eval set.
  • Score an existing feature.
  • Propose an improvement based on the points.

Week 4: Collaboration

  • Join an engineering model review.
  • Volunteer to label 50 examples.
  • Frame your next feature request as eval criteria.

Week 5: Recap

  • Take your learnings from prototyping and fold them into a production proposal.
  • Set the bar with evals.
  • Use your AI intuition for iteration—which knobs should you turn?

The inconvenient truth

Here’s what I wish someone had told me three years ago: You’ll feel like a newbie again. After years of being the expert in the room, you’ll be the one asking the basic questions. This is exactly where you need to be.

The PMs who succeed in AI are the ones who are comfortable with being uncomfortable. They’re the ones who create bad prototypes, ask “dumb” questions, and treat every confusing model output as a learning opportunity.

Start this week

Don’t wait for the right course, the ideal role, or for AI to be “figured out.” The skills you need are practical, learnable, and immediately applicable.

Pick one thing from this post, commit to doing it this week, and then tell someone what you learned. This way you’ll start sharpening your own feedback loop for AI product management.

The gap between PMs talking about AI and PMs building with AI is smaller than you might think. It is measured not in years of study but in hours of hands-on practice.

See you on the other side.
