A customer’s request to tag an AI teammate in a Google Doc comment highlighted uncertainty about the product's actual behavior.
The speaker acknowledges not always knowing what the product can do, which inspired the talk.
This uncertainty is due both to the unpredictable nature of large language models (LLMs) and the limitless ways customers might use a flexible, agent-based product.
Evolving Product Management for LLM-based Products 03:07
Product management is undergoing a major shift due to the opacity and emergent behavior of AI-based systems.
Traditional product development assumed a well-understood technical foundation and clearly defined user boundaries; now both are unclear.
Products built atop LLMs have unknown capabilities, and open-ended user interfaces (like free textboxes) invite unpredictable uses.
Rethinking Feature Design: Affordances, Not Requirements 06:04
Shift focus from specifying exact requirements to outlining affordances: what the agent is allowed or able to do.
Product managers need to define building blocks and enable emergent behaviors rather than trying to predict every possible outcome.
Behaviors emerge unpredictably, so discovering functionality becomes an ongoing process.
Emergent Functionality and Communication Challenges 07:14
The job of product managers grows to include identifying new, unexpected capabilities as they arise in use.
It’s hard to communicate or specify emergent behaviors with traditional tools like PRDs or Figma.
Evals (evaluation frameworks) are used to test and measure the probabilistic outcomes of AI agents, such as whether an AI responds with the right tone or style.
Evals become a living specification for what the product does and can serve as an ongoing measure of agent performance.
Product people should engage with evals directly for better understanding and documentation of product behaviors.
Prototyping and “vibe coding” (experimenting quickly to see how an agent feels in real use) are critical because the right experience is hard to predict on paper.
User feedback often highlights unexpected annoyances or delights that specs would miss, reinforcing the need for fast iteration.
Testing AI agents is about discovering what they actually do, not just verifying pre-defined requirements.
Traditional bug tracking struggles to classify issues in probabilistic systems; the distinction between features and bugs blurs.
Acceptable performance can be defined in probabilistic terms (e.g., 90% success on key behaviors), and thresholds in evals serve as new criteria for shipping.
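The eval-as-shipping-gate idea above can be sketched as a tiny harness: run each key behavior several times, compute a pass rate, and compare it to a threshold. Everything here is illustrative; the agent is stubbed and the tone grader is a toy, standing in for a real LLM call and a real judge.

```python
def agent_reply(prompt: str) -> str:
    # Stand-in for a real LLM agent call (hypothetical, not a real API).
    canned = {
        "greet a new user": "Hi there! Happy to help you get started.",
        "decline an out-of-scope request": "Sorry, that's outside what I can do.",
    }
    return canned.get(prompt, "I'm not sure.")

def has_friendly_tone(reply: str) -> bool:
    # Toy grader; a production eval might use an LLM judge or a classifier.
    return any(word in reply.lower() for word in ("happy", "glad", "sorry", "hi"))

def run_eval(cases, grader, trials: int = 5) -> float:
    """Run each case several times and return the overall pass rate,
    since a probabilistic agent can succeed on one run and fail the next."""
    passes = total = 0
    for prompt in cases:
        for _ in range(trials):
            total += 1
            if grader(agent_reply(prompt)):
                passes += 1
    return passes / total

cases = ["greet a new user", "decline an out-of-scope request"]
pass_rate = run_eval(cases, has_friendly_tone)

SHIP_THRESHOLD = 0.9  # "90% success on key behaviors" as the shipping bar
print(f"pass rate: {pass_rate:.0%}, ship: {pass_rate >= SHIP_THRESHOLD}")
```

The pass-rate-over-repeated-trials structure is the point: the eval doubles as a living specification (the case list says what the agent should do) and as a release criterion (the threshold says how reliably it must do it).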
Redefining Customer Communication and Collaboration 16:04
Traditional product management roles (visionary, honest broker) are harder to play when both current functionality and future direction are uncertain or hard to believe.
The most effective strategy is to position the customer relationship as co-inventing the future, setting expectations for shared discovery and experimentation.
Customers not ready for this collaborative, evolving process may not be the right fit at this stage.
The Future of Product Management with AI Agents 18:13
The speaker expresses excitement and surprise at how quickly new emergent behaviors and capabilities appear as the underlying models improve.
Product management and development disciplines will need to rapidly adapt and let go of many legacy methodologies.
Core product principles remain (e.g., listening to customers), but techniques and processes are being transformed by LLM-based and agent-driven products.