Are you making confident AI product decisions or quietly hoping your roadmap choices hold up a year from now?
As AI moves from experimentation to production-grade systems, product managers are increasingly accountable for decisions that will define their product strategy, technical debt, and competitive position for the next 12 to 24 months. In this episode, our Virtual Speaker Series features Auditoria.ai Director of Product Management Abhishek Vyas, who will unpack how top B2B product teams decide when and where to use machine learning, generative AI, and agentic AI, and how to avoid costly missteps along the way.
We’ll dive into:
• What every product manager and product leader must understand about AI to make defensible, long-term roadmap decisions
• A deep dive into agentic AI and real B2B use cases that are already delivering measurable business value
• Designing for user trust in agentic AI: when to automate, when to escalate through human-in-the-loop workflows
• How high-performing PMs communicate effectively with AI and ML engineers to avoid costly execution misalignment
• How to assess AI product risk, limitations, and tradeoffs before they surface as customer or compliance issues
You will leave with concrete decision frameworks, shared language for working with AI and ML teams, and clear guidance you can apply immediately to make and defend AI product decisions.
Join us for new conversations with leading product executives every week. Roll through the highlights of this week’s event below, then head on over to our Events page to see which product leaders will be joining us next week.
Show Notes:
- AI initiatives most often fail due to poor use-case fit, not because of the underlying model choice.
- AI is an umbrella of technologies; machine learning, generative AI, and agentic AI each serve fundamentally different problem types.
- Classic machine learning is best suited for structured data with clear labels, such as forecasting and predictive analytics.
- Generative AI excels at working with unstructured data, including text generation, summarization, and classification tasks.
- Agentic AI should be used when a system must pursue a goal through multi-step workflows using tools and APIs, often across multiple systems.
- Most real-world B2B products are hybrid systems that combine ML, GenAI, and agentic components rather than relying on a single approach.
- Agentic AI is fundamentally a system design challenge, not a model upgrade; traditional software engineering principles still apply.
- A typical agentic architecture includes orchestration, planning, tool execution, verification, policies, and guardrails working together.
- Agentic AI is most appropriate for workflows that are long, branching, and involve high context switching across systems like ERP, CRM, and ticketing tools.
- Non-deterministic user intent is a key signal for agentic AI, differentiating it from rule-based automation and RPA systems.
- ROI from agentic AI must come from real automation and decision execution, not just from drafting or summarizing content.
- Agentic systems must have bounded actions enforced by tools, policies, approvals, and human-in-the-loop mechanisms.
- If actions cannot be safely guardrailed or explained, the agent should not be shipped, as action failures are more dangerous than incorrect answers.
- Agentic AI should not be used for simple content generation or deterministic workflows that can be solved with GenAI or RPA.
- Product managers take on increased responsibility with agentic AI because systems act on behalf of users, not just advise them.
- Trust in agentic systems is incremental and should be built through an autonomy ladder, from read-only insights to supervised execution to limited autonomy.
- Clear escalation points are essential, especially for financial thresholds, irreversible actions, low-confidence outputs, and compliance-sensitive decisions.
- Auditability, observability, and telemetry are mandatory in B2B agentic systems to support compliance, cost tracking, and defensibility.
- PMs must define contracts, inputs, outputs, permissions, and evaluation criteria in close collaboration with AI and platform engineers.
- The core principle is to choose AI capabilities based on the problem, not the hype, and design trust, safety, and risk boundaries from day one.
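To make the bounded-action and autonomy-ladder ideas above concrete, here is a minimal illustrative sketch (not from the talk) of a policy gate that decides whether an agent may execute an action, must escalate to a human, or is blocked outright. All tool names, thresholds, and confidence values below are hypothetical assumptions for illustration.

```python
# Illustrative sketch of an autonomy-ladder gate for an agentic system.
# Tool names, thresholds, and fields are hypothetical examples.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    AUTO_EXECUTE = "auto_execute"      # limited autonomy: act, then log
    HUMAN_APPROVAL = "human_approval"  # supervised execution: escalate
    BLOCKED = "blocked"                # outside the agent's bounded actions


@dataclass
class ProposedAction:
    tool: str          # e.g. "erp.create_payment" (hypothetical tool id)
    amount: float      # financial impact, if any
    reversible: bool   # can the action be undone?
    confidence: float  # model confidence in [0, 1]


# Hypothetical policy table: which tools the agent may ever call, the
# financial threshold above which a human must approve, and the minimum
# confidence below which the agent must escalate rather than act.
ALLOWED_TOOLS = {"erp.create_payment", "crm.update_record", "ticket.close"}
APPROVAL_THRESHOLD = 1_000.00
MIN_CONFIDENCE = 0.8


def gate(action: ProposedAction) -> Decision:
    """Apply the bounded-action policy before any tool call."""
    if action.tool not in ALLOWED_TOOLS:
        return Decision.BLOCKED
    # Escalate irreversible, high-value, or low-confidence actions
    # through a human-in-the-loop checkpoint.
    if (not action.reversible
            or action.amount > APPROVAL_THRESHOLD
            or action.confidence < MIN_CONFIDENCE):
        return Decision.HUMAN_APPROVAL
    return Decision.AUTO_EXECUTE
```

Under this sketch, a $5,000 payment routes to human approval even at high confidence, a routine high-confidence ticket closure executes autonomously, and any tool outside the allow-list is blocked regardless of confidence.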
About the host
Products that Count is a 501(c)3 nonprofit that helps everyone build great products. It celebrates product excellence through coveted Awards that inspire 500,000+ product managers and honor great products and the professionals responsible for their success. It accelerates the careers and rise to the C-suite of more than 30% of all Product Managers globally by providing exceptional programming – including award-winning podcasts and popular newsletters – for free. It acts as a trusted advisor to CPOs at Fortune 1000 companies, and publishes key insights from innovative companies, like Capgemini, SoFi, and Amplitude, that turn product success into business success.