When Steven spoke, what I remember most distinctly is that the Product Management function at Microsoft was initially formed to bring order and prioritization to engineering tasks. I bring this up because it is genuinely hard to imagine that this is where product management began, given that in the Bay Area we are taught that Product Managers are the “CEO” of the product.

You have to ask yourself: how did this major transition in the role happen, and what has changed in the twenty-odd years between when the Office Product Unit was formed in 1994 and now? The obvious answer is the digital revolution, Web 2.0, and mobile apps, which forced additional complexity and increased speed into software development, among other things. Here we see that Product Management gained other responsibilities beyond the prioritization of development tasks.

But what else changed that turned the entire Product Management function on its head? As the data-driven leaders we are, we know the answer: the availability of both data and the means to analyze it.

In the absence of constantly flowing data and metrics, Product Managers in the past (even as recently as 5–10 years ago) were forced to make decisions based on intuition, experience, and limited customer data or feedback. Compare that to today, when, especially in the consumer product world, it is a genuine risk to make a decision without an A/B test.

Data Age

In the new age of Product Management, it is important for your decisions to be not only data-driven but data-sound. Product Managers today are faced with data points and feedback from almost every source possible: consumer feedback (both qualitative and quantitative), clients, co-workers, direct product metrics, and test data. To practice data-sound decision making, you need to truly understand data framework concepts, learn how the data is pulled, and thereby understand the process the data analyst performs (if you have one to work with).
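As one concrete example of what sits behind a significance call: analysts commonly use a two-proportion z-test to compare conversion rates between control and variant. The sketch below is a minimal illustration with invented numbers; real experimentation platforms layer on corrections (peeking, multiple comparisons, segment checks) that this deliberately omits.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for a conversion-rate difference (illustrative)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both groups convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b / p_a - 1, z, p_value

# Invented numbers: 5% baseline conversion, a +15% relative lift in the variant
lift, z, p = two_proportion_z(1000, 20000, 1150, 20000)
print(f"lift={lift:+.1%}  z={z:.2f}  p={p:.4f}")
```

The point of understanding this mechanic is knowing what the test does *not* check: it says nothing about confounds like a concurrent marketing campaign inflating one segment.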

Let me give you two scenarios to briefly think about (and these are real scenarios I’ve witnessed).

Scenario 1:

  • PM is working with Data Analyst, and they have launched a feature that is showing a +15% increase in conversion in the first week.
  • After 2 weeks of results, Data Analyst makes the call that the test is statistically significant with enough sample size, and gives the go-ahead to PM to release the variation to 100%.
  • Data Analyst notes to himself that there was a change in a specific user segment, but that was expected because it was the “whales” (power users) that grew conversion rates the most. He doesn’t mention it to the PM, as it feels immaterial to the go-ahead decision. The PM accepts the results as is and moves forward with pushing to 100%.
  • 2 weeks later, the conversion rate has dropped back to its usual level. Both the PM and Data Analyst are surprised, and brush it off as something that happens with A/B tests sometimes.
  • In a random meeting with other stakeholders, the Product Marketing Manager notes that a marketing campaign for the product’s power users ended just a few days ago. Suddenly, the PM’s eyes widen: this may have something to do with the conversion rate variance seen during and after the test.
  • PM immediately flags this to the Data Analyst, and the hypothesis is confirmed: the increase in conversion was a combination of the test variation and the marketing promotion targeting the product’s power users.

Scenario 2:

  • PM is working with Data Analyst, and they have launched a feature that is showing a +15% increase in conversion in the first week.
  • After 2 weeks of results, Data Analyst makes the call that the test is statistically significant with enough sample size, and shares the data results with the PM with the caveat that there was a noted change in a specific user segment, but that was expected because it was the “whale” (power users) that grew conversion rates the most.
  • The PM reviews the data and realizes that the change in this specific segment was no fluke. After a quick discussion with Marketing, the PM realizes this is because of a specific campaign that was targeted to power users.
  • The PM and Data Analyst decide to wait until the campaign ends in 1 more week before making a call on the experiment.
  • A few days after the marketing campaign ends, the PM and Data Analyst notice that conversion continues to drop, and after 1 week it stabilizes at around a 1–2% increase.
  • The PM and Data Analyst make the call to end the experiment and go back to the drawing board with the rest of the team on why this marketing campaign was so effective with this specific segment of users, and how to replicate that effect with a product feature.
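The confound in both scenarios becomes obvious once results are broken out by segment: an impressive overall lift can be driven almost entirely by one group. The numbers below are invented to mirror the story (a +15% overall lift concentrated in the “whale” segment); the segment names and figures are illustrative, not from the talk.

```python
# Hypothetical A/B results split by segment. The "whale" segment is the
# one a concurrent marketing campaign happened to be targeting.
results = {
    #            control (conv, n)   variant (conv, n)
    "casual": ((900, 18000), (940, 18000)),
    "whale":  ((100,  2000), (210,  2000)),
}

def lift(control, variant):
    """Relative conversion lift of variant over control."""
    (c_conv, c_n), (v_conv, v_n) = control, variant
    return (v_conv / v_n) / (c_conv / c_n) - 1

# Sum (conv, n) pairs across segments to get the blended totals
overall_control = tuple(map(sum, zip(*(r[0] for r in results.values()))))
overall_variant = tuple(map(sum, zip(*(r[1] for r in results.values()))))

print(f"overall lift: {lift(overall_control, overall_variant):+.1%}")
for segment, (control, variant) in results.items():
    print(f"{segment:>7} lift: {lift(control, variant):+.1%}")
```

Run as-is, this prints an overall lift of +15.0% but segment lifts of +4.4% (casual) and +110.0% (whale): exactly the pattern that should prompt the Scenario 2 conversation before anything ships to 100%.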

What do you notice as the difference between the two scenarios? Part of it reflects a change in communication patterns between the Data Analyst and Product Manager, but it also speaks to a larger difference in expectations for what the PM’s “role” is versus the Data Analyst’s.

I always reflect back on my experience at Accenture Strategy and my first role at Trulia as a Product Analyst. There is a very big difference between the traditional Business Analysis function and the Data Analysis function in the Valley. Business Analysts are tasked with asking why and how, focusing on impact to the business, while Data Analysts tend to focus on what and why, specifically on the data frameworks and methods used to get to the end analysis. Ideally, what you want is the PM and Data Analyst collaborating to gather the what, why, and how of the business impact while maintaining industry-level data integrity and analysis standards.

This seemingly minute difference between what data-driven and data-sound represent is reflective of a larger industry trend where generalist PMs make decisions from data analysis performed by data teams while lacking full insight into why and how that analysis is performed.

My advice: building a data-sound decision framework is as important as being data-driven in product management.

About the speaker
Gaurav Hardikar, Director of Product Management and UX at Brilliant

Gaurav is a Product Leader with more than 5 years of experience across all aspects of Product Management, Design, and Strategy. He is passionate about companies with a mission to better people’s lives in a tangible way, and is a large proponent of product management through collaboration and emotional intelligence.
