February 6, 2018

It’s a New Age for Product Managers

Gaurav Hardikar

I had the pleasure of hearing Steven Sinofsky (responsible for leading the Windows, Office, IE, and SkyDrive products at various points of his MSFT career) speak at the first Products that Count Leading Teams Retreat a few years ago. We in the Bay Area sometimes forget that some of the first Product Management roles were constructed in Seattle by the company that dominated the software industry from the advent of the digital revolution—Microsoft.

When Steven spoke, I distinctly remember learning that the Product Management function at Microsoft was initially formed to bring order and prioritization to engineering tasks. I bring this up because it is genuinely hard to imagine this is where the product management function began, given that in the Bay we are taught that Product Managers are the “CEO” of the product.

You have to ask yourself—how did this major transition in the role happen, and what has changed in the twenty-odd years between the formation of the Office Product Unit in 1994 and now? The obvious answer is the digital revolution, Web 2.0, and mobile apps, which forced additional complexity and increased speed in software development, among other things. Here we see Product Management gaining other responsibilities beyond the prioritization of development tasks.

But what else changed that turned the entire Product Management function on its head? As the data-driven leaders we are, we know this is due to the availability of both data and analysis.

In the absence of constantly flowing data and metrics, Product Managers in the past (even as recently as 5–10 years ago) were forced to make decisions on intuition, experience, and limited customer data or feedback. Compare that to today: especially in the consumer product world, it is a genuine risk to make a decision without an A/B test.

Data Age

In the new age of Product Managers, it is important that your decision making be not only data-driven but data-sound. Product Managers today are faced with data points and feedback from almost every source possible: consumer feedback (both qualitative and quantitative), clients, co-workers, direct product metrics, and test data. To push data-sound decision making, you need to truly understand data framework concepts, learn how data is pulled, and thereby understand the process the data analyst performs (if you have one to work with).

Let me give you two scenarios to briefly think about (and these are real scenarios I’ve witnessed).

Scenario 1:

  • PM is working with Data Analyst, and they have launched a feature that is showing a +15% increase in conversion in the first week.
  • After 2 weeks of results, Data Analyst makes the call that the test is statistically significant with enough sample size, and gives the go-ahead to PM to release the variation to 100%.
  • Data Analyst notes to himself that there was a change in a specific user segment, but that was expected because it was the “whale” (power user) segment that grew conversion rates the most. He doesn’t mention it to the PM, as it feels immaterial to the go-ahead decision. The PM accepts the results as is and moves forward with pushing to 100%.
  • 2 weeks later, the temporary conversion rate lift has dropped back to usual rates. Both the PM and Data Analyst are surprised, and brush it off as something that happens with A/B tests sometimes.
  • In a random meeting with other stakeholders, the Product Marketing Manager notes that a marketing campaign for our power users just ended a few days ago. Suddenly, the PM’s eyes widen: this may have something to do with the conversion rate variance that was seen during and after the test.
  • PM immediately sends this to the data analyst… after which, the hypothesis is confirmed. The increase in conversion was related to a combination of the test variation and the marketing promotion for the product’s power users.
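The confound in Scenario 1 is easy to miss in a single top-line number, because the overall lift can be carried almost entirely by one segment. Here is a minimal sketch of the kind of segment-level breakdown that would have surfaced it earlier — the segment names and conversion counts are entirely hypothetical, purely for illustration:

```python
# Hypothetical A/B test results, broken down by user segment.
# Each entry: (control_conversions, control_users, variant_conversions, variant_users)
segments = {
    "whale":  (300, 1000, 540, 1000),   # power users, also targeted by the campaign
    "casual": (400, 4000, 410, 4000),
}

def lift(cc, cn, vc, vn):
    """Relative lift of the variant's conversion rate over control's."""
    return (vc / vn - cc / cn) / (cc / cn)

for name, (cc, cn, vc, vn) in segments.items():
    print(f"{name}: control {cc / cn:.1%}, variant {vc / vn:.1%}, lift {lift(cc, cn, vc, vn):+.1%}")

# The top-line lift looks healthy, but it comes almost entirely from one segment.
cc = sum(s[0] for s in segments.values())
cn = sum(s[1] for s in segments.values())
vc = sum(s[2] for s in segments.values())
vn = sum(s[3] for s in segments.values())
print(f"overall: control {cc / cn:.1%}, variant {vc / vn:.1%}, lift {lift(cc, cn, vc, vn):+.1%}")
```

With these made-up numbers, the “whale” segment shows a far larger lift than everyone else — exactly the kind of per-segment anomaly worth raising with the PM before calling the test.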

Scenario 2:

  • PM is working with Data Analyst, and they have launched a feature that is showing a +15% increase in conversion in the first week.
  • After 2 weeks of results, Data Analyst makes the call that the test is statistically significant with enough sample size, and shares the data results with the PM with the caveat that there was a noted change in a specific user segment, but that was expected because it was the “whale” (power users) that grew conversion rates the most.
  • The PM reviews the data and realizes that the change in this specific segment was no fluke. After a quick discussion with Marketing, the PM realizes this is because of a specific campaign that was targeted to power users.
  • The PM and Data Analyst decide to wait until the campaign ends in 1 more week before making a call on the experiment.
  • A few days after the marketing campaign ends, the PM and Data Analyst notice that the conversion lift continues to drop, and after 1 week it stabilizes around a 1–2% increase.
  • The PM and Data Analyst make the call to end the experiment and go back to the drawing board with the rest of the team on why this marketing campaign was so effective with this specific segment of users, and how to replicate it with a product feature.
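For the “statistically significant with enough sample size” call the Data Analyst makes in both scenarios, the standard tool for conversion rates is a two-proportion z-test. A stdlib-only sketch, with hypothetical counts loosely matching the 1–2% residual lift from Scenario 2:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test: is the variant's conversion
    rate (b) different from control's (a)? Returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF expressed via erf, so no external stats library is needed.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: a ~2% relative lift on a 10% baseline conversion rate.
z, p = two_proportion_z_test(5000, 50000, 5100, 50000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the residual lift does not reach significance at even 50,000 users per arm — which is why waiting out the campaign before making the call, as in Scenario 2, is the data-sound move.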

What do you notice as the difference between the two scenarios? Part of it reflects a change in communication patterns between the Data Analyst and Product Manager, but it also speaks to a larger difference in expectations for the PM’s “role” versus the Data Analyst’s.

I always reflect back on my experience at Accenture Strategy and my first role at Trulia as a Product Analyst. There is a very big difference between the traditional Business Analysis and Data Analysis functions in the valley. Business Analysts are tasked to ask why and how, focusing on impact to the business, while Data Analysts tend to focus on what and why, specifically on the data frameworks and methods used to get to the end analysis. Ideally, you want the PM and Data Analyst to collaborate to gather the what, why, and how for the business impact while maintaining industry-level data integrity and analysis standards.

This seemingly minute difference between what data-driven and data-sound represent is reflective of a larger industry trend in which generalist PMs make decisions from analysis performed by Data Teams while lacking full insight into why and how that analysis was performed.

My advice – building a data-sound decision framework is as important as being data-driven in product management.

About the Author

Gaurav Hardikar is Director of Product Management at Shopkick, an omni-channel commerce and loyalty product that rewards users for their daily shopping habits. In his B2B2C role, he focuses on three main “consumers” – the users, Shopkick clients, and Shopkick as a business. This materializes in product ownership of all ad products, revenue, and the kicks earned side of the user journey. This means Gaurav is always thinking about how users can get “kicks” or rewards for purchasing specific items at everyday stores, as well as how each of these kick earning opportunities generates revenue for Shopkick and delivers ROI for brand and retail clients. With a background in Accenture Strategy, Trulia, and Zillow Group, Gaurav is passionate about delivering bottom-line growth while building consumer products that delight their users.

Products That Count is one of the largest communities of product managers, leaders and entrepreneurs in the world. It provides insider access to founders and C-level execs such as Netflix Product VP, Crossing the Chasm legendary author, Trulia Founder, or Lyft CMO, via speaker series, podcasts, and invite-only executive retreats. Partners include WeChat, Yelp, LeanPlum, Pragmatic Marketing, and StartupDigest. Its venture arm, Mighty Capital, invests in companies building products that count once they have demonstrated product/market fit.