Imagine you are prospecting for oil. If you want to maximize your long-term ROI, you don’t start by aggressively drilling the first well you find. Instead, you explore the terrain. You learn about the potential of several oil deposits, identify the biggest one, and only then optimize for extraction and revenue.

In The Lean Startup, Eric Ries urges startups to learn, not optimize. Product teams that optimize too early often overlook the need to generate validated learning about the long-term viability of the product or business. (“A startup has to measure progress against a high bar: evidence that a sustainable business can be built around its products or services,” he writes.)

As with other aspects of The Lean Startup, this is easier said than done. Below, I’ve outlined a few tendencies that can lead to early optimization, along with suggestions for breaking out of these patterns if you detect them.

Sign #1: Believing that the growth of a success metric is proof of the right product 

If a product team believes that metric growth is tantamount to product validation, they tend to over-optimize. Proving that a metric can be driven is like proving that a well can produce oil: it delivers revenue, but it doesn’t say much about the long-term viability of the well.

“Focusing on [revenue] exclusively can lead to failure as surely as ignoring it altogether,” Ries says. Yet even well-intentioned, learning-focused teams can forget this advice, since most teams are goaled on metrics, not learning. And the fear of running out of runway can further fuel the impulse to deliver consistent, small wins.

The more time a team spends directly influencing a metric, the less time they are likely to spend validating critical assumptions. This leads to missed opportunities and undiscovered risk. Consider Ries’s example of a company whose sales team compensates for a product that doesn’t scale: revenue grows, but the product is customized for each customer, which isn’t sustainable in the long run. In this way, short-term metric growth isn’t a reliable indicator of long-term success.

Startup teams should think of metric targets as guides. The goal shouldn’t be to tactically grow metrics, but to learn about the drivers behind them. Instead of implementing tactics to improve activation, for example, teams should spend more time investigating what causes users to activate (and, of course, why). Instead of refining one activation flow, teams should split-test a variety of approaches. And instead of running safe, small experiments that drive incremental growth, teams should run bold experiments that address critical assumptions. With each learning, teams better understand the levers that can be pulled to boost the metric, and it’s that compounding understanding that leads to exponential growth in the long run.

Sign #2: Ignoring the qualitative context around your success metrics 

When over-optimizing, teams often hyperfocus on a business metric without exploring the context around it. As mentioned before, learnings are hard to quantify, but qualitative context can be just as valuable as quantifiable results.

The goal of the Lean Startup is to produce validated learning as fast as possible, and with many new products, the fastest way to learn is to talk to customers. New products are built in spaces of high ambiguity. It’s possible that the advantages and shortcomings of a product are not immediately quantifiable; it’s also possible that some success metrics are misaligned with customer success (i.e., “vanity metrics”). And, depending on the circumstances, there might not be enough traffic to run many statistically significant tests at once. Qualitative research provides direct feedback on how a product works (or doesn’t work), which ultimately speeds up the rate at which teams can learn about new drivers of metric growth.

It sounds a bit cliché at this point, but every effort to improve a metric should be accompanied by qualitative validation to understand why the metric moves. Orient experiments around the assumptions that need to be validated in order to grow the metric, and talk to users to understand whether those assumptions are correct. Look for the patterns of behavior or “aha” moments that lead users to invest their time in the product, or for the blockers that prevent them from accessing that value. By looking for the behavior behind the numbers, we can shed light on the mystery of why our products grow, which leads to better long-term growth strategies.

Sign #3: Your experiments are testing small assumptions

Since optimizing involves experimentation, it can be hard to distinguish optimization from validated learning. But when following The Lean Startup, teams need to test their most fundamental assumptions, not implementation-level ones.

Testing small assumptions can make us feel like we’re on the right track: we’re building, measuring, and learning. But a learning goal such as “how do we improve this landing page to increase sign-ups?” leaves no room to question whether the landing page, or the product itself, is built on untested assumptions.

A better learning objective might be to list your core assumptions about why users sign up. If your product is a suite of education tools, you may assume that users are signing up for the entire platform, when they may only be interested in one feature within it. Instead of testing several versions of a landing page that showcases the whole platform, test a whole-platform landing page against one built around a single feature. Better yet, interview customers who have already signed up and learn why they did. Use those patterns to drive better landing page variants, then test those to find what works best.
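
As a minimal sketch of how such a test could be wired up (the variant names and hash-based bucketing here are illustrative assumptions, not a prescribed implementation), deterministic bucketing keeps each visitor on the same page across visits:

```python
import hashlib

# Hypothetical variants: the whole-platform page vs. a single-feature page.
VARIANTS = ["whole_platform", "single_feature"]

def assign_variant(user_id: str, experiment: str = "landing_page_scope") -> str:
    """Deterministically bucket a user into a landing page variant.

    Hashing (experiment + user_id) keeps assignment stable across visits
    and independent of assignments in other experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

print(assign_variant("user-123"))  # the same user always sees the same page
```

Hashing on the user ID rather than randomizing per request means each visitor’s experience stays consistent for the duration of the test.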

To determine whether an experiment is too small, map the chain of logic leading to each hypothesis. If the hypothesis rests on a bigger, unvalidated assumption, the experiment is probably testing something too small. In the landing page example, the question of how to improve sign-ups rests on a solid understanding of which features and value propositions drive users to sign up. Experimenting with ways to improve one value prop is a small optimization compared to a test across multiple value props.
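
One lightweight way to run this check (a sketch with made-up assumption names, not a tool from the book) is to write the chain down explicitly and flag any hypothesis that rests on an unvalidated parent:

```python
# Each hypothesis records the assumption it rests on and whether that
# assumption has been validated. The entries here are illustrative.
assumptions = {
    "users want the whole platform": {"validated": False, "rests_on": None},
    "headline copy drives sign-ups": {
        "validated": False,
        "rests_on": "users want the whole platform",
    },
}

def safe_to_test(name: str) -> bool:
    """A hypothesis is worth split-testing only if every assumption
    beneath it has already been validated."""
    parent = assumptions[name]["rests_on"]
    return parent is None or (
        assumptions[parent]["validated"] and safe_to_test(parent)
    )

# Testing headline copy now would optimize on top of an untested foundation.
print(safe_to_test("headline copy drives sign-ups"))  # False
```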

Sign #4: Relying on split tests to measure everything

With pressure to test every aspect of the product, it’s easy for teams to get caught up in small experiments. It may seem like more experiments = more learning, but even quick-to-deploy experiments aren’t free. In fact, the rate of learning slows when teams run the wrong experiments too frequently.

Micro-testing worsens the delay because small changes in low-traffic environments take a long time to reach significance. With the small sample sizes that tend to accompany new products, teams either have to wait weeks for significant results or take on risk by making decisions on smaller samples. Neither is a strong position, from a test-rigor perspective or a validated-learning perspective.
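
To make the traffic math concrete, here is a back-of-the-envelope sample-size sketch (the standard two-proportion z-test approximation; the 10% baseline and the effect sizes are illustrative numbers, not from the talk):

```python
from statistics import NormalDist

def sample_size_per_arm(baseline: float, mde: float,
                        alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate users needed per variant to detect an absolute lift
    of `mde` over a `baseline` conversion rate (two-sided z-test)."""
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return int((z_alpha + z_beta) ** 2 * variance / mde ** 2) + 1

# A bold page-level change (10% -> 15%) vs. a small copy tweak (10% -> 11%):
print(sample_size_per_arm(0.10, 0.05))  # ~683 users per arm
print(sample_size_per_arm(0.10, 0.01))  # ~14,749 users per arm
```

Under these illustrative numbers, the copy tweak needs more than twenty times the traffic of the bold page-level change, which is exactly the delay described above.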

Instead of relying only on split-testing to validate the details, it’s usually faster to refine a product by showing a design prototype to a few users before shipping anything live. Save split tests for bigger concepts, which can reach significance faster than smaller experiments. For example, a test between two entirely different landing pages can be powered for a higher minimum detectable effect than a test between two versions of copy on the same page, so it needs far less traffic to return a result.

Wrapping Up / TL;DR

When building a new product, the long-term value of exploration outweighs the short-term benefits of optimization. In the startup space, this can be especially tricky: we often need to resist the impulse to demonstrate short-term success in order to discover the high-leverage solutions that ensure long-term growth. But as you spend less time optimizing and more time exploring, you’ll find that the speed of validated learning scales with the time and investment you put into it. This will allow you to build better products, faster.

About the speaker
David Prentice, Product Manager at CollegeVine

David Prentice is a Product Manager who is happiest when actualizing ambitious visions, fine-tuning high-quality user experiences, streamlining complex interfaces into simple ones, learning by talking with customers, binge-building dashboards, collaborating with cross-functional teams, and shipping products that make a meaningful improvement in the lives of their users. He currently works at CollegeVine, an education startup dedicated to bringing high-quality college guidance to every family, and has led the creation of a new app to help students optimize their college choices. Prior to CollegeVine, he managed brand, platform, and research teams at two of the world’s largest online travel companies. PM-life aside, David is a music, art, and history nerd who lives in Boston with his girlfriend and three cats.
