Product Management Strategy + Continuous Discovery Habits - A Guide
How to integrate your product management strategy with continuous discovery
Although product management and discovery can seem separate, in reality, product designers and researchers need to communicate with product managers and developers every single day.
If you’re all on the same team, wouldn’t it be great if you could all focus on discovery first, then move to design and development? The truth is that discovery and development happen concurrently.
Quick author credit: We'd like to thank Michael Orland for first chatting to us about these concepts. He has a unique perspective into where continuous discovery habits meet your product roadmap, and how that connects to scalable revenue growth. Michael lives and breathes this subject as he was previously a Fellow at Seedcamp, Venture Partner at Entrepreneur First and the CRO at Songkick.
What do we mean by product management strategy?
Members of the startup world have a standard understanding of product management and product strategy. To be strategic about your product management, however, you will need to take a bird’s-eye view of the problems you are trying to solve. Not just in your product team or squad, but across the business as a whole.
If you can zoom in on how to solve an individual’s pains, and zoom out to see how that aligns with the company’s overall purpose and mission - you’re probably a strategic product manager. You’re probably already a senior product manager, or, by implementing what you’ll read below, you’ll soon be well on your way.
The other facet of a strategic product manager is delivering reliable value-adds to the business. This isn’t just delivering product features (as you well know); it’s testing assumptions via experiments, and failing.
If you aren’t failing regularly, you probably aren’t thinking widely enough about how to solve the problems your customers truly face. That is, you are playing it safe.
Playing it safe probably won't get you fired, but it will hold you back from the sort of success that comes with controlling high-risk, high-reward situations.
So, strategic product managers tend to be bold and know how to challenge assumptions effectively. That is, by backing up their challenges with cold, hard data.
Who is this advice for? We’ve focused the advice here on a pre-Series A SaaS startup, so keep that in mind. We think this advice has the potential to save the life of your startup, even though the framework and processes outlined below might simply be thrown out the window when a business leaves the ‘startup culture’ phase behind.
Tip: Get a blank page set up to take your own notes from each section below, tailored to your org and product.
How should you structure product experimentation?
You can’t just pull out a ‘dual track agile’ template, get your product squads together and say “right, let’s be more iterative, more cyclical, and build and launch more ideas” OR “let’s create little mini-waterfalls by doing discovery sprints” (as Marty Cagan of Silicon Valley Product Group says).
You have to be even more deliberate. Yes, sometimes you can adopt a new methodology and, on the surface, experience serendipity and ‘luck out’ with some high quality insights. Being deliberate will boost your chances though. After all, deliberate practice is the best form of practice.
In a world of data-driven product management, however, you should look at optimising your chances of making your own luck, and your chances of success.
That’s not to say you need to do everything in extremely controlled conditions as that’s just not practical. You just need to think about product experimentation more deliberately as a way of generating and validating your ideas in the fastest and cheapest way you can.
Before you can be great at this, you (and perhaps your whole team) have to be really honest about:
- How much you know
- How confident you are in knowing it (easier said than done)
Once you have mapped that out (here’s an assumption mapping template in Miro), you can design product experiments which will improve your confidence, or broaden what you know.
Don’t confuse "deciding what to build" (experimentation/discovery) with actually "building it" (execution/development)
We love this diagram, as it shows how you can run discovery and delivery tracks in parallel.
What’s happening here:
- On the top track, in green - you’re trying to prove and disprove hypotheses
- On the bottom track, in blue - you’re trying to ship product
- The middle line is punctured by the approved hypotheses, which are ‘shunted’ down to be built and delivered.
Some other assumptions are baked into this chart:
- You cannot predict the size of the ideas you’ll become confident in, or the time it will take to discover them.
- You are following a development framework using fixed time periods (sprints) and each sprint ends with shippable product.
- The grey line represents the passage of time, in pursuit of a common purpose.
This common purpose is pitched as ‘measure and learn’ in another version of this framework below.
If you take just one powerful idea away from this article: Both streams must be united under a common objective (e.g. increase Daily Active Users / DAUs by 10%) in order to keep this as ‘dual track’ rather than ‘duel tracks’.
That is, dual elements of achieving the same objective, rather than two tracks duelling for time and resources over slightly different objectives or something vague like 'measure and learn'. The first two dual track diagrams felt like they could belong to multiple teams.
If we edit the above and simplify, we might have something that looks like this:
This simpler version is really to help visualise the work of just one team or squad (though it can work for more). I know what you might be thinking here, but yes, your one team can run discovery alongside delivery. I've seen this in practice at Simply Business, with teams coached by Teresa Torres herself. In fact, you'll find those examples in her book - Continuous Discovery Habits.
Discovery needs to have a mandate or objective which builds towards a strategic objective. That is, each hypothesis should be one possible way of reaching that joint mandate or objective.
Note: The other core concept to marry up with this is opportunity solution tree mapping (Miro template here). If you haven’t come across this concept yet - make a quick note on your to-do list to check it out, along with this accompanying article.
Now, as all strategic product managers know, some hypotheses aren’t going to work at all. E.g. Hypothesis 3 might feed nothing to the delivery line below it, or it might get you halfway to that 10% DAU increase you need.
If you had two teams it might look like this:
How do you set good product experimentation objectives?
Having a way to measure whether you’re successful is very important. Excuse the mathematical analogy, but you can’t start a gradient descent without knowing which way is down and how to tell that you’re moving.
You have to be careful of optimising towards merely local minima and maxima.
You must incorporate your overall understanding of the user, product, and company’s goals to fight this.
You also don’t necessarily want to purely optimise for one metric, as there are always incentives to game your own system.
Be aware of constraints. E.g. “We want to maximise median time on site using feature X, but not reduce our monthly active user count by more than 5%” (because you can very easily improve median use time by just deleting your least active user accounts!).
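If it helps to make that concrete, here’s a minimal Python sketch of a primary-metric-plus-guardrail check. Every metric name, threshold and number here is hypothetical - swap in your own:

```python
# A minimal sketch of accepting an experiment only when the primary metric
# improves AND the guardrail metric hasn't degraded beyond the allowed limit.
# All names and numbers are hypothetical.

def passes_guardrails(baseline: dict, variant: dict) -> bool:
    primary_improved = (
        variant["median_time_on_site"] > baseline["median_time_on_site"]
    )
    mau_drop = 1 - variant["monthly_active_users"] / baseline["monthly_active_users"]
    return primary_improved and mau_drop <= 0.05  # max 5% MAU drop allowed

baseline = {"median_time_on_site": 312.0, "monthly_active_users": 10_000}
variant = {"median_time_on_site": 348.0, "monthly_active_users": 9_700}

print(passes_guardrails(baseline, variant))  # True: time on site up, only 3% MAU drop
```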
So you’ve got your objective, you’ve shipped product in accordance with your discovery work, and weeks go by, then months, and maybe you’ve got just over halfway to your goal - but you need 10%. When do you call it quits?
How do you know when you’ve finished?
There are a few ways to do this, and there are no hard and fast rules of what is right and what’s wrong here as long as the business metrics are moving in the right direction!
So, you could:
- Set a simple finish line (if you’re confident you can hit it)
- You can time box it (e.g. “get this number up as much as possible in one month”)
- You could have a hybrid approach (e.g. set a minimum target to achieve, whatever it takes, then timebox a stretch goal. This is useful if you’ve no idea how easy it’s going to be to blow past the minimum target)
- Just be honest when you’ve run out of ideas and opportunities to test and admit that you won’t finish - that’s OK!
Quick recap:
- You unite product discovery & product development with an overarching goal, usually tied to metrics
- You run a series of experiments to test which ideas for achieving that goal have the best chance of success
- You build the ideas that you’ve demonstrated will reach the goal, and measure against that objective
- You know when to call it quits, and move on to another mission
Remember, product discovery and experimentation just try to prove/disprove a hypothesis
You should always be looking for the least expensive way to prove or disprove it, so you can maximise your learnings (and therefore your chances of success).
You need to be comfortable writing throwaway code so that you can test quickly and avoid tech debt
You need to remove the temptation to use your experimental code in production. Don’t get caught up in trying to make that code even 80% good. Be fast, be hacky, learn quickly.
Also - and to hammer this point home - don’t even write code if you can avoid it. Instead, you can:
- Survey or speak to users
- Use a no-code website builder (like Webflow) and share it with the world (if you build it, do they come?)
- Use prototyping tools like proto.io or Figma.
- Analyse existing usage metrics/data
- Use dead links or false door techniques (using 404s as interest metrics) - see the sketch after this list
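To illustrate that last technique, here’s a minimal false door sketch, assuming Flask purely for illustration - the route, copy and in-memory counter are all invented, and in practice you’d fire an event to your analytics tool instead:

```python
# A minimal 'false door' sketch: the feature doesn't exist yet, but we
# count how many users click through to it before showing a 404-style page.
# Flask is assumed here purely for illustration.

from collections import Counter
from flask import Flask

app = Flask(__name__)
interest = Counter()  # in production, send this signal to your analytics tool

@app.route("/reports/export-to-excel")
def false_door():
    interest["export-to-excel"] += 1  # record the click as an interest signal
    return "This feature isn't ready yet - thanks for your interest!", 404

if __name__ == "__main__":
    app.run()
```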
Principles to keep in mind for assumption testing
- Scrappy ways of learning are always better than expensive ways of learning
- You need to weigh up the value of your possible learnings against damage to your brand or the user experience from the test
- Only once a hypothesis is proven can you build it ‘right’ with sustainability and scalability in mind
Allowing for serendipity in product experimentation
The clear objectives your teams have may also have the unintended consequence of putting blinders on them. Consequently, they may miss broader discovery opportunities.
So, while your discovery track is following an objective like the rest of the team, you don’t want to remove your ‘evergreen discovery and research track’.
The importance of ‘evergreen discovery’ tracks
The key thing is to have a continuous discovery work stream running along independently of product team objectives. This is how you maintain your chance of serendipitous discovery.
Some discovery doesn’t require a measurable product-driven objective like we have discussed already.
Evergreen discovery improves your understanding of the whole of your customer and their problem space. Evergreen discovery is ongoing user surveys, ethnographic research, persona refinement, live test sessions on your whole product, and everything else already mentioned.
The aim of evergreen discovery is broad learnings, not tied to objectives.
This is also where you can learn from outside the normal lean-build process, asking questions like:
- How does this new development in [your industry] or [relevant government regulation], unlock a certain feature set for you?
- How does the consolidation of an industry or market segment affect your product strategy?
- What can you learn from the failure of other startups such as Atrium?
- What are you hearing from your sales and success teams? For example, several customers have been asking if they can use only a subset of your product for a discounted price.
Exactly how your business does this is down to its culture. How you do it doesn’t matter, as long as you do it and it works for you. Don’t let this fall through the cracks.
Some ideas are formal and structured, some are informal and ongoing. Whether it’s occasional off-sites with users, two remotely recorded user tests a week (with tools like usertesting.com) or quarterly testing weeks where you get your most loyal customers in and treat them like royalty - whatever you like, whatever works, just make sure it’s happening.
Structuring your product discovery results
People at your company need to know the difference between “this is a hypothesis we are testing” vs “this is now proven and we are building it”
This requires that you create clear and explicit communication around product discovery.
Everyone should know when you’re releasing something as a test vs a feature with proven value.
In pre-Series A companies, product managers, team leads (heck, maybe even the whole org) should know what all of each other's targets are and what parts of the product everyone is touching. You don’t want anyone working at cross purposes, or repeating previous experiments.
You can learn a lot from each other, and especially from the people who have been at the startup the longest. They probably remember experiments which happened but were never documented. Speaking of which - you still need to document those, don’t you? Get those experimental results out into the open.
Doing so may prompt you to:
Revisit what you already ‘know’
Be aware that you aren’t solving static problems where you reach an ‘understanding level’ of 80% or so which then stays there over time. As the market is always evolving and moving on, your confidence must always be treated as decreasing if you have done nothing to revisit it.
Of course, you could and should review previous learnings periodically as part of your continuous discovery flow. How often you do this is probably a function of the rate at which your market changes. E.g. if you were in a crypto start-up you’d do this more frequently than if you were building CRM tools.
You have to keep learning about your moving target. The continuous-ness is the most important part.
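One way to operationalise this - our own suggestion, not a standard formula - is to decay each recorded confidence level by how long ago you last validated it. A minimal sketch, where the half-life is a made-up knob you’d tune to how fast your market moves:

```python
# A hedged sketch of 'decaying confidence': discount each learning's
# confidence by the time since you last validated it. The half-life
# values below are invented - tune them to your market's pace of change.

def decayed_confidence(original: float, months_since_validated: float,
                       half_life_months: float = 12.0) -> float:
    """Exponential decay: confidence halves every `half_life_months`."""
    return original * 0.5 ** (months_since_validated / half_life_months)

# A crypto startup might use a 3-month half-life; a CRM vendor maybe 24.
print(round(decayed_confidence(0.80, 12), 2))  # 0.4 after a year untouched
if decayed_confidence(0.80, 12) < 0.5:
    print("Time to revisit this assumption")
```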
Continuously review, structure and share your experimental results
So you’ve got your objective-led discovery streams, your evergreen discovery streams and your shared language and understanding.
Now you need to pencil in a background process (every 2 weeks, or every month or so) where you collect and review everything that you’ve learned.
It’s important to catalog, sort and order your
- hypotheses
- experimental results
- confidence levels
and this should all be discussed and processed separately from your product roadmap discussions.
In this way, your discovery and feature delivery streams are tightly interwoven (and tied to a joint objective) but can remain independent.
If you can keep all the experimental results in one place, then when you need to zoom out from the team objective for a meeting or two, that’s all there for you to discuss as a group.
Some companies refer to this centralized board of hypotheses, experimental results and confidence levels as “The Lab” which feels apt to us too.
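If it helps, here’s a minimal sketch of what one record in ‘The Lab’ might hold. The field names are our own invention - adapt them to your team’s vocabulary and tooling:

```python
# A minimal sketch of a 'Lab' record: hypothesis, experimental result and
# confidence level in one place. All field names are hypothetical.

from dataclasses import dataclass, field
from datetime import date
from enum import Enum

class Status(Enum):
    UNTESTED = "untested"
    TESTING = "testing"
    PROVEN = "proven"        # ready to be 'shunted' down to delivery
    DISPROVEN = "disproven"

@dataclass
class LabEntry:
    hypothesis: str                  # e.g. "An onboarding checklist lifts DAUs"
    objective: str                   # the shared objective it serves
    experiment: str                  # the cheapest test you could run
    result: str = ""                 # what you actually observed
    confidence: float = 0.0          # 0.0-1.0; revisit and decay over time
    status: Status = Status.UNTESTED
    last_reviewed: date = field(default_factory=date.today)

entry = LabEntry(
    hypothesis="An onboarding checklist lifts DAUs by nudging return visits",
    objective="Increase DAUs by 10%",
    experiment="False-door link to 'Set up your checklist' on the dashboard",
)
```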
Incorporating and responding to customer requests/feedback
How to be customer-led without outsourcing your roadmap.
This is super hard - and the next few paragraphs won't solve this for you, but they'll help.
Quick primer: Prospect theory teaches us
- People value concrete money in front of them more than extra money in the future. That is, they prefer certainty over risk, even if the expected outcome in the uncertain world is greater.
- People value immediacy. In this world, people want things increasingly fast. There’s also the time value of money, where waiting for something feels expensive if the problem could be solved instantly elsewhere.
Combine the two and you can better understand this typical issue:
Customer A says they’ll only purchase or renew on the condition that you say you’ll build and ship feature X.
Product managers must then weigh up these sorts of cognitive biases and effects. You must weigh definite and immediate value (the customer paying you now) against an unpredictable and deferred reward (building things that prospective future customers might pay for).
How to inoculate yourself from over-relying on paying customers’ feedback
In practice it’s hard to work and rework an exhaustive decision tree every time.
A useful rubric here is to think about each new bit of customer information as “training” your mental model of customer needs and your confidence in how the product roadmap meets those needs.
Ask yourself the following rationally-aware questions when weighing up the customer feedback.
- Are they speaking for themselves alone, or have you just heard a sample from a large population?
- How important does it seem to be to them?
- Is this a fundamental change to the product or something minor?
The decision then moves from considering a customer request in isolation to feeding that request into ‘The Lab’ and updating it if further discovery work deems it appropriate.
You must ensure that customer feedback goes through the same validation processes as the rest of your discovery work.
The request as a whole can then be weighted and perhaps factored into the updated roadmap, if it aligns with the company’s tactical and strategic objectives.
If you do decide to shift things around in the roadmap based on new information, then you’re making explicit trade-offs.
Again, what’s important is that you’re reviewing data coming into The Lab regularly.
At least one person should be accountable for carving out time to process the aggregated and structured feedback you’ve collected - and perhaps even reporting it back to the company by way of a weekly / fortnightly demo.
It’s hard to do this individually, but it’s also time consuming (and more expensive) to do with a larger group. You’ll need to find what’s best for you.
Whatever you do, just don’t combine that feedback process with roadmap adjustment. Post-processed customer insights should be ready for a product roadmap discussion at any point.
Teams should be able to get together and say “we need to bump up this feature, but what can we drop back a time unit or two and how will that affect our existing users?”
Your product roadmap checkpoints exist to check that your product strategy still aligns with your company goals, overlaid with the discovery feedback you’ve collected.
Let's dig into the customer feedback issues in a bit more detail:
Example 1 - Customer A - Big money
Customer A is midway through a free trial. They’re on the fence and ask if you would build a new feature, agreeing to convert to your paid product if you do. They’re a BIG potential customer, though at the fringe of your ideal customer segment (i.e. they're non-ICP).
Immediately you should have two questions.
- How representative is this customer of the segment we want to sell to?
- How confident are we that this new feature will actually be useful (had you already been thinking about it?)
Work through those two questions as inputs to your roadmap (the product of the two is some abstract notion of total value created), revise it, and figure out if building this new feature sets you on the right path, or if you will be found guilty of just taking the short-term revenue.
Unfortunately, sometimes you may have to simply take the revenue route!
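As a back-of-the-envelope illustration of ‘the product of the two’: weight the deal by how representative the customer is and how confident you are the feature is broadly useful. All the numbers below are invented:

```python
# A back-of-the-envelope sketch of 'the product of the two'. Every number
# here is hypothetical - the point is the shape of the calculation.

representativeness = 0.3   # Customer A sits at the fringe of your ICP, so low
feature_confidence = 0.6   # you'd already half-considered this feature
deal_value = 50_000        # Customer A's annual contract value

expected_strategic_value = representativeness * feature_confidence * deal_value
print(expected_strategic_value)  # 9000.0 - weigh this against the build cost
```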
Example 2 - Customer B - Early adopter
Customer B, one of your early adopters, complains about slow page load times and hints that they’re not going to renew when their contract is up for renewal in two months. They’re a medium-sized customer - you could afford to lose them but it wouldn’t be great. New sales are still going well and focusing on delivering additional features to close even more sales would compensate for this lost customer, and maybe even unlock new customer segments.
You know from testing the product yourself that site performance is annoying, but you have technical debt which makes a quick fix too expensive.
So how much of an issue is this really?
At this point it’s all about assessing the long-term impact of performance, and you get to play detective for a bit. Customer B has been with you for a while - does performance degrade over time (e.g. does it scale poorly with a specific customer’s corpus)? If so, how much of a retention problem is this?
Have previous customers churned because of this? Ask them! They may not have mentioned it at the time.
Do you see metrics being affected, e.g. a decrease in average session time for users as their requirements become more complex? Did you already have “performance improvements” in your roadmap but low down?
The results of this Watson & Holmes analysis may put performance nearer the top again, or you may decide that this issue is isolated to one customer at the ‘make or break level’ and / or their feedback isn’t really justified.
Two things make this detective work pretty annoying:
- The need to get to an answer quickly
- The need to be thorough enough to assess the future impact (without being so exhaustive that you freeze up)
Good luck!
TL;DR
- Intertwine feature delivery teams with continuous discovery habits, in order to deliver long-term improvements to the lives of your users, and to your bottom line.
- Put in regular sessions to weigh how the ideas spun out of inbound customer feedback, outbound insights and internal ideation align with your vision and company objectives.
- Regularly report on experimental results, distinctly from the actual results of new features - call this ‘The Lab’.
- Allow serendipitous discovery as well as objective-focused discovery.
- Hire great strategic product managers who intuitively understand this.
Since this article was published, we’ve had a few questions, which we’ve taken the time to answer below:
Questions
Q: How is product management typically done wrong in other companies?
A: Nobody goes to university and studies product management. You CAN do that for quite a lot of other disciplines that startups rely on (e.g. technical roles). In product management, however, there are decades’ worth of titans producing content to learn from. Companies like Sales Impact Academy are also working on clear go-to-market training, which is hugely exciting to see. The product management discipline is more recognised and relied upon than ever before. In 2001 it was a weird thing, and companies did not realise they needed it. It should be treated as a proper discipline, where you study and learn and take it seriously.
The biggest mistake companies make is treating it as an ‘adjunct function’: “We know what we need to do, I’ve drawn it on the back of a napkin, we’ve got 6 months to launch this, can you get it done?” There are so many ways to get product management wrong in a company, and that’s probably the greatest one.
Q: How much consideration should the discovery function have for the size of the hypothetical projects?
A: If it’s a whole new area, you may discover huge opportunities. If it’s a fairly well understood area, you may be confident that a few different opportunities are present, and in different sizes. Part of the discovery process is not just understanding “is this going to move the needle I want to move, in the direction I want to move it, enough?” You’re also thinking about feasibility. If you’ve gauged interest for something that’s impossible to deliver, that’s not that helpful.
The discovery process itself is like a little roadmap, so ideas need to be weighted and prioritised. You have to pre-filter the ideas you’re testing. Everything that gets shunted down to execution is pre-vetted as a size that’s practical. When executing things, it’s rare that they turn out to be easier to complete than estimated. It’s almost always harder (due to unknown unknowns).
Q: How do you draw the line between adapting the business model, building a hyper-scalable feature which does the same thing for everyone, and building super-bespoke features for individual customers? How do you choose when to adapt the business model?
A: You have to explicitly think about the roadmap and where the business is headed. This article tries to explain how you try to learn more about something specific. Occasionally you’ll break out of the product box. Maybe a customer is asking for something which is technically quite feasible, but totally undermines or shifts the business model.
That doesn’t mean that the PM should say “it’s not too hard to do and there’s potentially decent money here, so let’s do it”. Strategic business concerns should get involved, and it should be escalated. They may agree: “This is interesting as an isolated opportunity, but we aren’t sure it fits with our overall objectives”.
Q: Thinking like this means that every piece of customer feedback always needs to be weighed up alongside the company’s objectives. Doesn’t customer input clash with that regularly?
A: This article probably overstates how frequently this happens. It does happen though. A good tactic is to ensure that you’re just embedding company culture, mission and values implicitly in internal decision making processes. E.g. for Genie AI - Does this feature get us closer to open source law?
Q: It takes us a quarter to build a feature, so if the OKR is to build feature X to improve DAU by 10%, we can’t measure it until the next quarter. How could we remedy that?
A: Building feature X is the means to the end. If the goal is to increase DAUs by 10%, the proximate goal is figuring out how to do that. When you set your OKRs, you’ve implicitly said that building feature X will improve DAUs by 10%. If you’re highly confident in that - go ahead and do it.
Prior to that, though - how did you decide on feature X? Only in very rare cases would you skip the discovery process that identified feature X. This doesn’t mean you need a culture of unproductive scepticism, but you do need to be sure that feature X is the right thing to build. I would suggest that Q1 (or time unit 1) is for asking “are we sure this is the right way to move this metric, or are there easier, faster, cheaper ways?” In Q2 (or time unit 2), you can then build it and measure.