Is this read for you? TL;DR of these 2.4K words:
3 patterns that signal you need more Product Discovery: Solution-first thinking, Misuse of A/B testing, and Shipping & forgetting.
They can reinforce each other in a cycle. Problem understanding helps you do better.
AI is going to make this more urgent. Product Discovery is going to prevent you from working very efficiently on very irrelevant problems today, and it will be the differentiator tomorrow. The time to start is now.
Here’s a bad week:
Hearing my non-tech-savvy friends say “I JUST WANT IT TO DO X” as they struggle to work their way through increasingly bloated websites and apps.
Two sprints worth of story points spent on a feature that ends up being ignored by users, but celebrated anyway because it was delivered on time.
Long and draining meetings refining solutions that excite nobody. Not even the people working on them.
Building digital products is a complex mess, and there are probably a hundred things at play that explain why things go the way they go.
I don’t pretend to know all of them, but I personally keep running into these destructive patterns which sometimes run in a cycle:
Solution-first thinking
The misuse of A/B testing
Shipping & forgetting
1. Solution-first thinking
Let’s say you’re the Product Manager of an e-commerce platform that sells concert tickets.
If you limit the things you can influence to just the changes on the platform itself, there are still a million things you can change that could potentially help you sell more concert tickets.
Join any meeting in any organisation and you’ll learn that the challenge is not coming up with these million things to change. There’s never a shortage of ideas. Never. People’s natural reflex is to jump straight to solutions.
It’s obvious when an org suffers from this, because they skip all the bits that have something to do with the user and their behaviour in their discussions, hypotheses and plans:
We see a drop in X, what can we do?
We expect that by building this, we will increase Y
The goal is to increase Z. Let’s brainstorm some ideas to help us achieve that
What’s missing in all of these is what making a change or building a feature will do to user behaviour, which in turn, might impact (financial) metrics.
Ideas generally only have an impact if they remove or fix something that’s blocking users, or if they add something nice that pushes users to complete their goal.
People don’t care about:
new CTA colours, unless they’re actively looking for them but can’t find them due to contrast issues.
a new search bar, unless searching is something they actually do and it helps them find what they like more effectively.
an AI-powered dashboard, unless summarised insights are something they need and it genuinely saves them time on that task.
You get the point. If you don’t add value for them, they’re not gonna care, which means no impact on your metrics.
Almost every idea is a means to an end if you think from the perspective of your users (which you should). The better you understand the end, the easier it is to find the right means.
That’s relevant because the challenge is not coming up with yet another idea, the challenge is filtering the right one from the endless stream of ideas. Discovery makes that easier for you.
Most ideas are solutions that try to solve a problem that’s non-existent for your users, or don’t even pretend to solve a problem at all.
2. The misuse of A/B testing
So how do you decide which of all the ideas to pursue when there is no clearly defined problem to measure the idea’s effectiveness as a solution against? Just test it.
This is either played as a diplomatic solution when teams can’t agree on which idea to pursue, or as a way out when none of the proposed ideas seems the best. The problem in both scenarios is the same: there is no notion of the underlying opportunity or problem.
This shifts the game from selecting the solution that best fits the problem, to selecting the idea that seems the best, where “best for what?” is unclear.
More efficient experimentation and delivery processes enabled us to just go and test all these ideas, and we proudly call it a culture of experimentation while we’re at it, with A/B test velocity as the indicator of how well we’re doing.
This is no problem if your organisation is on the good side of The Experimentation Gap, where you have all the luxuries that scale brings: traffic, sound statistics, plenty of smart people, your own testing platform, experimentation neatly embedded in the build process, and so on.
You’re probably like me though: stuck on the other side of the gap, without the luxury of A/B testing bug fixes. Experimentation is expensive, you always wish you had more traffic, things break, statistics are shaky, and it can take weeks if not months to complete the idea-to-insight cycle.
If we’re in the same boat, you can’t excuse yourself from doing upfront discovery work and say “just test it” to see if a random idea works. You’ll be too slow and burn through too many resources.
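To make that cost concrete, here’s a rough back-of-envelope sample-size calculation for a two-variant A/B test, using the standard two-proportion formula. The 3% baseline conversion rate and 10% relative uplift are made-up numbers for illustration, not from any particular product:

```python
from math import sqrt, ceil

def sample_size_per_variant(baseline, relative_uplift, z_alpha=1.96, z_beta=0.8416):
    """Rough number of visitors needed per variant to detect a relative
    uplift over a baseline conversion rate (95% confidence, 80% power)."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

# Detecting a 10% relative uplift on a 3% conversion rate:
n = sample_size_per_variant(0.03, 0.10)
print(n)  # roughly 53,000 visitors per variant, so over 100,000 in total
```

At a few thousand visitors a day, a single test like this ties up weeks of traffic, which is exactly why “just test everything” doesn’t scale on the wrong side of the gap.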
The other problem is that you get away with not knowing the problem behind the solution. You’ll find what works, and that’s all that matters when A/B testing is misused like this. There’s no need to understand why something did or did not work if you’re only interested in seeing uplifts and pumping through the A/B test backlog as fast as you can. It perpetuates the cycle.
Don’t stop A/B testing though. It’s an amazing capability and can bring a ton of value if balanced and aligned with Discovery.
3. Ship & forget
So we’ve got a stream of solution-first ideas, of which some are just tested to see if they’ll stick.
That’s nice, but once shipped, I’ve found that more often than not hardly any time is spent checking whether these features actually solved anything. That makes total sense, because that was never the intention to begin with. Without problem understanding, there’s no sense of what needed solving.
The success criterion was either shipped on time, or, when it was an A/B test, shipped on time with the statistical seal of approval. There is no real learning, because there never was a clearly defined problem to measure the solution’s effectiveness against.
You’ll know whether the solution moved your outcome, but without knowing why, you’re missing out on a lot of useful feedback. Feedback that could’ve helped you replicate the win or avoid the loss next time.
Without that opportunity to tie it back to, solutions just become a list to work through as quickly as possible. This brings us back to the first pattern, solution-first thinking. Coming up with more ideas is incentivised, because the system rewards pushing through them fast.
A symptom of this ship & forget issue is when A/B testing becomes a planning nuisance. You’re required to submit your topics for the next quarter while you’re still waiting for the results that shape your direction to come in. There’s such a strong emphasis on delivery that learning becomes an inconvenience.
I’m not saying to slow down; it’s nice to get the work done fast. It becomes a problem when getting the work done fast becomes a goal on its own and starts hurting your effectiveness.
The complete cycle, and breaking it by doing better
Without problem understanding, you’ll get stuck in solution-first thinking. It’s hard to select the best idea when “best for what?” is unclear.
You don’t have to decide though: just test everything. Unfortunately this is a luxury most orgs can’t really afford: it’s too slow and too expensive. But the real issue is that you get away without doing the discovery work.
By design, this makes the solutions a list to work through ASAP. There’s no time, need or incentive to check whether the thing you shipped solved anything. Done and on to the next. The cycle resets and requires more ideas.
Doing better with Discovery
Breaking through would imply that this multi-faceted problem can be solved entirely, which, honestly, I haven’t seen done yet. It’s complex, context-dependent, and sometimes really just not that big of a deal. It’s OK to just pump out features for a quarter sometimes. That’s just the messy reality in my experience.
Doing better is a healthier and more realistic ask. You’ll find that the cycle runs much smoother with even the little bit of problem understanding that Discovery brings:
It becomes a whole lot easier to select the best idea, as ideas can now be ranked by how effective they appear to be at solving the identified problem, rather than against something ambiguous.
It not only increases the success rate of your experiments, it also makes it less of a big deal when an experiment doesn’t win. You’re now chipping away at a bigger problem, and testing becomes more about learning than about pumping up velocity.
When you have a problem to solve instead of features to implement, you can’t really get away with leaving it unsolved or half-explored and just proceeding to the next quarter.
Discovery doesn’t just find problems to solve; it also creates accountability for teams to solve them. Once you know the house is on fire, you can’t really leave the flames unextinguished.
Why AI is only going to make this more urgent
If I try to look past the hype and try to understand what AI brings us in practice today and might bring us tomorrow, I only see an increased relevance of having problem understanding.
Today: working very efficiently on very irrelevant problems
Today, AI is already making a lot of things more accessible and easier for me. The whole idea-to-prototype cycle can now be done solo and in a matter of hours, instead of requiring help from a designer on the team (if available) and taking weeks.
At the rate things are going, I see no reason why it wouldn’t continue to make my life easier, take on more and more tasks (autonomously), and do them a lot more efficiently than I could.
The emphasis here is on more efficient though, not more effective.
Source: Reddit
If you prompt nicely enough, most LLMs will rave about whatever you throw at them. The shit on a stick is a silly meme example of course, but what is going on here applies to some extent to all your other interactions with AI.
It’s a powerful accelerator; the horse and carriage turned into a rocket. You still need to tell your rocket where to go, though. You still need to provide proper input and judge for yourself what in its generated output is or isn’t right for you. If you don’t, you’re going to head in the wrong direction very fast.
You’re going to work very efficiently on very irrelevant problems. If you’re already stuck in the cycle with solution-first thinking, AI will only accelerate its downward spiral.
Discovery is going to give you the fuel you need to make better use of AI’s acceleration effects. Deep problem understanding is going to be much more beneficial than yet another prompting guide. Because what’s far more important than how you formulate your question is deciding what question to ask in the first place. Shit in = shit out, shit in faster = shit out faster.
Tomorrow: discovery as the differentiator
We’ll solve the shit on a stick example eventually, though. Probably sooner rather than later.
LLMs will get better or this will be solved on a product / workflow level. If you’re going to use an AI landing page generator workflow, that workflow won’t let you get away with generating and using a terrible landing page that isn’t fully tailored to your needs. Quality guardrails will likely be built in.
So then what?
Source: Maggie Appleton
Maggie Appleton puts it nicely in her The Expanding Dark Forest and Generative AI article: higher standards, higher floors & ceilings. She listed this in the context of online content, but I think the principle applies here as well.
AI will help those currently stuck at terrible create something mediocre. As AI becomes more widely adopted, more people will be able to create something mediocre: higher floors. As standards of what we think is a nice experience rise, it’ll be harder to create something exceptional: higher ceilings.
Eventually, getting it done (fast) won’t be a guaranteed way to win. It’ll be hard to out-execute your competitor if everyone has equal execution power through AI.
The thing that’s going to separate you from someone else, who also has Loveable or whatever future AI workflow that’s popular open in a separate tab, is how you’ll use it. Deep problem understanding and knowing what questions to ask (the input) will be the differentiator.
Tomorrow of course. Today we’re vibe coding shit on a stick.
Do the work
If you recognise anything from the cycle, it’s very likely you could benefit from doing more Product Discovery. The time to start is now.
With what AI is doing today and might do tomorrow, you’re either going to accelerate the cycle’s downward spiral, or you’ll break it, enable yourself to properly benefit from AI’s acceleration effects and build something exceptional.
The why is probably clear, if it wasn’t already. What might not be as clear is how. If you want to break the cycle, where do you start when you close this screen? What will you do?
There are no shortcuts to great things, and this is no exception. I don’t see any other way other than to do the work. You’ll have to learn and understand the ins and outs of Product Discovery.
That’s what I’ll help you with. First up: 5 concepts to understand 90% of the Product Discovery frameworks (coming soon).