Gut feeling is often not enough
RICE (Reach, Impact, Confidence, Effort) is one of the most popular prioritisation frameworks.
On the surface, it seems simple.
But when you dig deeper, it’s not so straightforward. While reach and effort are relatively easy to calculate, how do you assess impact and confidence? There’s no clear, mathematical formula.
Depending on your approach to assessing impact and confidence, RICE can be either the most practical or a downright dangerous way to prioritise solutions.
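For reference, here’s the standard RICE calculation as a short Python sketch (the function is my own illustration; teams vary in how they scale each factor):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    # Standard RICE formula: reach, impact, and confidence multiply the score up;
    # effort divides it down.
    return reach * impact * confidence / effort
```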
In this article, I’d like to tackle assessing confidence. I spent a few weeks researching the subject and experimenting with my approach to step up my prioritisation game.
Here’s what I discovered.
Given how hard it is to estimate confidence empirically, many PMs resort to some sort of gut feeling.
Uninformed gut feeling
Probably the most dangerous way to estimate confidence. PMs assess their confidence in an initiative based on their beliefs, past experiences, and often wishful thinking.
It’s where the “I just know it will be a great idea!” and “that sounds cool, but I just don’t think it’ll work” types of assessment come into play.
Don’t do that.
Informed gut feeling
A more empirical way to assess confidence is by judging how much data you have to support the concept.
Did someone propose an idea without any data to back it up? Then it’s low confidence. Do you have plenty of quantitative and qualitative data to back up the idea? Then it’s probably high confidence.
Using informed gut feelings to assess confidence is often a good enough approach, especially if you have to move fast and assess multiple ideas.
But if you want to truly step up your discovery game, you need a more robust framework.
Estimating confidence based on assumptions
Each solution or idea is based on a set of assumptions. Identifying assumptions is the first step toward building confidence.
To quickly identify basic assumptions, ask yourself two questions:
- What led you to even think about this solution?
- What must be true for this idea to be successful?
In most cases, you won’t be able to identify and list all underlying assumptions. There’ll always be some hidden assumptions you are missing at the moment.
But that’s okay. The goal isn’t to be precise. The purpose of estimating confidence is to provide solid decision-making support without overwhelming the team with analysis.
Once you have your assumptions listed out, assess their importance. How critical is a given assumption for the solution to be successful? The exact scoring is up to you. I usually use the following numbers:
- 1 = the assumption has a minor impact on the solution’s success
- 3 = the assumption has a decent impact on the solution’s success
- 5 = the assumption has a big impact on the solution’s success
- 8 or 13 = the assumption is imperative for the solution’s success
Next, assess on a scale of 0–1 how confident you are that each assumption is true. Here I use the following scoring:
- 0 = there’s nothing to support the assumption
- 0.25 = the assumption is based on experience or limited qualitative data
- 0.5 = there’s some empirical evidence to support the assumption
- 0.75 = there’s strong evidence to support the assumption
- 1 = there’s strong and statistically significant evidence to support the assumption
By multiplying an assumption’s importance by your confidence in it, you get an assumption score.
The solution confidence is the sum of assumption scores divided by the total importance:
solution confidence = sum(importance × confidence) ÷ sum(importance)
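For illustration, here’s a hypothetical set of three assumptions consistent with the numbers below (the values aren’t from any real solution):
- Assumption 1: importance 5, confidence 1 → score 5
- Assumption 2: importance 3, confidence 0.25 → score 0.75
- Assumption 3: importance 3, confidence 0.25 → score 0.75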
In this example, the assumption scores add up to 6.5 against a total importance of 11, which gives a solution confidence of 6.5 ÷ 11 ≈ 59%.
So, the confidence that solution X will be successful is 59%.
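If you keep your assumptions in a script or spreadsheet, the whole calculation fits in a few lines. Here’s a minimal Python sketch using the hypothetical example above (names and values are mine, for illustration):

```python
# Each assumption is an (importance, confidence) pair: importance on the
# 1/3/5/8/13 scale, confidence on the 0-1 scale described above.
def solution_confidence(assumptions: list[tuple[int, float]]) -> float:
    """Weighted average of per-assumption confidence, weighted by importance."""
    total_importance = sum(importance for importance, _ in assumptions)
    total_score = sum(importance * confidence for importance, confidence in assumptions)
    return total_score / total_importance

# Hypothetical example: total score 6.5, total importance 11.
assumptions = [(5, 1.0), (3, 0.25), (3, 0.25)]
print(f"{solution_confidence(assumptions):.0%}")  # -> 59%
```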
59% confidence doesn’t mean there’s a 59% chance of achieving the desired outcome.
And 100% doesn’t mean the idea will be an unquestionable success.
There are two reasons why confidence scores can’t be taken at face value.
- There are most probably hidden assumptions you haven’t mapped.
- Neither your importance scores nor your confidence scores are scientific or precise.
However, you can use confidence as a relative comparison tool. If you have three potential solutions
- solution A with 40% confidence
- solution B with 70% confidence
- solution C with 80% confidence
then although you can’t say solution C has an 80% chance of success, you can say it’s roughly twice as likely to work as solution A.
Treat it like story points. The goal isn’t to provide an accurate estimation but to be able to compare various ideas.
Although we might be twice as confident in solution C as in solution A, that doesn’t automatically mean we should prioritise it.
What if solution A has a higher reach and impact and lower effort?
In such a case, it might be worth focusing on testing relevant assumptions for solution A to boost confidence.
If the confidence grows (you validated some assumptions), then solution A might turn out to be the most attractive one.
If the confidence falls (you discovered your core assumptions were wrong), then perhaps it’s time to ditch the solution altogether.
The problem with testing low-confidence solutions
There’s one additional problem with low-confidence solutions. It’s hard to isolate learnings from testing them.
A solution is usually based on multiple assumptions, and low confidence means those assumptions haven’t been validated. If the solution doesn’t work, how will you know which assumption turned out to be wrong?
When trying out new ideas, make sure you either test assumptions first or, in the case of low-confidence solutions, design a test in a way that allows you to test assumptions somewhat separately.
Otherwise, you might end up with test results that don’t indicate which assumption needs revisiting.
Testing solutions is easy. Learning from tests is harder.
Wrap up
Confidence is a critical factor in product development. Although an informed gut feeling is often good enough, if you want to bring your discovery to the next level, you need a more robust approach.
Since the success of an idea depends on the correctness of the assumptions behind it, basing solution confidence on your confidence in those underlying assumptions is the most natural approach.
List your assumptions, then assess each one’s importance and your confidence in it to get an assumption score. Dividing the sum of assumption scores by the total importance gives you the solution confidence.
Solution confidence isn’t a scientific number but a relative comparison tool. Use it to compare and prioritise solutions.
Be careful when testing low-confidence solutions. If there are many untested assumptions, it might be hard to pinpoint which ones held up and which didn’t.
To maximise learning, test individual assumptions first and then test high-confidence ideas.