Why we should measure research (even if no one asks us to).
It’s easy as a user researcher, designer, or PM to fall into the trap of simply always wanting to learn more about users. But what is the actual value of all our research?
At the end of a work week, when I can say, “I really impressed that team with those insights!” it sometimes feels like enough.
The true value of user research, however, is the influence it has on the work of others, the user’s experience, and the bottom line.
If we don’t track the ways in which user research has impact, we risk not knowing whether we had impact at all.
If research doesn’t have impact today, isn’t it better to know — and fix it? If some research does have impact today, surely we want to replicate that success.
When information gathered in research has an influence on a person, product, strategy, or even the full team, we can say we had impact.
For example, tracking user research impact should show how research study X led to design change Y, which positively impacted metric Z.
However, research can also have an impact beyond business metrics and design changes. That’s where it gets tricky. We need a way to track and measure these other effects as well.
How might we show that specific research work helped our stakeholders do their jobs better?
How might we track the impact of exploratory studies that may only be acted on a year from now?
I’ve worked with a framework partly based on that of Victoria Sosik, Director of UX Research at Verizon.
In a talk at UXRConf, Sosik laid out her framework, which has three parts:
- The research activity that drove the impact
- The impact or the recordable instance of influence
- The scale of the impact
Sosik and team track eight types of impact. But eight different impact forms can feel like a ton to keep in mind when a team is completely new to this.
Tao Dong at Google uses an even simpler framework that focuses on defining which of three levels of impact the research had.
I’ve tested out a few methods and framework tweaks over the years. As a starting point with teams who want to track their user research impact for the first time, I try to keep things minimal but also don’t want to cut out essential parts.
In my experience, there are four forms of impact (that match some of Sosik’s) that have the greatest overall value.
I chose these four given that most companies want to see research impact on the bottom line and team processes, if nothing else.
The four essential impact types for starting to track user research impact:
- Influence on product decisions (includes design changes and strategy updates)
- Research shared in communications (mentions of research which can also increase stakeholder exposure)
- Requests for collaboration (can improve research visibility and strengthen the rigor of insights work, as collaborations between data, CX, and research teams are increasingly prioritized)
- Development of UXR infrastructure (not simply for spreading research team’s dogma, but for speeding up work and decisions across the organization)
Teams I work with have little to no extra time for implementing new, complex processes. Other researchers may choose to prioritize the visibility and funding of user research work as key forms of impact. I've intentionally excluded these, as I believe they often happen naturally as a result of focusing on these four impact types.
1. Decide where impact tracking lives
My preferred location is in a spreadsheet so that all impacts recorded are saved in the same place. Like a lot of user researchers these days, I’m a fan of using tables in Notion for this. Google Sheets, Airtable, or any other spreadsheet platform will work.
When all impacts are logged in a spreadsheet, we can skim the whole and quickly get a sense of whether we are mostly having one specific type of impact, or impacting one team, but not others. We get a great overview to analyze and act on.
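To make that skim concrete: once the log lives in a spreadsheet (or is exported from one), even a tiny script can surface where impact is clustering. This is a minimal sketch, assuming hypothetical column names of my own; it is not part of any tracker template.

```python
from collections import Counter

# Hypothetical rows exported from an impact tracker.
# The column names ("impact_type", "team") are illustrative only.
impact_log = [
    {"impact_type": "Influence on product decisions", "team": "Growth"},
    {"impact_type": "Requests for collaboration", "team": "Growth"},
    {"impact_type": "Influence on product decisions", "team": "Signup"},
]

# Count how often each impact type and each team appears in the log.
by_type = Counter(row["impact_type"] for row in impact_log)
by_team = Counter(row["team"] for row in impact_log)

print(by_type.most_common(1))  # the impact type we record most often
print(by_team)                 # which teams are (or aren't) feeling impact
```

A skewed `by_team` count is exactly the "impacting one team, but not others" signal described above.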
2. Record the impact
Describe the impact: what happened, where it happened, and whether there are any metrics to show off. A few examples from my work are saved in my Notion tracker.
When we can identify exactly what our impact was, we make it possible to replicate the success for more future impact. If we have a hard time identifying impact, we can also evaluate whether similar research is worth doing in the future, or whether we might need to shift our methods.
3. Note the research source and type
Document the source of the impact: the name of the research activity completed, and which type of activity it was. This could be delivering a study summary in a workshop or a meeting, or sharing a single insight with the team in a Slack channel.
Ex: “Presented Target Audience insights summary in Workshop I with full project team.”
Categorize the research type using one of the following labels: evaluative research, generative/exploratory research, or iterative research.
Documenting which research study, presentation, single shared insight, or workshop had an effect can help us replicate successful work in the future and even see whether the format itself makes a difference.
4. Determine the type of impact
This is where I’ll choose from my short-list of the four prioritized impact types:
- Influence on product decisions
- Research shared in communications
- Requests for collaboration
- Development of UXR infrastructure
Noting which forms of influence we've had helps us check in and see if we're having the influence we intend to have. If we see that most influence shows up as requests for collaboration but not in product decisions, we might need to shift how we work in the coming quarter toward the impact we value most.
5. Add an impact metric or some form of “proof”
I keep a column in my user research impact tracker where I can add a metric if there is one. It’s important to choose a metric that could help us decide which of our work is most valuable, worth continuing or eliminating.
Carrie Boyd on UserInterviews.com recommends using stakeholder interviews as one way to determine which metrics are most valuable for measuring research work.
Sometimes metrics aren't immediately obvious. When a stakeholder told me that one insight changed her decision to prioritize a new feature, there was no clear metric there. But I decided to use a Likert scale to see how she would rate her experience: "On a scale of 1–7, how quickly were you able to make that decision after hearing that insight?" (where 1 was "not at all quickly" and 7 was "very quickly").
My stakeholder rated her decision-making speed a 6/7 from that one insight. I continued to use this rating with other stakeholders and studies to track how certain types of research impacted decision-making speed.
In many cases, for lack of something more concrete, I use a quote from a stakeholder that I can refer to later to check whether a project had value.
6. Note the experience area or team area where the impact occurred
Record which team or product area received the impact. This could be a product flow, such as “Signup,” or the Growth team.
Noting specifically which team, squad, PM, or feature lead felt the impact enables us to follow up at a later date and check whether further impact was felt beyond the single recorded incident.
We also see over time whether we’re having impact across the organization, or getting siloed in one particular area.
7. Note the impact’s scale
We want to document where, and on which level, the impact was felt. When a team is first starting to track impact, I believe the most valuable scales to record are internal and team-related, such as the squad and team levels.
As with Step 6, tracking the level at which impact is observed helps the research team ensure that impact is valuable and widespread, not limited to a specific team repeatedly receiving the benefits. Seeing that we only have impact on a squad or team level may mean we should prioritize insights work for management strategy in the future in order to provide even more value.
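Pulling Steps 2 through 7 together, one row in the tracker might look like the following sketch. The field names are my own shorthand for the columns described above, not part of any official template.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImpactRecord:
    # Step 2: what happened, in plain language
    description: str
    # Step 3: the research activity and its type
    source: str
    research_type: str  # "evaluative", "generative/exploratory", or "iterative"
    # Step 4: one of the four prioritized impact types
    impact_type: str
    # Step 5: a metric, rating, or stakeholder quote as "proof", if any
    proof: Optional[str] = None
    # Step 6: the team or experience area that felt the impact
    area: str = ""
    # Step 7: the scale at which the impact was observed
    scale: str = ""

# An illustrative row, loosely based on the examples in this article.
record = ImpactRecord(
    description="Insight changed prioritization of a new feature",
    source="Presented Target Audience insights summary in Workshop I",
    research_type="generative/exploratory",
    impact_type="Influence on product decisions",
    proof="Stakeholder rated decision-making speed 6/7",
    area="Growth",
    scale="squad",
)
print(record.impact_type)
```

In practice this lives as spreadsheet columns rather than code; the structure is the point, not the tooling.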
We often prioritize business metrics, like conversion, when we think of the impact insights work should have. However, the impact that insights can have on stakeholders’ decision-making speed, confidence in decisions, work processes, collaboration, and shared company-wide understanding of users can be just as valuable.
This framework is a starting point for teams eager to start tracking their own value, and make changes to increase it. With this framework as a starting point, we can soon highlight where research has the most impact, start spreading impact farther, replicate successful processes, prioritize the right research, and increase internal trust in the research team.
Over time, any team should evaluate and iterate based on their unique needs and patterns to find the framework that works best for them. You might find that you need more impact types, want to prioritize specific outcomes like increased funding for user research, or track new variables.
I don't often tell teams to start out tracking the dates when impact occurred against the dates when the source research was shared. However, as you progress, get used to tracking, and want to automate follow-up with teams, it can be helpful to measure how long different forms of research take to show impact.
Feel free to use this framework and my User Research Impact Tracker template in Notion. As with all things in insights work, it can be helpful to see this as an experiment to learn from and iterate as you go.