There are several metrics for measuring UX effectiveness; which ones are relevant typically depends on your objectives and research methods, as well as on the solutions your research inspires and supports.
Adobe provides a well-rounded “usability framework,” which is “a tool centered around a user who will interact with a particular product or a service in a specific context. It measures the usability of a product or service by capturing and creating benchmarks of success for effectiveness, efficiency, and satisfaction.”
Basically, is the intended goal accomplished, how easy was it to achieve the goal, and was the user happy with their experience? Let’s take a look at some common UX metrics that help determine the answers to these questions.
Examples of common UX metrics
The UX metrics listed below are just a handful of the ways you can measure UX effectiveness. We prepared several study examples, which include the types of reports and metrics generated with UXtweak’s tools.
The metrics for something like a card sort are likely different from the metrics you measure for a usability test, but there will also be similarities. For example, you may measure time on task for both types of studies, but only the usability test would require you to measure the task success rate (because there is no task success rate with a card sort).
With that in mind, we highly recommend browsing the demos and their sample reports to get a better idea of the different types of UX metrics used for various research methods.
Here are a few common metrics used to measure UX effectiveness:
Success rate
According to David Travis and Philip Hodgson, authors of Think Like a UX Researcher, “Success rate is the most important usability measure with any website.”
In a usability study, you ask your participants to complete tasks. The tasks you create have a specific goal, a place on your website or mobile app where you want your user to end up. Your success rate is simply how many of your participants successfully completed the tasks versus those who didn’t. So if six out of 10 participants were successful, then your success rate is 60%.
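The calculation above can be sketched in a few lines of Python. The participant results here are hypothetical, matching the six-out-of-ten example:

```python
def success_rate(results):
    """Share of participants who completed the task, as a percentage."""
    return 100 * sum(results) / len(results)

# Hypothetical task outcomes: True = participant completed the task.
results = [True, True, False, True, False, True, False, True, False, True]
print(success_rate(results))  # 6 successes out of 10 -> 60.0
```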
A study by Jeff Sauro from 2011 found that 78% is an average success rate, so that’s a good place to start. If you’re below 78%, there’s a lot of room for improvement.
Here’s an example of the success rate statistics on our demo usability testing study. In this task participants were asked to choose and buy running shoes from an online store:
View the full sample results report.
Error rate
Each task you create for a usability study has an ideal path from start to finish, or at least a path that your design team believes to be the best path. During your testing, your participants will make errors, like clicking the wrong button, not noticing a call to action, or anything else outside your intended path.
To determine your overall error rate, divide the total number of errors by the total number of task attempts. This calculation tracks all errors across your study.
If you are instead tracking a specific error, divide the number of times that error occurred by the total number of opportunities for it to occur. Either way, you want your error rate to be as low as possible.
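Both calculations are simple ratios. A minimal sketch, with hypothetical counts:

```python
def overall_error_rate(total_errors, total_attempts):
    # All errors observed across the study, divided by task attempts.
    return total_errors / total_attempts

def specific_error_rate(occurrences, opportunities):
    # Rate for one particular error, e.g. clicking the wrong button.
    return occurrences / opportunities

# Hypothetical numbers: 12 errors over 40 task attempts overall,
# and one specific error that occurred 3 times in 40 opportunities.
print(overall_error_rate(12, 40))   # 0.3
print(specific_error_rate(3, 40))   # 0.075
```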
The biggest benefit of online usability testing tools is that they calculate all this information automatically, allowing you to save some time on the analysis and focus on the results.
Here’s the example from the same demo study. These statistics are automatically generated in UXtweak for each completed study. Here you can see both the success and error rate for all tasks of the study. They are color coded for easy comprehension.
Time on task
Tracking the amount of time your participants spend on a task in a usability test is a good way of determining potential areas of confusion. If there’s a page on your website that the majority of your participants spent longer than necessary on, then perhaps there’s a reason for it. Did they make any errors while on the page? Did they have any questions or voice any concerns?
For example, if one of your tasks is to log into an app, this task should take a few seconds. But you notice some of your participants taking too long on the page. Are they having trouble with the text fields? Are the login instructions unclear? Is the login button obvious enough?
There are several reasons why a task may take longer than expected and these might be exactly the usability flaws that need to be eliminated. The general goal is to have tasks completed quickly and successfully.
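When summarizing time on task, it can help to look at the median alongside the mean, since one struggling participant can skew the average. A short sketch with hypothetical timings for the login example:

```python
from statistics import mean, median

# Hypothetical completion times (in seconds) for a login task.
times = [8, 9, 11, 10, 95, 12, 9]

print(mean(times))    # 22.0 -- skewed upward by the one struggling participant
print(median(times))  # 10 -- more robust to outliers
```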
Among the other metrics inside the UXtweak app, each task has time on task statistics that look like this:
System Usability Scale
John Brooke created the System Usability Scale (SUS) in 1986. It’s a questionnaire that you have your participants complete at the end of the usability test, right after they complete/attempt all the tasks.
The SUS consists of 10 questions that help determine a user’s opinion about the usability of a website or mobile app (SUS applies to other types of interfaces outside of websites and apps, but is most commonly used for these digital products in UX).
The average SUS score is 68. Any lower score implies there is a lot of work to do and likely several design changes that will improve the overall UX. The idea is that you should constantly aim to improve your SUS score.
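The standard SUS scoring procedure converts the ten 1–5 responses into a 0–100 score: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is multiplied by 2.5. A minimal sketch, with hypothetical responses:

```python
def sus_score(responses):
    """SUS score (0-100) from ten responses on a 1-5 scale.

    Standard SUS scoring: odd-numbered items score (response - 1),
    even-numbered items score (5 - response); the sum is scaled by 2.5.
    """
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# Hypothetical responses from one participant to the ten SUS questions.
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # 80.0
```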
Adoption rate
The adoption rate is the quantity of new users divided by your total users for a specific timeframe. For example, if you have 10 new users in a month and 100 total users, then your adoption rate for the month is 10%.
How does this relate to UX effectiveness?
Ideally, if you improve the UX, your adoption rate should improve, which leads to your total user count increasing. In theory, more users equals more revenue (or a higher valuation). But, it’s important to note that factors outside of your UX efforts impact adoption rate. For example, marketing campaigns have the power to significantly impact an adoption rate.
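The adoption rate calculation described above, using the hypothetical numbers from the example:

```python
def adoption_rate(new_users, total_users):
    # New users as a percentage of total users for the timeframe.
    return 100 * new_users / total_users

print(adoption_rate(10, 100))  # 10.0
```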
Retention rate
The retention rate is the percentage of users who continue using your website or app over a specific timeframe. To calculate the retention rate, you divide the number of users at the end of the timeframe by the number of users at the beginning of the timeframe.
Like an adoption rate, many factors influence a retention rate, but the retention rate “usually correlates with the quality of the UX,” according to UXcam.
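The retention calculation described above, as a quick sketch with hypothetical user counts (note that this is the simple end-over-start ratio described here; some teams exclude users acquired during the period):

```python
def retention_rate(users_at_end, users_at_start):
    # Percentage of the starting user base still active at the end.
    return 100 * users_at_end / users_at_start

print(retention_rate(85, 100))  # 85.0
```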
Why you should measure UX effectiveness
If you don’t use these metrics to measure UX effectiveness, then you won’t know if your UX efforts are worthwhile. For example, you have to know what your success rate is for a usability test in order to determine whether or not the changes you make as a result of that test actually improve the UX.
Comparing metrics is also a way to communicate your findings, goals, and other results with stakeholders. UX research and design cost money, so stakeholders want to know that the product or service is improving as a result of the UX efforts.
These metrics also help you and your team brainstorm solutions and dictate where you need to spend more time and resources. For example, if users spent too much time on the login page or even made some errors during the task, this should be an area of focus for the design phase. You communicate this by presenting your metrics, and then when the time comes to test again, you measure and compare to your previous metrics.
The importance of iteration in UX
This brings us to the importance of iteration in UX. There’s no way of determining UX effectiveness without going through the cycle repeatedly and retesting. If your success rate was 70%, that’s good to know, but you should be aiming to improve it.
According to Travis and Hodgson, “Most usability professionals would bet their house that they could make at least that (5%) improvement.” So in other words, aiming for a 5% improvement is the least you can do.
Iteration is important so that you can compare metrics and thereby determine your UX effectiveness, and there’s also the fact that you’ll likely never solve 100% of the problems your users face with just one or two design cycles. According to Nielsen Norman Group, conducting a usability study with five participants “lets you find almost as many usability problems as you’d find using many more test participants.”
This statistic is to help you avoid diminishing returns (spending too much time and money running tests to the point where you don’t learn or identify enough new issues). NNG says it’s better to run three rounds of usability tests with five participants rather than one big study with 15 participants. With the former, you are identifying the majority of issues and actively making changes.
All that said, five isn’t the magic number. Some research methods require more participants; it all depends on your objectives, the problem space, and the types of users. The point is that there is always room for improvement, so keep testing and making changes based on the measured metrics.
Learn more about how many participants you need for a UX research study.
Constantly improve your UX metrics
If you are constantly seeing improvement in your metrics then your UX effectiveness is high, and that’s exactly what you want. You want your success rate increasing and your time on task decreasing. Identify where your users are making errors, improve your SUS scores, and help grow your user base as much as possible.
Some stakeholders will ask you about their return on investment (ROI) when it comes to UX efforts, and they typically want to see hard numbers. So even if you’re conducting qualitative research, there’s always a way to communicate the UX effectiveness in a way that quantifies the improvements.
Register for your free account at UXtweak and let us help you make research easy!