I was working on a project for the Air Force of a certain European country to modernize a staff-management reporting system. When I sought access to the intended users, I was repeatedly stymied. Eventually, someone took me aside to explain the reason.
The existing process was mind-bogglingly inefficient for a good reason: it provided jobs for lots of people doing their mandatory national service. These people had lucked into an office role, avoiding the typical experience of marching in the mud while NCOs (non-commissioned officers) yelled at them. Their primary goal was to complete their two-year commitment without jeopardizing their cushy office assignment.
Whether to Discard Data
Regrettably, user researchers sometimes face an agonizing decision about whether to exclude the data from a particular research participant. The following stories illustrate two schools of thought.
One guy started a usability session by picking up the mouse, holding it about 12 inches from his face, and squinting at it. It turned out that he had just come from an eye exam, and his pupils were still fully dilated from the eye drops. So I gave him his participant fee and told him that was all we needed for the day.
I once had a participant who had supposedly passed a screener intended to select people with a high level of computer literacy. He turned out to be a truck driver who had never used a desktop computer! He didn’t know how to use a mouse! What to do? Dismiss the participant or try to salvage the session? I’m a fan of salvaging whenever possible, so I ran the session. I’m glad I did because he gave us valuable feedback.
Surprise endings are not just for thrillers. Sometimes user-research sessions have them, too.
Slick or Usable?
A company once hired me to help choose the best software tool to buy for their employees. Working with some employees, we evaluated each candidate application and found that Product A was the most useful and usable—and Product C was the least. When I asked the employee evaluators which product they recommended, I was flabbergasted when I heard their recommendation. They recommended Product C—the least usable and useful—because, they explained, it had the slickest-looking user interface.
Who’s the Expert?
Around the year 2000, I conducted usability testing to evaluate a tool that gave advice to IT administrators. The tool asked questions about their system configuration and offered advice on how to obtain better performance. Throughout the three-hour session, the participant was uniformly positive in his feedback. This was a little surprising because, at that time, people did not generally believe that they could rely on computers to provide sound advice.
While chatting with the participant after the session, I said, “You seemed very positive, so I guess you’re looking forward to using this tool in your job.”
To my surprise, he replied: “Nah, I’m an expert who can do this task better than a computer can. Also, as an IT admin, I know computers are just dumb calculating machines, and I wouldn’t take advice from a calculator.”
This chance discussion cast all of the prior positive feedback in a very different light.
Fun fact—This tool was actually pretty smart. The people who developed it had earned PhDs and used it to set IT performance benchmark world records.
If you drew some profound lessons from these user stories, that’s great. If you had a good laugh, even better. I know there are many more great stories out there, so please share yours in the comments.
Acknowledgment—Several people contributed the stories in this article. Thanks to Bill Killam, Ilona Posner, Carl Zetie, Matt Belge, and others who wish to remain out of the limelight.