Anyone in the UX field who’s worked for a few companies will recognize a type of moderated research that gives off a reek of inauthenticity. Tell me if this sounds familiar: one moderator and six users sit around a table in a converted meeting room. The moderator tells the users, each of whom has been prescheduled and screened through a recruiting agency, to go to a prototype website and pretend they’re looking for a 20 GB googlydooter, or whatever. The users go into their cubicles, where the prototype is brought up on six identical, factory-default computers. Some of the users finish in five minutes, some don’t finish at all, but everyone gets exactly fifteen minutes for the task. (The early finishers drum their fingers in boredom, waiting for the moderator to call time.) Finally, the moderator brings up a projection of the prototype and asks the users to voice their opinions, one at a time, keeping their responses brief to give everyone time to speak. The whole process lasts an hour or two, leaving everyone kind of tired. The participants are paid their incentives, and the moderator drives home, wiping bitter tears from his eyes as he pulls into his driveway.
How could that possibly have been useful? he thinks to himself. What has my life come to?
Why Bullshit Happens
Naturally, one might ask: if it’s so terrible, so ineffective, and such a waste of the time and life force of everyone involved, why does it keep happening? Three reasons:
- Accountability. In order to justify spending on research to the men upstairs, companies need to be “sure” that the research they’re doing is “working”. The easiest way to ensure this is by coming up with a bullet-point list of Questions, then making sure that every participant gives an Answer. You hand the answers to your bosses, they nod approvingly: you’ve done your job.
- It’s easy. Making a list of Questions, asking people what they think point-blank, collecting a list of Answers. It’s completely cut and dried; no room for error there. Of course, this process assumes that what the participants say is actually reliable, but isn’t it kind of absurd to assume that what someone says in a controlled environment, to a person with a clipboard, in front of six strangers, actually reflects what they really think and believe?
- It works… kinda. We won’t go out and say that no valuable findings come out of this process, but we’ll also wager that it creates more problems than it solves. When all of your findings are based on what people think they think, rather than on how they actually behave, they can send the entire project down a misleading path. On top of that, this kind of research isn’t likely to uncover the things you never even considered while putting your questions together: the kind of findings that inspire true innovation.
How to Stop That Shit
This, of course, is a topic that could fill many books (or blogs for that matter), but allow us to offer a few correctives to the worst offenses of user research as it’s commonly practiced:
- Native environments. Let’s just put it out there: focus groups are soulless. For the average person, there’s absolutely nothing natural or even pleasant about sitting in a meeting room, no matter how inoffensively decorated or well-deodorized. On top of that, it’s frustrating to sit while other people prattle on about some website when all you want to do is say your piece and collect your check. So whenever possible, stick to native environments: make the effort to speak with your users on their own terms and in their own technological ecosystem. (Naturally, this is most easily achieved remotely.)
- Get on your users’ time. Don’t force them to use your app or prototype when you want them to; try to schedule your research around the time when they would naturally use it. If it doesn’t exist yet, you’re screwed. If it does, you can use Ethnio to recruit participants live from your website.
- Talk to real people. Of course, the only users who are going to care about the tasks at all are the ones who would actually use your product. When you do your recruiting, you’ve got to screen your users to make sure you’re not just talking to someone who wants an incentive check. Recruiting companies happen to be terrible at this: does anyone actually know where their recruits come from? By far, the most reliable source of recruits is the people who actually use your product, and if possible, you should talk to them while they’re using it. (And if you’re wondering how to contact and screen these users, live, with a minimum of muss and fuss, why don’t you check out our completely free online recruiting tool? There’s also a rough sketch of what a live intercept might look like just after this list.)
- Ditch the script. A moderator script is useful as a checklist of things you know you need to find out about, but don’t be afraid of going off-script; again, you’ve got to let your participants talk about what they really care about with regard to your product. So your moderator’s got to be flexible: if the user wants to go into a different part of the website than the script calls for, let them. You might not hit every script target with every user, but you’ll discover much more than you expected you would. If you really want to understand how people use your product, you have to let them show you how they really use it, not just how you think they’ll be using it.
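To make the live-recruiting idea above a little more concrete, here’s a minimal sketch of what a homegrown intercept could look like. To be clear about the assumptions: this is not Ethnio’s actual embed code; the intercept rate, the `/research/screener` endpoint, and the screener questions are all invented for illustration, and a real tool would render a styled overlay instead of leaning on `confirm()` and `prompt()`.

```typescript
// Hypothetical live-intercept sketch. NOT Ethnio's actual embed code;
// the rate, endpoint, and questions below are made up for illustration.

const INTERCEPT_RATE = 0.02;                    // invite ~2% of visitors
const SCREENER_ENDPOINT = "/research/screener"; // hypothetical endpoint

interface ScreenerAnswers {
  usesProductWeekly: boolean; // screener: are they a real user?
  currentTask: string;        // what they're actually trying to do right now
}

function shouldIntercept(): boolean {
  // Only fire on people who are mid-task (not on the landing page), and
  // only for a small random slice, so the invitation stays rare.
  return window.location.pathname !== "/" && Math.random() < INTERCEPT_RATE;
}

async function runIntercept(): Promise<void> {
  if (!shouldIntercept()) return;

  const willing = window.confirm(
    "Got 15 minutes? We'd love to watch you use this site and hear what " +
    "you think. (Paid, of course.)"
  );
  if (!willing) return;

  // Screen while they're actually using the product, to weed out anyone
  // who's only here for the incentive check.
  const usesProductWeekly = window.confirm(
    "Do you use this site at least once a week?"
  );
  const currentTask =
    window.prompt("What are you trying to get done right now?") ?? "";
  if (!usesProductWeekly || currentTask.trim() === "") return; // didn't qualify

  const answers: ScreenerAnswers = { usesProductWeekly, currentTask };

  // Hand the qualified participant off to the research team.
  await fetch(SCREENER_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(answers),
  });
}

void runIntercept();
```

The ordering is the point of the sketch: the invitation only fires on visitors who are already mid-task on their own machine, and the screener runs before anyone gets scheduled, so you’re filtering for real users in their native environment, on their own time.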
Of course, some of this applies more to research on finished, commercially available products than to prototypes, and more to high-level user experience than to usability; that said, this is just one tiny fragment of a micron of a speck of what needs to happen to reverse the flood of B.S. research churned out every day in labs and meeting rooms across the nation. Check back on the B|P blog often for more ranting about the state of UX, or, if you’re mad as hell and not going to take it anymore, hit us up on the website.