Remote is Better, pt. 1: Getting Clients’ Hands Dirty
In this inaugural entry in our B|P continuing series “Remote is Better”, we discuss how separating the moderator and the user eliminates the need for “two-way mirrors”–now you can get your clients into the driver’s seat with you (metaphorically). We show you how!
Lots of people think of remote research as a trade-off or a compromise–a cheap, quick alternative for when you can’t get users in the lab face-to-face. What often gets overlooked are the many, many qualitative benefits of testing remotely: if done properly, remote research can give you all kinds of data and insight that would be impossible to get otherwise. Of course, doing it properly means you need to know what you’re doing. Wouldn’t it be nice if there were people around with years of remote research experience, who were nice (or dumb) enough to give away all their best practices on their official blog?
Use the Right Tools
First things first: your clients need to be able to see and hear everything that you (the moderator) do. This can be as simple as having clients sit alongside the moderator and listen in on the phone conversation, passing notes to the moderator whenever they need to–but that’s kid’s stuff. There are lots of ways to use remote research tools to streamline this process. Our favorite, as we’ve mentioned before, is UserVue: it automatically records audio and video, and supports observer chat and video marking for later analysis. But there are tons of screensharing tools with basic built-in observer support: GoToMeeting, NetMeeting, and Adobe Connect. Audio can be shared using any two-line phone with a conference call feature–the moderator can talk to the participant on one line, and clients can listen in on the other. (Just remember to keep the clients’ line muted, so that participants don’t hear them.)
Why let clients in on the game?
So, why even bother? Why is it worth releasing a bunch of potential monkeys into your airtight research methodology? Let Tony break it down for you:
Transparency. Letting clients in on the action can get them more invested in the outcome and make the findings easier to understand and present. Instead of just seeing a report at the end of the process, participating clients can draw on their own memories of the sessions to put the findings into richer context. You can also win the confidence of those many, many stakeholders who are unfamiliar with user research and are still on the fence about the soundness and validity of the methods (always a big issue for us UX types). When everything that goes into the findings is clear, the value of the research becomes clear–but only, of course, insofar as you’re actually doing good research.
Hitting the right targets. Most research is based on goals and objectives laid out before any research happens. While this is perfect for targeted, specific types of studies (“Does this widget work?”), broader research projects (“How do users respond to our site?”) can benefit from a little flexibility. If a session gets bogged down in topics that aren’t all that important to your clients, then clients can tell the moderator to move on. Even more useful is the notion of what we at B|P call emerging topics: clients can point out new and unforeseen issues they’re interested in as they come up during the session, allowing for a dynamic scope of research. Of course, it’s up to the moderator’s best judgment to keep the scope manageable–clients may go a little overboard and point out things way outside the domain of the research, so a balance must be struck between what is useful to uncover and what the constraints of the research schedule can accommodate.
Expertise and assistance. Since practically any number of people can listen in on a research session, it’s possible to have clients from all kinds of backgrounds and departments participating–designers, engineers, customer service reps, even CEOs can all listen in, supplying any necessary assistance to the moderator. If the project involves technical jargon that takes specialist knowledge to understand, expert clients can help decode or explain obscure user comments. If what’s being tested is a prototype or beta model, the people involved in building it can give the moderator a hand with any malfunctions that arise, and see for themselves whether the prototype is working like it’s supposed to.
Note-taking and video annotation. Put ’em to work! Any notes clients care to take can provide great insight later into which aspects of the session clients are paying the most attention to. If the screensharing tool provides live video marking functionality (as UserVue does), clients can even stick notes right in the video file for the moderator to check out later.
Get ’Em Ready
The best way to make sure that clients don’t pull some crazy-monkey interference on your research is to make sure they know what to expect in advance.
To keep clients’ questions to the moderator to a minimum, give them a cheat sheet they can refer to, including clear instructions about how to set up screensharing tools, how to chat or mark videos in UserVue, how to properly contact the moderator with requests, and so on.
Clients should also know what constraints the moderator faces during the session. Since the moderator will be both taking notes and speaking to the user, s/he may not respond right away to clients’ questions. Style is another constraint: the moderator must keep the tasks natural and the language neutral, so s/he might not act on client requests exactly as they would expect. For example, a client may ask, “Ask the user what he thinks of the navigation bar,” but the moderator might want to get at the issue in a more natural and less leading way with a series of questions: “Where would you go to look for laptops?”, “How would you get back to the main page?” etc.
As mentioned above, clients may occasionally ask the moderator to address issues that lie way outside the scope of the research, or that would be too much to tackle within the time limit. Encourage your clients to tell you what’s important or interesting to them, but also ask them (nicely! Always nicely!) to stick generally to the research goals, and to defer to the moderator’s judgment in balancing emerging topics with the established research goals.
The Care and Feeding of Clients
If you prep them right, you won’t need to do a whole lot of management during the study–your clients can just chug along in the background, contacting you when necessary. But! Don’t assume that clients will follow the preparations to the letter–you’ve got to be prepared to actively manage clients during the study.
If any issue comes up, it’s usually clients making more requests than you can handle. It’s important to promptly acknowledge their requests to reassure them that you’re listening, but you don’t necessarily have to promise to deliver on every one. To avoid coming off as rushed or irritated, use polite, concise, and unambiguously friendly language: “Understood. I’ll see what I can do! :)” You should also ask for clarification if a request is unclear: “Not sure I quite understand, can you explain? Thanks!” Ultimately, if there are too many requests to handle, the moderator should acknowledge it with the clients when the session ends: “Sorry I couldn’t get to all of your requests–it was important to leave time to achieve all of our study goals. I’ll definitely try to handle as many of your requests as I can, so feel free to keep sending them along!” You want to make clients feel comfortable talking to you, but they also need to understand that you’re just a mortal. (This does not apply if you are Thor.)
There may also be the problem of clients who don’t seem to be paying attention to the session, or who don’t ask any questions at all. If you don’t hear a peep from clients for a long time (~10 minutes), it’s good practice to ask them if there’s anything they’re curious about or want to have you probe further on. If clients respond no, then the moderator can simply continue; if clients are still mum, then the moderator can follow up with a brief assessment of the session to encourage them to respond: “This user seems to have a lot of trouble logging in. I’ll try to find out more about that.”
Finally, since you’re ultimately the one responsible for gathering the feedback, avoid soliciting advice or overrelying on clients for what to tell a participant (e.g. “What should I ask the user now?”). Client requests are a tool for refining research objectives, not a playbook for the moderator. (The playbook, of course, consists of the documents based on the research objectives, like the moderator script.)
It can be good to hold a very short debriefing after each session, in order to get clients’ feedback on the moderating style and findings. When all testing is complete, clients may participate in brainstorming sessions, contributing any insights they’ve gathered as outside observers. This is also a good opportunity for moderators to identify, from clients’ interpretations, any biases clients may have about the research. For example, clients may take a user’s comment that “the login process was okay” to mean that there are no problems with the login process, whereas a trained moderator may have identified numerous behavioral issues that contradict that perception. Moderators can then be prepared to address this bias while drafting the report.
Finally, when the findings are presented, it’s nice to take time to acknowledge the clients’ contributions to the project, specifically calling out instances where they helped. This is valuable for showing clients how they themselves benefited the project, thus increasing their investment and buy-in into the research itself. Greater buy-in will increase the chances of future research projects, and the benefits of having those same active, interested, and research-experienced clients helping you out can only increase.
(Photo credit: “looli” @ flickr)
Tags: adobe connect, brainstorming, client management, client participation, emerging topics, goals and objectives, going off-script, gotomeeting, netmeeting, note-taking, presenting findings, prototype testing, Remote Research, screensharing, stakeholders, transparency, video annotation