Here’s part 2 of my conversation with LukeW. Please see Functioning Form for part 1.
Let me try to clarify by taking a step back. I see field research as a way to remove some of the tarnish that comes with more “traditional” market research like focus groups and surveys. The common perspective is that people in a focus group or survey won’t really tell you what they want or how they do things because oftentimes -they can’t. They are a level removed from the actual activity and as a result may leave out key details or considerations they use to make decisions.
The classic example, and I can’t recall where I first read it, is the washing machine manufacturer that polls thousands of potential customers and asks them “what features do you want in a washing machine?” The responses they get back are: “just the basics”; “I just want a simple setting for colors and whites”; “nothing too fancy”; etc. So the company makes a bunch of no-frills, feature-lite machines and they don’t sell, because when it actually comes time to buy a machine, the same people that said they want “simple above all else” fall prey to feature-sheen. “Oooh, but this one has more features…” I’m sure you’ve heard a similar tale or two.
So what we have here is people saying they do one thing, then going out and doing something totally different. Field research should ideally be there at the point of sale -in context- to enable the company to see what really happens.
Now let’s go back to my original question about digital context. In all the methods you described above -great list by the way!- we’re asking people to tell us what they’re doing rather than being there -in context- when they are doing it.
Maybe I’m picking nits here but I know there are lots of “hidden” subtleties within digital social systems that govern how people behave. There are contexts of when and where that alter behavior. As an example, during a home visit a buyer on eBay may tell you: “I leave positive feedback when I get an item in good condition.” Their actual behavior, however, differs. They may or may not leave feedback based on the type of seller (professional or amateur), how much feedback they have, how much feedback the seller has, the category they are buying from, their intentions for the item after they get it (resell, return), and so on.
I guess when I think of people that spend hours every day immersed in something like World of Warcraft I feel there’s more to their behavior and motivations in that digital space than they can explain in words. How can we be a fly on the wall within that digital context? Or is what I’m looking for already covered by the methods you outlined?
Any situation where you have someone telling you about their own behavior is going to include some amount of bias (and let’s pretend for the sake of discussion that our own bias isn’t an additional factor). In focus groups, those influences are hard to leverage (complex peer dynamics, sterile environment, closed-ended discussion), but in contextual research, we can try to take advantage of showing and telling, for instance. Having someone walk through their previous feedback log, and explain, is illustrative of patterns that person may not explicitly be telling us.
Q: Are you leaving feedback for the seller?
A: I leave positive feedback; it’s really important, I usually will look at the condition and decide based on that.
Q: [points to computer screen] Can you walk us through some examples of feedback?
A: Sure, umm, here’s one I left last week. The item was in pretty good condition, so, well, I only left 2 stars, because I didn’t get it right away. And this one here, I left 4 stars because the last time I bought from them it was great. Yeah.
Q: What’s that icon over there?
A: Oh, there’s a bunch of items awaiting feedback.
Q: How often does that happen?
A: Well, I’ve got about 35 in there, some of them let’s see oh yeah, some of them go back about 3 months, I guess.
Now, I’ve totally made that up to support my point so let’s not treat it as data, but as a likely scenario for a dialog in an interview. So much of the process involves triangulation – asking the question at different points in the interview, getting demonstrations as well as declarative statements as well as stories. When you come out of the session, you have to ask yourselves what you think that person’s approach to feedback is. And it lies somewhere in between, but it’s ultimately an interpretive answer.
We can measure people’s TV viewing habits, let’s say, with a Nielsen box, and we can ask them about their behavior. And history shows (like the washing machine example) a disconnect. People under-report their TV watching. It’s easy to think about why; TV is bad, it’s better to show yourself as someone who reads books and goes on hikes than it is as someone who watches a ton of TV. The insight comes out around the delta between the observed and the reported. Of course, not every gulf is an insight, or one that you can use.
And maybe to make it simpler, it’s easier to talk to people about what they did in the past, and why (i.e., leaving feedback) rather than their overall attitude. People make generalized statements, but the actual examples contain a lot more subtlety. This of course leads to one reason that companies like to reject any form of customer research – because people can’t talk about what they will do or what they will want. My short answer to that is that the researcher is the one doing the interpretation; in all of this we aren’t simply collecting responses verbatim – we are dynamically choosing different questions and making inferences in order to build our own model of how people will behave.
That said, I had an interesting conversation the other night with Zachary Jean Paradis, a student at the Institute of Design. He described how he had tried to do an ethnography of World of Warcraft (a MMORPG), where presumably a lot more of the behavior to be understood is taking place inside a virtual world. He felt like his traditional tools of ethnographic research didn’t hold up, and he was wishing for another month to better refine his methods. Gaming is an interesting example (and one where I don’t have a great deal of personal experience) of the online-behavior-studied-offline that we’re talking about. I’ve heard that some researchers will videotape the faces and body language of people while they are playing; I imagine you could play those back along with the matching gameplay and have people reflect on what they think was happening at the time. You can see that technique in Gimme Shelter, where Mick Jagger is watching the footage from Altamont while they interview him. A UK company called Everyday Lives does this sort of user research exclusively, preferring to passively observe, and then only interview when there’s a video record of the event to be discussed. I think it’s an interesting tool, but I think we need an ever-expanding palette of methods to deal with new situations as they emerge, rather than dogmatically rely on a single approach.
Am I still dodging your question? Or are we any closer?
Maybe I’m dodging your answer! One thing you said, though, really resonated with me: “getting demonstrations as well as declarative statements as well as stories.” Since we can’t actually be a fly on the wall within complex digital systems yet -and I say that because the tracking software and log analytics software I’ve used is still a ways off from being nuanced and effective enough to match what we can do in the real world- that’s how we need to understand context: through demonstrations, declarations, stories, and of course observation of what people are actually doing on screen. Personally, I do think as digital environments become even more immersive and complex, we’ll need additional methods.
That said, let’s jump into the other topic I wanted to bring up with you. Without getting into pure semantics, why do you think a lot of ethnographic or field research is being characterized as “design research”? Is it user experience design teams within large companies trying to own the research process/data? Is it an attempt to differentiate the type of customer insights a human-centric problem solving approach can uncover from the types of insights Marketing departments have traditionally owned -like customer segmentation? Or does this type of research intrinsically belong in the “design world”?
I agree that there’s more to behavior than can be explained in words. It’s up to us to look for the deeper meanings between the words – what is said, what isn’t said, and how it’s said. As far as terminology goes, I agree with your suggestion that the label can be an attempt to distinguish the methodology and/or the results from market research, and the departments that do market research. I’m so frustrated by the chaos around methodological labels. I’m sure within organizations they can create a locally-relevant nomenclature (they can; I’m not sure they do), but once you leave the boundaries of their company (through any industry discussion, conference, or online group) they end up sowing confusion. The vendors, of course, who move between companies are even more guilty. There’s a desire to differentiate from the other providers by claiming some proprietary take on doing research: Context-Based Research, PhotoEthnography, Rapid Ethnography, etc. (some of those may be actual methods claimed by actual firms; others may just be me riffing). It’s tough to balance exploring the ideas and staying “on-message,” isn’t it? I guess that’s why I don’t take kindly to the terminology wars; they seem to make it more confusing for people.
So this time you did dodge my question! I’ve consistently heard “design” added to “research” when describing the type of activities we’ve been discussing. Any thoughts on the inclusion of the design label? I know you find yourself in lots of designer-focused events like Design 2.0 in San Francisco, Core77, Overlap… is there really that strong a connection between design and ethnographic research? Why doesn’t this type of research feed business models more than mock-ups? From my experience, the designers eat this kind of data up; the business folks are slow to act on it. What’s your take on that?
One suggestion is that the term is historical. Bringing the tools of ethnographic research into product development was led by a few firms that were self-described design firms (like my old company, GVO) or that had ties to design (Doblin, with their connection to the Institute of Design). I would also say (and this is a gross generalization) that market research tends to focus on the evaluative and design research tends to focus on the generative. That’s more about the goals of the research sponsors than anything inherent in the methodologies (since ethnography is ethnography).
And I think those goals or orientations that differ by discipline will affect the gusto with which they eat this stuff up. In many cases, designers are faced with tangible goals. They are committed to acting, since the product has to be designed, and is going to launch. The information they gain from research can help them solve a near-term problem (i.e., what is the organizational framework for a navigation through a space?). Even though my strategic recommendations can be as tangible as my tactical ones, they ask for actions that are much slower (i.e., launch a newsletter that addresses the transparency concerns of customers), more tangled up in organizational (same example) and resource issues (same example), and with many degrees of freedom to create a good solution (and as non-designers, that can be paralyzing).