5. Kerry McAleer-Forte of Sears Holdings

Today’s guest is Kerry McAleer-Forte, the Director of User Experience Research for Sears Holdings. We discuss how researchers need to think like storytellers, getting at the underlying need behind a research request, and the risk of using research to make recommendations.

Tools are important. It’s important to choose the right one, but at the end of the day, they also won’t replace that thing inside your skull, which is your most important tool, which is being able to observe and listen and pull the important story out for your user. – Kerry McAleer-Forte

Show Links

Follow Dollars to Donuts on Twitter (and Stitcher) and take a moment to leave a review on iTunes.

Transcript

Steve Portigal: Well, thanks for joining us today Kerry.

Kerry McAleer-Forte: Absolutely. Glad to be here.

Steve: So, let’s start, as we do, with a bit of an introduction. Maybe you could describe the organization that you’re at and the role that you have there.

Kerry: Sure. I am the Director of User Experience Research for Sears Holdings, and I have a team of seven full-time researchers and we specifically focus on experience research. By many accounts, it’s a pretty straightforward organization where UCD and UX people are concerned. We sit directly within a customer experience organization that has user experience architects, and designers, and copywriters, and product managers, and product and UX work hand in hand also with analytics, and we do a really wide range of methodologies to support answering the questions that they have around design and around the experience in general.

Steve: You mentioned Sears Holdings–how do we know Sears? What do we know Sears Holdings as, as consumers, or as consumers in North America at least?

Kerry: The two main brands that everyone would be familiar with would be Sears and Kmart since we are under one company. So, the websites, the mobile applications and devices, and then obviously the physical stores are all things that we spend time researching.

Steve: So, the websites for both brands, the stores for both brands, and then the mobile apps or other kinds of tools for both brands as well?

Kerry: Mobile and tablet. Right, because of where our team sits, the UX and the product management does have a particular digital focus. So, most of our time is spent focusing on the website, mobile phones, and tablets. However, Sears has been in the integrated retail arena for a long time now, so we definitely take an integrated approach wherever we can.

We’re looking at the holistic end-to-end experience, so somebody might be at home on their tablet, they’re on the iPad watching television, they see something that’s interesting, and then the next day they’re at work and they pull it up and they’re like “Mm, that’s pretty interesting.” Then, on the weekend, they visit a store to go check out the refrigerator, open the door, and then pull it up on their phone to check out reviews.

So, again, most of our work is done with a digital focus, but it’s definitely running across all of the different channels.

Steve: So, what’s the geographic emphasis for your team? Where are you looking at consumers?

Kerry: The entire US–from coast to coast there are stores, and online is obviously omnipresent, and the demographic of who our customers are goes back for 125 years. So, we’re interested in understanding all of them.

Steve: And we quickly went to talking about consumers, which makes sense since your business is retail, but is that the key focus for you? Looking at–I don’t know what your term is–shoppers, consumers?

Kerry: I think we interchangeably use shopper/user, but specifically member. Sears is a very member-focused organization, so we’re looking not just at a terminal transaction of coming and buying a product one time, but people have had multi-generational relationships with the brand, so we’re focused on understanding how that relationship thrives over time and what its different needs are over time.

Steve: Does your team think about, or is tasked with, looking at, say, the work of someone at a Sears or a Kmart store?

Kerry: So, say, from the associate side? Yeah. Supporting integrated retail requires a lot of digital support in and of itself. Some of the work we do is focused on apps and sites that associates in the stores or in other parts of the company are using to help run the company. But by far the most common type of research that we would do would be around understanding what it’s like to shop at Sears.

Steve: That makes sense in retrospect. I think I have the experience sometimes of talking to people where I know their product or service as a user of it, only to find out that there’s a million things going on behind the scenes that they work on. So, I guess that’s behind my question, kind of where your emphasis is. But really you’re looking at the members and their experiences.

Kerry: Yeah.

Steve: Also at the beginning, you described a little bit about your team. I think you said there was seven of you? Can you say a little more about kind of what the skill sets or roles that you have in that team?

Kerry: The skill sets are really wide-ranging. Like a lot of people in UX, we’ve come to UX research through many different paths, whether it be psychology or rhetoric or something completely unrelated. I’d say that the vast majority of what we do is evaluation of a specific design. So, that might be a current website, it might be a current phone app. Then we also do some generative work, we do some field work.

Again, when we get into the integrated retail point of view, we really need to understand what it’s like to transition from one device to the other, and how those needs change, and where the gaps between them are. So, the team is poised to switch very quickly between different types of tools to meet the different needs of every project that we’re doing.

Steve: I like how you characterize the world of UX and UX research in general. You didn’t use the word “generalist,” but there’s something there about how we all do a lot of different kinds of things.

Kerry: Yeah, if you tried to write the job description–and I have–it gets very, very long and it seems very intimidating because, frankly, to be a great experience researcher, you need to understand experience design and you also have to have an understanding of how this design is relating to a business that needs to support itself and needs to be successful. So, there are a lot of multi-faceted demands put on a researcher, in addition to their immediate job.

Steve: And I like how you’re using the words “experience researcher”–the terms that we use are so fraught, and I don’t mean to pick on you in any way, I’m not asking you to defend that term, but I wonder if you can explain why that’s the term you’re using.

Kerry: Well, to use a technical term, this is a ginormous company and an enormous organization within the company, and there are so many different types of research out in the universe that most people are much more familiar with other types, such as marketing research or analytics, and those things are always very much focused on numbers. So, I do like to stay away from the terms “quantitative” and “qualitative.” Everybody likes a number. Everybody would like to, from a business perspective, have that great confidence you get with being able to say “See, this is 5.2 and not 7.8.”

When you’re talking about experience, you start to throw the psychosocial layers into it and it gets a little bit more difficult to attach a number to. So, by referring to it as experience research, I think that people can relate to having an experience and they relate to it as a three-dimensional thing and it’s involving personalities, it’s moving through time and space, and it’s not just attached to an analysis of ticking numbers at the end. Does that make sense?

Steve: Yeah. And is that a term that you’ve advocated for internally?

Kerry: It’s one that we often use. I think that within our business, the term “usability” has become a little confining, frankly. I think that it has almost some very limited definitions. I mean, our focus at the end of the day is “What’s your question?” and our job as researchers is to figure out what’s the best methodology and the best tool to pair together to answer that question. So frankly, whether it’s quantitative/qualitative, or whether it’s a usability or HCI focus or storytelling, whatever my particular flavor is, I don’t care so much, as long as I do a great job answering your question within all of the constraints I have. If I’ve got three days as opposed to three weeks, my choices are going to be really different, so that’s where I like to keep our focus.

A lot of times people will come to us with requests for assistance and they say “I want to do this type of a study on this to show this,” as opposed to starting from “Tell me what’s the question you’re trying to answer? What is it that’s gone wrong? What is it that you’re noticing that makes you feel like you need some more insight?” You don’t have to worry about what techniques, what tools are out there because people–it’s not their job to be a researcher and know our whole tool box. So, we’re going to put together the best constellation of method and tool to get you what you need. Sometimes they’re really knowledgeable and they know exactly where we’re going to go with that, and sometimes we use a little methodology jazz and we put things together in different ways and hook things up in a way that can get at some of the nuances better than things they were familiar with to begin with.

Steve: Methodology jazz is a great phrase. I love what you’re saying and I’ve been talking a lot about my own terminology, I guess–the business question, the research question, and the research method, and that they’re all related.

The business question is “What does the business want to do?” and the research question is “What do we need to learn in order to help inform or guide that business action?” and the research method is, of course, “How are we going to go about it?” and that there’s a relationship between the three. But I find, like you, that people come in with one of those three and that there’s some work to be done to kind of develop all three and connect them.

Do you have a theory or any thoughts about why do people approach and say “I want to do these activities” as opposed to “I want to answer this question”? Why is that a starting point, do you think, for some people?

Kerry: That’s a great question. I think that in the grand scheme, there are so many different types of research because there are so many different types of questions and it would be unusual for there to be one team that did them all, that did formal marketing research, and analytics, and experience research, and whatever else we’ll all come up with next. So, I think that people–you go with what you know. “I’ve been around, I’ve heard some marketing studies before, so I guess that’s what I need. I need to find out some marketing information,” or they’re familiar with web analytics, so they want to know click rates and things like that.

So, I think that where experience research comes in, it’s often very much a matter of educating them how to work with us, letting them know what our capabilities are and then really acting as their consultant to guide them through–”Focus on your business aspects,” or “Focus on the design questions that you have and we’re going to bring our tools to the table.”

Steve: That’s a very wise approach and I wonder if our field has kind of done ourselves a disservice, especially in our relationships with the media. There’s stories that we think we can get attention for that we like to tell and the media likes to repeat, that “We went and we watched people sniff perfume bottles,” whatever sort of weird activity. That seems to make for a good business article, even though we’ve been reading those articles for a couple of decades now.

People are coming to us and saying “Hey, we want to do a blank,” and I wonder if we’re the ones that are putting those stories out there. It’s easy for me to criticize other people that don’t know and then I have to educate them, and I can feel myself being a little judgmental sometimes of people that don’t know how to make a proper request for me. But as I hear you talk, I think “Well, maybe that’s collectively our fault for promoting those stories that seem to have a little bit of sizzle to them.”

Kerry: I don’t know, I’m going to let you off the hook. I think that they didn’t come to you because they heard about a tool. They came to you because they heard something that sounded promising to deliver them some results. “So, whatever it is you call that thingamajig over there, yeah, let’s use one of those.” I mean, I’m sure people have come to you and said “My business is down, so let’s do some eye tracking,” and you take a deep breath and you’re like “All right, well let’s talk about what’s going on with your business and then we’ll get to the tools last.”

But eye tracking is the most fantastic sales tool ever invented and its actual applications, frankly, I think are really limited. What it does, it does well. So whatever it was that got them in the door, that’s fine. “I’m glad you’re here, I’m glad you’re interested.” The most important thing is they’re wanting some feedback, they’re realizing that “I can’t and I shouldn’t make decisions in a vacuum about how to run my business, about what I think will please and delight my customers.”

Steve: That’s a very empathic way of thinking about providing help. A lot of what we do is a form of service to people that have some other type of question or business need, so I think to approach it the way you are is good–and I’m happy to be off the hook.

Kerry: We spend a lot of time on our team thinking about our own company as our users and that really helps us make better choices about the type of research that we do. Sure, you would love weeks to put together a shining report that would get an A+ if you handed it in some place. Sometimes I say if we take that long, we have, by definition, made ourselves useless to the people that have to be running as fast as they can think. So, you have to figure out what the most important headlines are, get it to them, be okay with a B-.

You know, we’re all overachievers on the team, and that’s hard because you know you can get it a little clearer, you know you can get it a little shinier, smoother, nicer. You might have to leave out a couple of things, but that’s the tradeoff. “What are the needs of our users?” Our users need it fast, they need to get to the heart of the issues, and we’re going to have to bank on the fact that we have a long-term relationship with them: even if we don’t get it this first time, this first report, we’re going to keep engaging with you. We have that relationship and we build the rapport between the engagements. My team isn’t embedded within every single one, but we assign domains to people, so they develop subject matter expertise, they develop relationships over time. And I think that at the end of an experience study, what do you know? We’ve made some observations, we have derived some feedback for them from it.

Our approach is that we do not make recommendations, and that’s where we differ from a lot of researchers and research organizations.

In my early days with Sears, I had somebody come to me and they showed me a report that had been done during a previous time, and somebody had dutifully executed a pretty straightforward usability test, they had observed users doing things, and then they said “Here’s your problem as evidenced by these three observations, therefore here’s the recommendation,” and the person was struggling because there was clearly no causal link between the observations and the recommendations that were being made. I could kind of see where they were coming from, but because it was in a report in a document and it has a great sense of authority once it’s wrapped up in that document, somebody was very dutifully saying “Well, research says this is how we’re going to solve our problem,” and you start marching down that road.

So, we come at it from the point of view of “We need to let you know what we’re seeing, we need to let you know what we think the implications of it are, why these are important things, how they relate to other parts of the experience that we’re also observing, but the actual solutioning is something that we’re going to do with you after this report.”

Steve: So, what I hear you talking about is taking kind of a facilitating role as opposed to an instructive role. So, rather than saying “You should do X,” you’re saying “Hey, we’ve learned these things and we want to work with you to help determine how to act on it.”

Kerry: Exactly, right. When we’re working with newer teams, an analogy I’ll use a lot is “Think of us as your radiologist. A patient comes to us, we’re going to study them, and then we’re going to come away, we’re going to tell you where it’s broken, the nature of the break, how bad it is and then we’re going to work with the surgeons–the plastic surgeons, the orthopedists, those are the specialists that are going to come in that are going to be much more familiar with maybe things the patient has already experienced or maybe things that are more realistic within the whole context of running the hospital.” So, we’re coming in to help identify and then we’re going to then consult when it comes time for the solution, but not actually be the ones to deliver it.

Steve: So, in this analogy, do your users, your internal clients, do they have access to surgeons?

Kerry: Yes, absolutely. So, many times the surgeons are the ones that have come to us requesting that the patient be studied. We’re working with user experience teams made up of the user experience architects, and the designers, and the front-end developers, and the copywriters, and then that team is working closely with a product management team that’s very focused on understanding the business analytics and the business goals behind that.

Steve: Can you describe, in as generic a form as is appropriate of course, some kind of narrative about a project that your team was involved in?

Kerry: Sure. Our typical choreography is that for our major teams and initiatives, we’ll have a researcher assigned to that domain. So, one domain, for instance, might be top of funnel, which for us is from the home page to the cart, or it could be bottom of funnel from cart to checkout. Our structure is set up that way because it mirrors the way other teams are set up. So, again, we’re not embedded per se–I wish we had those numbers–but you get a general domain. So, you meet with your domain teams and you start to formulate. I’d like to call it a road map, but being in retail, you have to be very, very flexible, so we look at a forecast of what’s coming up. There’s two-way communication between our team and the other teams. Teams come to us and they say “Here’s a project we’re working on and we know that we would like input beforehand, or input during, or doing some sort of prototyping.”

The other form of communication is as the researcher develops experience within the domain, they have the advantage of not being as in the weeds as the other people are about the details and the requirements and the system operation, and it allows them to have those fresh eyes to say “Hey guys, you know what? Those are important things but we’re also going to study this,” and so they will propose additional studies many times that are needed to really get a great understanding of whether or not something is successful or the domain is working well.

And then as we get closer to, say, a particular project, we’ll work with the team on starting, first and foremost–again, going back to your question–with “What are the research questions that we’re going for?” That’s really, really important because, by definition, we have to move extremely fast because they do. Identifying and articulating “Here’s what we’re answering with this study” therefore means that we are not answering all those other great questions; they’re not listed right here. Then we follow the algorithm: “Here’s the question. What measures can we gather to answer this question? Okay, what tool is the best tool to carry out those particular measures?” Then the researcher comes up with a test plan, reviews it–this is all very tidy assuming we have a tidy project–and they run the study.

I think that one of the things that has surprised me over the past five years is how much our focus has shifted from using our in-person usability lab to using remote tools. We still use our lab, it’s still a great asset to have to be able to do these in-person studies, but remote tools are where it’s at for our team. They allow us to do an incredibly fast turnaround in terms of getting out to finding the right kinds of users, getting the stimulus in front of somebody, whether that’s a live site or a prototype–it could be a low fidelity thing that we get out there very quickly. Then we’re focused on getting that report back to the teams, again, extremely fast.

So, from the day of the launch of a remote test, after we’ve set up the test, we are able to get it out there, get all of our tests completed and the videos back–the longest within a day, usually much faster than that–and we’re going to have the report back to the team, fully annotated and with video clips as well. We’re going to have that to them anywhere from two to four days after that. I think our land speed record was four hours from beginning to end. We just needed to study the interaction of one particular toggle, so we formulated the test, ran the test, reported it and got it back to the team within four hours.

Obviously there are going to be studies that take longer. Field studies, for instance, or diary studies–something that unfolds over a number of weeks. Then once it’s reported–we always, always, always have a report–we meet with the immediate team and we present the report and then that’s when the discussion starts. “All right, here’s what we know. Now what are we doing about it?” Then we become consultants and cohorts with the design and the project management teams to talk through the options.

In addition to presenting directly to the teams and working with them, we maintain an online wiki where we publish our research. So, while teams are focused very much on the things that they’re doing, we know that they can also learn from things that have been done before, and be aware of things happening at other points of the experience. The wiki is complete with all of the video, so people can go in and search themselves and pull things up, and it’s really important for scalability because, even with a great team of seven, you can’t begin to meet the research needs for every team for every project that’s out there. So, we get a lot of support leveraging that.

Steve: What are the attributes of less tidiness?

Kerry: Less tidiness? I think a good experience study is predicated on being able to get the participant to play along in your land of make believe while skewing them as little as possible. We know just by participating in research, they will, of course, be somewhat altered in their behavior, but I think that some things are easier to get them to fake than others.

So, if we say “Pretend you feel like shopping today,” that’s not as big of a stretch as saying “Pretend like you really want to figure out how to use a coupon today,” because the first just presupposes that you’re somebody who shops, which is pretty common behavior and something that you can actually screen for rather easily, and therefore write a scenario to, write a task setup for fairly easily. The more specific or narrow the topic–when you get to things like, say, a coupon–you’re really telegraphing a lot of information to the user, first of all, about what’s going to be going on and what you want them to look for. In these types of situations it’s hard to set up a study so that you can be pretty confident that “if I have them do this, they’re going to end up doing what I hope they’re going to do,” because you can’t just say “What do you think of this?”

But sometimes it’s tricky, and sometimes at the end of the day what the team wants to know is “Which is better, A or B?” and a small sample test isn’t a great way to assess that. Like I say, “We don’t pick winners. We’re going to let you know where the strengths and weaknesses are in each of these, we’re going to let you know what people did and how people reacted to them, but I know what you want to know, is you want me to point to one and say ‘Do that one, go,’” and so sometimes the untidiness, the messiness comes from having to squirm a little bit and say “Sorry, love you, mean it, but we’re not doing that. We’re going to answer this for you instead.”

Steve: I love to hear that given the timeline that you are working on, which is quite amazing, there’s a lot of investment in your limited timeframe in aligning on the problem and the approach, and I think you said something about “We’re going to answer this, we’re not going to be able to answer that right now, and in order to answer this, we’re going to do this other thing as well.”

So, you’re really focusing on having that conversation to figure out together “This is what we’re going to do in order to address your question,” and that seems like that’s a hard space for some teams, that they’re trying to get away from sort of taking dictated requests. But you’re really bringing the expertise and working together with them to figure out, like you said before, “What are your questions and how are we going to get there?”

Kerry: Yeah, and we can do that because we have an ongoing relationship as an internal in-house team. If you have an organizational model where, say for instance, you know you’re only going to get one study, then you’re going to be really, really driven to answer all of your questions and to cram as much in there as you can.

Whereas if you have an ongoing relationship and hopefully, ideally, you’re not going to get just one study, you’re at the very least going to have sort of an ad hoc relationship with the research team subsequent to your study, there’s not as much pressure on that, and I think that people are a little more open to some mentoring about how to approach breaking down what their question needs are because teams know what they need to know. They’re usually right on. They usually just have one hundred questions and we need to narrow you down to ten per study and then go from there.

Steve: Anything else about this relationship that you have with these teams? Because this seems to be really key to how you’re being successful and what you’re trying to accomplish. Are there other things that are going on in that relationship maybe outside–you’ve mentioned the wiki and you’ve mentioned sort of what the choreography of what the project is. Anything else?

Kerry: I think that some of the greatest success relies on having a great communication flow with our teams, but I’ve been really, really passionate that we remain a separate organization, that we remain a separate team, because as much as we want the company to succeed, we’re Switzerland and you look at our flag and you see the face of the member on it. So, we really need to have kind of this dual relationship of having great open communication and yet I need to be just as excited to deliver terrible news to you about your study as I am to deliver good news.

I also want our teams to succeed in being able to explore their ideas, but when somebody comes to me and says “Hey, I want to run a study because I need some ammunition for fill in the blank,” a flag goes up and “Well, okay, I’m not going to worry about what you’re doing with this. What I want to know is what is your question?”

Everybody has heard that Switzerland reference a million times around here because they get the fact that to have research “in your pocket” does nobody any good. We have no interest in that, and it’s very dangerous. We run the risk of being seen as biased if we’re only attending to the needs of one group or one particular kind of initiative.

Steve: What you’re describing, the Switzerland example, reminds me of something from Michael Kronthal, who up until recently was in a similar role to yours at Yahoo. I saw him give a talk and he had this really cool model, which I’m not going to be able to replicate properly and I haven’t been able to find online–so maybe as a result of saying this someone will dig it up for me–but he described three or four different models for what the relationship is between the researcher and the user–the member, in your parlance.

One of them was sherpa, where the researcher brings the internal team kind of to that world of the member. He had these great graphics that kind of showed different organizations that can work in different ways, and it wasn’t that one was better than the other. I think that Switzerland aspect is in there as well. We’ll have to find that. I haven’t been able to. I’ve been looking for it for a while because it was so concise, and when you saw it, it was like “Yeah, we do all those things and I’ve worked with groups that are one more than the other.”

Kerry: Yeah, that’d be interesting.

Steve: But I like the Switzerland thing because I think that speaks to not only sort of what the process is but much more what the mindset is, that you are there to kind of bridge and represent and enable and connect these different constituencies.

Kerry: A good test doesn’t always mean good news, but you’re definitely going to benefit from getting your bad news because there’s no escaping it if you let it out without changing it.

Steve: Can you talk a little about the history of experience research within your organization?

Kerry: I think it’s a really exciting time to be a researcher within retail in particular because there’s so much change going on. I need a word that’s bigger than “change.” Five years ago when I first joined, there was a team of one and I became the team of one, and what you had were a lot of people that, like a lot of other organizations, were used to being hybrids, so I’m going to be a UXA until I need something tested and then I’m going to put on a researcher hat and then I’m going to do that.

There’s definitely a benefit of being able to knock off a test on your own and know how to do that really well. But there was a definite craving in the organization for more feedback. We tend to use the word “feedback” rather than “data” or “research” because I think that it allows you to be more flexible in what you actually mean by that. Feedback can be something you’re getting from ad hoc research or it could be something from a more formalized study.

But what was clear–obviously, how much can one person do? So, it all comes down to scale and the appetite to invest. So, no big news to anybody listening to this podcast, it can be difficult to convince people to invest in research or invest in experience research in particular. They can feel a little more comfy investing in marketing, say, or investing in analytics because it’s very literal and it feels very knowable in many ways. In order to be able to feed the starving kids at the door, we needed to find a way to start to prove the value and to prove some sense of scalability, that it’s not something that’s going to take this massive investment, it’s not going to slow down timetables, that there’s a way that you can have a really practical, savvy, lightweight research practice while still delivering great results and still be thinking about great design experiences.

In the earlier years it was very focused on “How light can we go? How short can we go? How fast can we go?” because I think you can go too fast, especially if you skip those early important steps of focusing on “Okay, let’s clarify, what are we doing? What are we answering? Okay, go.” You do tend to get your favorite methods and your favorite routine, and sometimes if you cut the wrong corners, there’s a “gotcha” at the end of “Oh, that’s not what you actually wanted to answer. You wanted to answer something else.”

So, a lot of my focus on remote tools and lightweight methodologies really came from needing to attend to the needs of our users who are our internal teams. This type of information and feedback is such an easy sell. Once people get it, once they experience it once, typically people are like “Well, I would never want to do a project without that again” because it’s so enlightening, and it’s enlightening in a way that’s, I think, even more approachable and even more knowable than a number at the end of the day. If I watch somebody doing something, if I’m watching a twenty-second clip, cognitively I’m getting so much more information out of that than looking at a chart with a line. Charts with lines are really important but they’re only part of the story, so we’re really able to deliver the other side of that.

As demand grew and as confidence in the leaders grew, some other very important leaders came into the organization, and they were pivotal, coming in from organizations that didn’t have the kind of long, long legacy that any company with a longstanding history carries. It’s a double-edged sword–you’ve got great culture and traditions, and sometimes there are routines that are very difficult, or attitudes that are very difficult, to change. So, leadership coming in from other, more new-world organizations really brought the point of view of, “Well, of course, it’s just table stakes. Of course you have to understand the experience,” and so the team started to grow–the core team, meaning the team that’s attending to the universal websites, the universal applications, things like that.

Then we started to develop such great relationships with some of our internal businesses, and to us, an internal business for instance would be hard-lines businesses, or soft-lines, clothing. If we’re researching what’s a great shopping cart, it’s not necessarily speaking to all the most important needs of somebody who’s, say, shopping for clothing, so we started establishing relationships with some of our other business units to focus on other types of things. That allowed our team to grow even more–people who could specialize not just in an area of the funnel but in a particular point of view or type of shopping.

The other part that’s been a really interesting space for us to grow into has been the non-shopping sites. So, Sears Home Services, Sears Parts Direct. If you’re thinking about it as a lifelong relationship with the brand, where I’m getting to know a brand, I’m making purchases, and then I’m maintaining and taking care of not just these things but my entire home over a lifespan, there are some really great opportunities that come up there. Once we started to connect and establish relationships with those teams, we were able to grow even more, and we look forward to continuing to grow, because I think that the spaces where we need to answer questions about experience are just ballooning with all of the different social and technological advances.

I think that we’re at this really exciting doorway right now into the land of the internet of everything, where what happens when everything you look at is connected and can speak to everything else? I think that’s, for researchers, just fascinating, and I think that it’s also going to be really interesting to find out what we have to let go of and what’s not going to apply anymore, and what new lessons and paradigms and principles are going to come from the new ways of being.

Steve: Your team has been growing and you’re looking forward to this imminent growth as the topics explode–who and what makes for a good addition to your team?

Kerry: A type of researcher?

Steve: A type of person or a type of researcher. What is it you’re looking at?

Kerry: That’s a great question. I’ve been really lucky to find some really special researchers on my team, and I think that what they all excel at is regardless of their beginnings, regardless of what their undergrad or their graduate degree is, they really understand–people are going to click off this podcast–they understand why the research question is important.

You can do methodology jazz, but you have to understand your scales. You have to be able to do the basics. So, as long as you have a really clear understanding of the academic principles behind qualitative research, research of any kind, scientific theory, the scientific method, then I think that frees you to explore and to play around. “Oh, there’s a new tool out, there’s a new remote, going-to-solve-the-world-and-run-your-business kind of tool out. Does it work, or doesn’t it?”

I think that you have the foundation of the classical theory, the foundation of the academic, and then the curiosity to really play around. Not that we don’t take it seriously, but I think that if you want a template–a template should only ever be your springboard–so, to think that we would do the same method, the same tool forever, or for most things, to me, is missing out on one of the most important parts of what we do, which is to pay attention to the nuances and pay attention to the direction, to the trend. Because if you’re expecting everything to fit into this box, you’re not open to spotting “Oh, that seems to be heading over there, I didn’t expect that. We need to follow that, that’s important.” So, you have to be okay with that uncertainty and chase things down as they emerge.

Steve: Is that what you mean by “play”? Is that uncertainty and looking for those moments and following them up?

Kerry: Yeah. I think that there are many different personality types that can make a great researcher, but for me, somebody that needs things very checklist-oriented, “I’m going to do this, and this, and this, and this, and therefore I will have a great study”–checklists are great, but they’re very two-dimensional. It’s not just thinking like a computer, it’s not just thinking like somebody who’s going to tick off boxes. I mean, you’re really thinking as a storyteller. You’re looking at the entire stage, you’re looking at the entire scene and understanding “I’m watching this thing, but this thing is happening in a much bigger context.”

That possibly only makes sense to me. A lot of what we do is about overlaying structure and limits and very definable things on experience, and experience–it’s messy, it’s emerging, it’s fluid, it’s dynamic, so I think that you need to be comfortable in both worlds: knowing the importance of the constraints, knowing the importance of articulating the structure, and not suffocating it in the process.

Steve: I almost feel like Gollum, kind of holding onto this idea that instead of “Precious,” it’s “creative.” I think “Oh, researchers, creative, researchers, creative,” because research informs people who make things and who think divergently, and often our work is to kind of facilitate that, and sometimes I feel like “Oh, we don’t get credit for being creative,” but I haven’t necessarily articulated that.

Certainly you’ve given a really just lovely kind of call to creative arms for what that play means and what the thinking is that researchers have to have in the work that we’re trying to do now.

Kerry: Yeah, I think you have to be–if you aren’t creative, if you’re somebody who really does need that unchanging rigid structure, then my guess is that you’re going to have some pretty limited success in what you do just as a business person would.

It would be “If business were that easy and that predictable, everybody would do it well, right?” But it’s not. It’s like they say in medicine, “You’re still practicing medicine.” You don’t quite ever know exactly what you’re doing but you work through it.

Steve: What was your background? How did you get to be great in the way that you are great?

Kerry: I, like many of the researchers on the team, have kind of a winding background. I started off in undergrad in theater and I was very interested in theater history, and performance, and design, and ended up working in film production for quite a few years after that and had a blast. It’s like going off with this very gentle, non-violent army and going to war, and you go to war for a couple of weeks and then you have something in the can and then you’re done and you move on. That was very interesting, but most of the work I did was in advertising, and I think that there was a part of me that felt like I wanted to branch out a little bit more, because advertising is, by design, pretty predictable a lot of the time.

I ended up connecting with an arts organization here in Chicago that I ran the programming for and it was really interesting because it was arts education for kids but it had a jobs training aspect to it. So, it was paying kids minimum wage to be mentored by a professional artist and in all of the different kinds of arts. So, what a blast, but some really, really interesting things caught my attention as I was there over the years.

We had an inclusion program where kids with physical and cognitive disabilities were mainstreamed with all of the other kids and we would start most of our programming–it first took place in the summer, and in the fall we would start to get calls from teachers saying “What did you do with this kid? Johnny used to be limited and had trouble relating and would be very antisocial and now he’s flirting.” So, we would hear these great anecdotal stories about kids that, after having gone through the two-fold arts experience and also this social experience, just would have enormous personality and brain development, cognitive development, and that’s kind of what drew me back to grad school.

My graduate program was an interdisciplinary mix of cognitive science and instructional design and social context, which are my three favorite flavors of soup. I got to grad school never having heard of Don Norman and then read Don Norman and said “That’s it.” So, I became hooked on research and thinking about experience since then and have worked in a number of different sectors of digital experience research since then.

Steve: What kind of soup are you serving in ten years from now?

Kerry: Oh, ten years… You know, that’s funny–I think my brain is wired to really live very much in the now. I’m often not the most accurate future-thinker. I can spot immediate things, but I do remember when somebody first described email to me, I kind of scrunched up my face and said “What would you do with that?” So, maybe not the most visionary.

All I can think is that the pace of technology becoming invisible to us is just astonishing. Whether it’s physically invisible to us, we just can’t see it, it’s embedded, it’s the internet of everything, it’s my refrigerator talking to the grocery store and things just show up, or whether it’s the cognitive invisibility where you’re just “Of course it’s not unusual to send an email now, of course it’s not unusual to think that I can show somebody my computer screen on the other side of the globe now.”

I think for me in ten years, I’ll just continue to be fascinated in watching how people’s perceptions and behaviors change according to how technology is really transforming the world around us.

Steve: I love that, that was great, and you didn’t mention soup, so that was my bad in trying to belabor your lovely metaphor.

Kerry: That’s okay. I love cooking, we’ll talk about soup after this.

Steve: What didn’t we talk about in this conversation that you think we should cover?

Kerry: Yeah, ask a researcher to ask you a question and you’ll never stop. “Do you have any questions?” That’s a dangerous question.

First of all, I’m really excited by this series. I think that very often researchers tend to be off in a room watching video and analyzing things, so it’s exciting to hear more about the practices and just the people that are in this space around the country, so thanks to you for that. So, after you’ve been listening to us yammer on for a while, I’m interested right away–what are the things that are surprising you, either trends or topics or points of view you didn’t expect?

Steve: Wow, turning it around. Yeah, you’re right, researchers are dangerous with each other.

Yeah, I mean, I’m in this interesting position, as I’ve talked to a number of people now, and I think this is what happens any time you do research: you can feel that there’s something to synthesize. There’s sort of a “spider sense” that I think I get when I have parallel conversations with people in related roles, and sometimes I try to keep myself living in the data. “Don’t go there yet”–of course, when you have the luxury of that time.

So, I get these little signals here and there, like “Oh, that’s a thing that someone else said,” “Oh, that’s very kind of different,” and it’s funny too because I’m trying to be present in our conversation, but yet I keep having these little moments where my eyebrows are like “Oh, wow, that’s the pull quote, that’s this,” and now you’re asking me kind of at the end to go back to that, and I’m like “I’m right in this part of the conversation thinking about soup and the future and your phrase ‘the internet of everything.’”

I’m just going to defer. I don’t know if I can answer that question right now, because that’s the question I was going to ask you too, is like “Where do you see some of your unique things?” You said early on that the balance is really on the evaluative versus the generative, but I feel like–and I’ll just expose a bias–I feel like you talk about evaluative work with the soul of a generative researcher.

Kerry: Oh yeah, for sure.

Steve: Because I feel like evaluative research can get a bad rap, that it is kind of convergent, it’s not nuanced, it’s not rich. It may be actionable but it’s sort of a subset of the problem. That’s a bias that I have in general, and I’m sort of hearing my bias come out in the way that I hear you talk, because, again, the deep focus on building rapport and framing the problem–I love that, and that’s sort of research working at its best with teams.

So, that is a really important theme that I’ve heard you articulate. And I’m not saying that that’s a gap with anybody else, but you just said it kind of in an interesting and, I think, unique way. There’s probably others but I think I’m kind of rambling here, so I might stop.

Kerry: Well, I think it’s interesting for you to say you’re calling yourself out on your biases and turning your nose up at certain types of things. I think earlier you heard me throw some shade at eye tracking and the difficult relationship I have with that, and I think that marketing people roll their eyes when you talk about being able to understand anything with five people in an experience study, and analytics people do too.

So, I think that it is really easy to be dismissive of other genres really, and I think that while I definitely am the worst possible candidate to be a data analyst, my best scenario is to link pinkies with them and say “Thank God there are people that like to do that, because we need that too.” You can’t just watch five people and know everything. You can’t just look at numbers coming off of web analytics and know everything.

I say “I know a lot, but I don’t know everything,” and there are other people that know things, and so I think that while it’s good to have a healthy scrutiny of what something’s good at and what it’s not good at, it is important to think about, to use our favorite geeky term, triangulating all of them, because no one technique or genre of research is going to answer every question you could ask. So, we all have our piece of the puzzle to take on.

Steve: You said that really well earlier on too, when you talked about figuring out the right methodology, and I totally agree. That’s an activity that I’m involved in every time I plan to do something. But also, as you talk, it sort of raises one of my fears for myself and my own career arc, I guess: the tendency to be the person with the hammer who sees everything as a nail.

Obviously there are many hammers of different sizes and so on, but I think the tools and the approaches are continuing to evolve, and I think the way that you and your team–you have some diversity there in the way that you can “link pinkies,” as you say, to bring many different kinds of approaches to bear on different kinds of problems. It makes me feel a little anxious, but I have great respect for how you’re describing it. It sounds great, like that’s just the way to go.

Kerry: Yeah, and I’m laughing because every time I find a new gadget, and there are scads of them coming out every day, it’s the same thing–suddenly the world is a bunch of nails and we have a new hammer.

We have a lot of different tools at our disposal but I also very purposefully keep the toy-box kind of small because they’re often variations on a theme and there’s never been a tool invented in the history of mankind that does exactly what it set out to do or is as automated as they say it is.

So, tools are important. It’s important to choose the right one, but at the end of the day, they also won’t replace that thing inside your skull, which is your most important tool, which is being able to observe and listen and pull the important story out for your user.

Steve: I think that’s the great high point to leave it on. So, let’s say thanks and goodbye. I appreciate your time, and really, this is a great conversation and really insightful stuff, and I thank you for sharing it with me and with everybody.

Kerry: This was a blast. Thanks Steve. I can’t wait to listen to all of them coming up after this.

Steve: Very good.

About Steve