Posts tagged “survey”

The Normal Vibrations

In Jimmyjane’s Sex Change Operation I described how Jimmyjane had used design to normalize sex and sexuality, shifting its meaning away from “dirty.” Although Jimmyjane isn’t mentioned specifically, this article further illuminates that cultural moment:

“What this tells us is we’ve reached a tipping point,” said Debby Herbenick, an author of the studies along with her Indiana University colleague Michael Reece. “Something once regarded as exotic has become commonplace.”

The surveys, conducted in April 2008 and paid for by Church & Dwight, which makes Trojan condoms and a line of vibrators, document vibrator use and the related sexual practices of 2,056 women and 1,047 men; 93 percent of those surveyed said they are heterosexual.

The researchers attribute the widespread use to easier availability and a cultural shift away from the bad ol’ boy, Triple-X-rated sex toy industry. Vibrators are now sold at Wal-Mart, 7-Eleven and CVS; new Internet sites for sex products feature middle-aged models and aim at mainstream couples. Several companies market sex toys to women as young as sorority sisters and as old as postmenopausal golden girls through Tupperware-style home parties.

“You can now buy your toothpaste, shampoo and vibrator at the local convenience store,” Dr. Herbenick said. “They’re not hidden in a dark corner of some adult store.”

This is the first vibrator research based on a sampling reflective of the nation’s demographic mix, so there is no means of authoritatively measuring changing use over time.

ChittahChattah Quickies

  • NEA Highlights from 2008 Survey of Public Participation In The "Arts" – There are persistent patterns of decline in participation for most art forms such as classical music, jazz, opera, ballet, musical theater, dramatic plays, art museums and craft/visual arts festivals [Seems a rather limited/traditional definition of "art" – no popular music? no stand up comedy?]. Fewer adults are creating and performing art. Weaving and sewing remain popular as crafts, but the percentage of adults who do those activities has declined by 12 points. Only the share of adults doing photography has increased – from 12 percent in 1992 to 15 percent in 2008. Aging audiences are a long-term trend. Performing arts attendees are increasingly older than the average U.S. adult (45). The aging of the baby boom generation does not appear to account for the overall increase in age. Educated Americans are participating less than before, and educated audiences are the most likely to attend or participate in the arts

ChittahChattah Quickies

  • FitFlops – the FlipFlop with the Gym Built In – “What we girls really need is something like a flip flop that tones and trims our legs while we run errands. We have no free time…We Want a Workout While We Walk!” FitFlop midsoles incorporate patent-pending microwobbleboard™ technology, to give you a workout while you walk. One woman reported feeling like she’d had a ‘bum-blasting’ workout after half an hour of FitFlop-shod walking.

    (Thanks to CPT!)

  • Love Land, first sex theme park in China closed before construction completed – Photographs showed workers pulling down a pair of white plastic legs and hips that appear to be the bottom half of a giant female mannequin towering over the park entrance. The mannequin is wearing a red G-string. The park manager, Lu Xiaoqing, had planned to have on hand naked human sculptures, giant models of genitals, sex technique “workshops” and a photography exhibition about the history of sex. The displays would have included lessons on safe sex and the proper use of condoms. Mr. Lu told China Daily that the park was being built “for the good of the public.” Love Land would be useful for sex education, he said, and help adults “enjoy a harmonious sex life.”
  • Air Traveler Satisfaction Goes Up? Look Beyond The Data – The airline business scored 64 out of 100 in the first quarter of this year, a 3.2% increase over the same period a year ago. Airlines were still among the lowest-scoring businesses in the index, which measured customer satisfaction with the products or services of hotels, restaurants and 14 other sectors. Full-service restaurants scored highest at 84. Airlines scored far below their own index high of 72, achieved in 1994. "It certainly looks like most of these increases, if not all, are due to lower passenger load," says Claes Fornell, professor of business at the University of Michigan and index founder, noting that the recession has kept many Americans from traveling. The lower number of passengers "means more seat availability, shorter lines, more on-time arrival, fewer lost bags, and all that probably adds up to a slightly higher level of satisfaction." He noted that a reduction in the number of flights offered could erase the slight gains achieved in passenger satisfaction.

Are Americans Falling Out of Love with Their Televisions?

The latest Pew study asks about what Americans see as luxuries vs. necessities, as part of a longitudinal study of attitudes towards major categories of goods.

Clear majorities in polls conducted since 1973 have said that their TV set is something they couldn’t do without. Yet the latest Pew Research Center survey suggests Americans’ long love affair with their TV sets may be cooling.

Whether prompted by the recession or by the lure of new computers and other devices that can display TV programs and other streaming video, barely half (52%) of the public now say a television is a necessary part of their lives. That’s a decline of 12 percentage points since 2006, and the lowest proportion on record — even lower than the 57% who said a TV set was a necessity when the question was first asked in 1973.

Young adults have led the march away from the TV screen: Only 38% of those 30 or younger say a TV is a necessity, a 15-point decline since 2006. In contrast, perceptions of a television set as a necessity declined by just 6 points, to 68%, among older respondents.

Now far be it from me to impugn Pew (who seem like they do really smart and interesting pulse-taking research), but as of 2007, 99% of US households had at least one TV, and the average household had 2.24 sets. So what’s the relationship here between what people say and what people do? If you’ve already got a TV set, how hard is it to say it’s not a necessity? [Of course, more people are getting video content online, so that’s part of the reason for the drop, and Pew accounts for that, but I’m looking at the other issue.]

I think we place a lot of extra importance on self-reported survey data, where people express opinions out of context. There’s no behavioral data here about what people are actually doing (e.g., selling their TV sets to buy something more important, or holding off on buying new TV sets). If people respond to the question about the importance of the TV in a new way, does that really mean the perception of the TV has changed, or does it point to a different way of answering the question?

What do you think this bit of data means? What are the consequences or impacts? Who should be taking notice of it, and what should they do?

Thank you for voting

Thank you for voting, Green Valley, AZ, January 2009

An interesting way to toot one’s own horn. This sign in Papa Murphy’s prominently yet graciously thanks us for voting for them as BEST PIZZA CHAIN in America. To paraphrase Monty Python, I didn’t vote for them. Did you? In fact, a little investigation reveals that this was a customer satisfaction and preference survey by Restaurants & Institutions Magazine. A survey is not an election. No one voted for anything.

R&I’s Consumers’ Choice in Chains survey respondents are a representative sample of U.S. consumers weighted to match the population by age, gender, household income, ethnicity and region. In all, 3,132 adults provided data about their awareness and patronage of more than 200 of the largest U.S. chains. These brands were selected for inclusion based on rankings in R&I’s 2007 Top 400 Chains list. The margin of error for this data is +/- 2%.
To gauge customer loyalty, respondents who patronized a chain in the past year are asked whether they intend to return. In addition, guest satisfaction on eight attributes is measured through customers’ ratings of each chain they patronized. To derive overall scores, performance on the attributes is weighted according to the category. This is done using separate ratings that consumers provide to indicate the importance of each attribute in selecting a restaurant in a given category. The weighted overall score can be used to compare chain performance across segments.
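As a back-of-the-envelope sanity check on that stated ±2% figure, here is the standard worst-case margin-of-error calculation for a proportion. This is a rough sketch only: it assumes a simple random sample, which the weighting described above technically complicates.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion estimated from a simple
    random sample of size n, at 95% confidence (z = 1.96).
    p = 0.5 maximizes the variance term, giving the most conservative bound."""
    return z * math.sqrt(p * (1 - p) / n)

# R&I's reported sample: 3,132 adults
moe = margin_of_error(3132)
print(f"+/- {moe * 100:.1f}%")  # prints "+/- 1.8%", consistent with the stated +/- 2%
```

So the reported ±2% is in the right ballpark for a sample of that size, before accounting for weighting effects.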

I applaud Papa Murphy’s for trying to induce a sense of participation in their patrons, reframing an external assessment as something we can feel some involvement in, thereby sharing in their success. But the fact that the claim doesn’t stand up to even a little scrutiny reveals them to be a little bit dishonest. Almost, but not quite.

See previously: Local Starbucks exhibits passion for their customers

The space between yes and no as a local indicator

While in the UK recently I took advantage of an extremely rare opportunity to tour the long-closed Battersea Power Station. It’s an iconic part of the London landscape, known to many for appearing on the cover of Pink Floyd’s Animals.

The tour was basically a community open house, to try and drum up support/input for the redevelopment plans. Visitors were asked to complete a survey…


…and this question caught my eye:


I really got a kick out of the localized UK English choices for the responses.

Also: see my pictures from the Battersea Power Station here and more of my London and Sheffield pictures here.

Previous posts on surveys:

Experienced pollsters know: people “lie”

As I’ve said before, garbage in, garbage out. From Rob Walker’s Consumed:

Recently, Stardoll did a study of its own, polling United States users about their brand preferences. Apparently they saw real-world brands on the same plane as the half-dozen or so invented brands that exist only within the site. (Some respondents even made the – clearly impossible – claim that they wear the strictly digital Goth-style brand Fallen Angel to school.)

These sorts of stories always crop up in market research and business case studies. And they are wonderful because they illustrate the depth of meaning that the products, services, brands, and stories we create can hold for the people who consume them. So meaningful that they will conflate pretend brands online with tangible experiences offline. Wow, we marvel, that tells you how great our stuff is; they will lie about it.

But the flip side to that is that if you are going to ask people what they think and do and want, you better have a way of triangulating their responses against other data. If you don’t know more about the person than their response, how can you contextualize it? If you don’t know what they are really saying when they answer the question – if they understood the question or are answering it in the way you intended – then you must be very careful in what you conclude and how you act on those answers.

Research screening

I was bemused to see that Feast of Love opened last weekend. Our last time at the movies was when The Simpsons Movie opened, and I participated in some intercept-market research at the theater.

Part of the lobby had been given over to these groovy-looking kiosks, with a couple of guys in attendance, asking passersby if they would like to give their opinion about an upcoming movie. My age and gender qualified me to participate (woo hoo) and I went with one dude over to a kiosk. I was shown a couple of clips and responded to various questions, but the weirdness of it was that the test was designed to have some screens operated by me, and some screens visible only to the interviewer. But they didn’t do it that way. So for various pieces where I was to click among multiple choices, the interviewer, who knew the testing software rather well, just whipped through the keypresses, bam->redraw, bam->redraw, quickly asking me the minimum to move to the next one. Okay, so he took care of it for me. But then this screen we were both looking at would display testing instructions such as ASK PARTICIPANT FOR OPINION OF BENEFIT OF DATE MOVIES. PROBE ON RELATIONSHIP, TIMING, COST. And of course, he wouldn’t even come close; he’d get the one-line answer from me, and then he’d type in the quickest condensation of my answer: stay home.

After a minute or so, it became more about the two of us cooperating to use the software to get through the test. I realized that my opinion didn’t matter; it’s hard to feel represented in a forced-choice discussion, and it’s unlikely one would continue to provide color when all that gets captured is minimal facts. Further, by exposing the instructions to me, his shortcuts became clear, and I ended up slightly co-opted into the testing process, giving up any sense of really delivering the full truth to this interviewer.

When we see “market research” numbers published to support some business decision, let’s keep in mind how poorly that data may have been collected (from the concept of how to collect it, to the implementation of a data collection environment, to the staffing and execution of the data gathering). How reliable could any of this possibly be?

Foreign Grocery Sp@m

Hot on the heels of my Foreign Grocery Museum article in Ambidextrous Magazine, I received a piece of spam informing me of the availability of Poppins cereals in Kuwait.

I really like their enthusiastic descriptions of the benefits provided: true value for the money, great morning start, all the energy it needs, essential for the growth of children, etc.


And if we needed further proof that they were watching this blog, the email asked me to take a survey about Poppins. We [heart] surveys!

Does calling it a report card make it not a survey?

Transit Chief Plans to Ask Riders to Grade Subway and Bus Lines

Riders on each line will be asked to grade different aspects of service, including the cleanliness of cars and stations, safety and the responsiveness of employees.

He said he would also ask riders to list the three things that they thought most need to be improved.

“I want to know what passengers want,” Mr. Roberts said yesterday during a wide-ranging interview that touched on topics as diverse as dirty subway cars and his affinity for the poetry of Robert Frost.

“I think too often people sit around in offices like this and say, ‘O.K., I know better than the customer what it is they want and so this is what we’re going to do.’ I want the customer to drive the priorities.”

…He envisions cards that would be handed out to riders as they exit stations, and which they could fill out and mail in at no cost.

The impulse is good, but broken. Roberts realizes that the truth about riders/customers is not in his office but is “out there.” In the subway. With the riders. The real people.

So what does he do? He sits in his office and creates a piece of paper that will be given to those riders. The paper will be sent back into the office where people in the office will look at the paper and make decisions about what to do.

Why not go out of the office and talk to the riders while they are riding? Take that impulse, Roberts, and follow it to the next level!

Survey Revenge?

I’ve written so much about surveys as of late and was so amused to receive one in the mail from the research arm of my HMO, Kaiser Permanente. The focus of the survey is Genes, Environment, and Health.

It’s a very sophisticated effort, with a two-page FAQ (some parts are interesting/amusing: Does this research involve cloning, or stem cells, or genetic engineering? and What do you mean by “genes” and “environment”?). See Page 1 and Page 2 for the full FAQ.

Here’s the last page, showing questions 30 through 37.

This is a serious survey: it’s obviously been assembled by specialists in medical quantitative research, and it has no doubt gone through human-subjects approval, ethics guidelines, etc. I’d say this is an example of survey best practice. However, I’m not going to participate; the user experience for me is not pleasant and there’s very little upside (being asked to provide a saliva sample later on? Whoopee!). And although it’s funny out of context to look at question 36 (getting and keeping an erection (or hard-on) that is rigid enough…), it’s not so interesting (a turn-off, even) to fill out a survey about that sort of thing.

I really want to say that the whole survey seems long, and hard. But that would bring the level of discourse way way way down on this blog and I pride myself on setting high standards here.

Son of survey

This comment in the bad survey design thread got me thinking further about where/when/what to do with surveys. It’s not my primary tool, so some of these reflections take me a little longer than they would someone who makes their living as a quantitative researcher, for example.

A tiny new restaurant opened in our tiny town of Montara – the Montara Bistro. I dropped by yesterday to pick up a menu and saw that they are already looking for customer feedback.


So to some folks, this is a survey. But it’s next to useless.

Why? Their questions are not too bad, but they are conversational questions, and should be presented that way. They are the basis of a conversation. Handing someone a sheet of paper (with no room to fill in any response) and asking them for essays is ineffective. It’s not fair. These are the questions they want answers to, but sometimes you have to ask a series of questions to get that information. And you can’t decide ahead of time which questions to ask. You have to ask a question, listen to the response, and then choose your next one. You can’t do that on a piece of paper. You need to have real people talking to each other and exploring the issues that way.

Not to mention that the restaurant has been open for a day or two, and there’s a presumption of an in-depth relationship that hasn’t really been built yet. What do I think of the Bistro Vision? Ummm, I don’t care.

I love what this artifact tells you about the company: that they really want to get a dialog going. They don’t have the tools in place to do it yet. Maybe it’s backed up by the way they interact with customers who come in; I don’t know. But this won’t work at all.

And I think this sort of inquiry is what a lot of design students are doing; identifying some open-ended (i.e., requires the respondent to write sentences) questions and sending them out by email. Some people will respond. Some may even write a lot. But you can’t follow up unless you send out another email. And then it’s just a conversation.

As with everything you “send out,” who it gets sent to is a factor. Sending something to 3 friends is a very different approach from something that is quantitative in nature.

Look at this artifact from a recent project (created by our partners, not us):

This contained 31 questions, only a few open-ended ones. There’s randomization where needed (so you can filter out order-effects, where the first or last item might be picked more frequently in a list), and a large enough sample so that results can be processed to lead to conclusions – comparisons between different factors (this is the stats part I’ve been talking about).

Attitude toward technology meets Age
Purchase habits meets Region (with Age)
Stores shopped meets Region (with Age)
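The randomization step mentioned above can be sketched in a few lines. This is a minimal illustration with a hypothetical list of stores (the real survey’s items aren’t shown here): each respondent sees the choices in a fresh order, so no option systematically benefits from appearing first or last, and position effects can be averaged out and checked in analysis.

```python
import random

def present_options(options, rng=None):
    """Return the answer choices in a fresh random order, so that no single
    option systematically benefits from appearing first or last."""
    rng = rng or random.Random()
    order = list(options)  # copy; leave the canonical order untouched
    rng.shuffle(order)
    return order

# Hypothetical item list for illustration
stores = ["Store A", "Store B", "Store C", "Store D"]
print(present_options(stores))
```

Note that you would only randomize unordered lists like brands or stores; an ordered scale (agree…disagree) keeps its sequence.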

Tons of work and tons of math go into creating tables (that then get interpreted) like this one:

As Paul Hogan (sorta) said, “That’s not a survey, now that’s a survey!”

I hope this brings a bit more clarity to the discussion.

Class Acts

Recently I wrote about a Bad Survey, exhorting design instructors to pay more attention to why and how surveys are being used by their students. I was reacting to a pretty bad example I had encountered and I ripped on it and offered some suggestions for improvement.

The students involved in creating the survey contacted me, thanked me for the input and asked if we could set up some time to talk. We spoke yesterday, and I was so impressed. Standard reaction when someone criticizes someone else (online, I guess) is to lash out. Ideally in as inarticulate a fashion as possible. But these four folks were awesome – they got on the phone with me with specific questions, not about just the survey, but about research methods in general (something they are obviously not getting from their program!), about careers, and identities when our skill sets span traditional disciplines (hello, it’s The Overlap). They asked me about my background, and shared theirs with me. They respected my time and they all sent thank you notes afterwards.

There was no embarrassment on their part or discomfort on my part for having criticized their work. And frankly, they set that tone in how they approached it. I didn’t feel weird about what I had written now that I had names and voices to go with the work, because they sincerely expressed appreciation for the help in making improvements.

I’ve met with a lot of students, career changers, young designers, future researchers and so on. Most handle it very well, but there was something exceptional here, given how it started, and that they turned the whole thing into a win-win through humility and curiosity. They invited me to visit their school next time I’m in their area.

We couldn’t cover all that and get really deep into research, the philosophy, the tools, the approaches, and so on (and of course, that’s more than a phone call) – but they are working hard to understand how the different tools (say, an open-ended interview, and a closed-ended survey) can address different design questions at different phases of a project. One gently amusing moment was a discussion of leading questions – one student had assumed this meant a question that led naturally to the next one (and of course, that’s a good thing) rather than, as I explained, a question that presumes a certain point of view and ultimately makes it harder for the person to give a true answer (e.g., Aren’t you disappointed that Jon Stewart didn’t host the Academy Awards?).

Learning is not just the information and experience you have or don’t have (and it’s clear that there’s some crucial info these folks don’t yet have)…it’s an approach to the people around you, indeed the world around you, and I’m so excited by the approach that this group of students is taking.

Even “good” surveys suck

They suck for us, the people who respond to them. Who wants to participate in something like this? (recorded after my conversation with a United rep to resolve the ticket cancellation I blogged the other day)

It’s not unusual at the conclusion of a conversational bit of research (something interactive, interpersonal, and listening) for the person being interviewed to thank the person running the interview…at first you might think “you’re thanking ME? But I got great info from you…” but the fact is that an interview done well is a pleasure to participate in.

These surveys aren’t really interested in you; they’ve already forced you to think and respond in a completely unnatural and awkward fashion.

Sure, surveys do have their place and I’m being pejorative here only to highlight the bad user experience being created by this sort of research.

Bad Survey Design. Please Stop!

A plea to all design educators out there (and to students as well): please stop using crappy surveys as a substitute for actual research.

Survey design is a craft. If you haven’t studied it, you don’t know how to write a survey well, and the data you get is garbage. Surveys are quantitative tools. They require math to plan (what does your sample size need to be to ensure that your results are valid?) and to analyze (regression analysis (or any other buzzword) anyone?). They are very tough to write. Questions have to be worded correctly and sequenced correctly.
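To make the “math to plan” point concrete, here is the standard sample-size calculation for estimating a proportion. This is a minimal sketch under textbook assumptions (simple random sampling, worst-case variance), ignoring design effects and nonresponse:

```python
import math

def required_sample_size(margin, z=1.96, p=0.5):
    """Smallest n such that a proportion estimate has at most `margin`
    of error at 95% confidence (z = 1.96), using the worst case p = 0.5."""
    return math.ceil((z ** 2) * p * (1 - p) / margin ** 2)

# For a +/- 5% margin of error at 95% confidence:
print(required_sample_size(0.05))  # prints 385
```

Halving the margin of error quadruples the required sample, which is exactly the kind of planning arithmetic a thirty-second emailed questionnaire skips.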

Yet design instructors constantly send their students onto the Internet to “do research.” Students spend about 30 seconds writing open-ended questions about their issues, and then blast the “survey” off to email lists populated by other designers. And so in the spirit of helping a good cause, people might respond. But the questions are vague, hard to answer, and not at all controlled.

Garbage in, garbage out.

Today I received a forward from a colleague who has his finger on the pulse of global design issues, passing on a survey request from a graduate supervisor at a prestigious East Coast US design school. Doubly endorsed, then, with an intro by the students:

We are one of the thesis research teams from the “Design Management” masters program at REDACTED. We are comprised of four dynamic individuals who bring unique set of skills and expertise that substantiates our team. We are highly motivated and eager to seek out credible information.

The research is focused on “Bottled Water” and its affects [sic] on our planet. In the times when the world is focusing on oil as a momentous energy resource that is on the verge of gaining the status of a deficient commodity, this thesis team is exploring indications that cognize [sic] drinking water as a much more serious and fateful resource. With a pragmatic attitude the team’s primary focus is on the bottled water industry and its impact on life, environment and economies. By 2015 over 60% of the world population will be living in urban areas and the use of bottled water is increasing by 12% per annum.

This survey is conceived and designed by the team to get firsthand information in order to understand the trends, perceptions and know-how of people worldwide. It is critically helpful for the team in securing a better perspective of the thought process, gaps, and awareness levels. The survey will be used as part of the thesis research and one of the pillars to base strategic and sustainable recommendation by using Design Management tools.

The team looks forward to your support and cooperation in reaching its goals. This survey will also create way for the future researchers who would be able to use these finding to elaborate and continue the process of strategic enlightenment and making the planet a better place for the generation to come.

Well. Is that preeningly snooty enough for you (ignoring the typos, of course)? Certainly some high expectations have been established here. So, let’s look at two pages from the online survey that we simply must contribute to.


Oh yuck. This is terrible. After a lot of demographic and behavioral questions (not shown) about how old you are, how much money you make, how much you spend on bottled water every day, and so on, we get to the opinion and perception questions. Except these are ridiculous leading questions that reveal the opinions of the survey writers and place the respondent in an awkward situation.

Do you believe that water can be more expensive than oil? gives away the game. Selfishly earnest, but also ineffective.

Maybe try something like this (very rough):

For each of the following, compare your expectation of its price to water:

Much more costly / More costly / Same / Cheaper / Much cheaper
orange juice

The question mustn’t reveal the intention. And it must not (as the last 3 questions do) put the person on the defensive, implying that they should be doing something in a certain way. It’s not just unethical; it’s also ineffective.

Again, I see design students doing this all the time. It’s really bad research, and it’s sadly being endorsed or encouraged by faculty and others who don’t know or don’t care. “Oh, it’s still useful information” I can hear them saying. But it’s not. The data you get from this is useless, or worse than useless since it’s actually misleading.

This example is more egregious because of the smarmy greener-than-thou effluent it exudes.

Ideally, the kind of perception issues these students are after would be collected in some conversations, where follow-ups and probes and listening all come together to generate some new insight. This is not a good use of a survey, especially in such a ham-fisted manner.

