[Off-Topic] Most people said they'd use my product. Will it work? No!

March 11, 2013

First published on 2013-03-13T14:06:39+00:00

“We’re putting together an excellent new product that will solve your problems — would you use it?” asks the validation survey of a new startup. Result: 70% of the 1,000 people who answered said yes. So the conclusion is obvious: the product is not just validated, it can’t fail, right?

Wrong! Those numbers mean nothing. On their own they have no validity, and no conclusion of any kind can be drawn from them. “But it’s 1,000 people! How can you say that?” This article doesn’t aim to be statistically rigorous, only to point out a few (of the many) ways people go wrong these days when running surveys.

First of all, who is your product’s target audience? Let’s say it’s the population of the interior of the southern states, classes C and D, 50 to 60 years old, with little experience using computers. Now let’s say the question above was part of a survey run online and shared on Facebook and Twitter. Who answered it? Almost certainly people from classes A through C, mostly in southeastern capitals, ages 16 to 25, who use computers daily. In other words, you asked the wrong question of the wrong people.
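A quick simulation makes the damage concrete. The numbers below are entirely made up for illustration (no real survey behind them): suppose the true approval rate is 20% among the actual target audience but 80% among the young, online crowd that tends to answer web questionnaires. A sketch of what happens when the sample is drawn mostly from the wrong group:

```python
import random

random.seed(42)

# Hypothetical, illustrative rates -- not from any real survey.
TARGET_RATE = 0.20  # true "yes" rate among the real target audience
ONLINE_RATE = 0.80  # true "yes" rate among the online crowd answering the form

def run_survey(n, share_from_target):
    """Simulate n answers where only a fraction comes from the real target audience."""
    yes = 0
    for _ in range(n):
        if random.random() < share_from_target:
            yes += random.random() < TARGET_RATE
        else:
            yes += random.random() < ONLINE_RATE
    return yes / n

# A Facebook/Twitter questionnaire that barely reaches the target audience:
biased = run_survey(1000, share_from_target=0.05)
# A sample drawn entirely from the actual target audience:
honest = run_survey(1000, share_from_target=1.0)

print(f"biased sample says yes:  {biased:.0%}")
print(f"target audience says yes: {honest:.0%}")
```

With these assumed rates, the biased sample reports roughly three-quarters saying “yes” while the real target audience sits near one-fifth — same question, same sample size, opposite business decision.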

For the sake of the example, let’s say you took care to choose the correct population and avoided what’s known as “biased sampling.” “We’re putting together an excellent new product that will solve your problems — would you use it?” That’s still not a good way to ask, and it should be obvious why: it’s what we call a “loaded question,” one that encourages a certain kind of answer. It starts from the unproven premise that the new product is “excellent” and “will solve” the problems. A less loaded form would be: “We’re putting together a product that will have features X — assuming you have problems Y, do you think this would help solve them?”

There are several techniques and points to consider to avoid baseless premises and unnecessary rhetoric, such as explaining the value proposition correctly. It’s very easy to write a fallacious question that leads to a false answer. Worse still if you take those 1,000 respondents, separate out the 700 who said “yes” to a fallacious question asked of the wrong audience, and start drawing conclusions from them: “Oh, we noticed that of these 700, half have credit cards, usually shop online at least once a month, and said they’d share the service with at least 2 friends on social networks.” And worst of all if those who said “no” are precisely the people closest to the target audience you hope to reach.

I also consulted Ricardo Couto* who explained the issue. “Knowing the respondent, their habits and motivations is much more valuable for the business than having a high percentage of favorable responses about the product or service in an online questionnaire. By the way, calling a form put together in 5 minutes on Google Docs or SurveyMonkey a ‘survey’ is, at best, naïve. What you have there is a questionnaire. Questionnaires can be useful if used well. But in general, they’re just sets of questions around a subject. A survey can even make use of that tool, but not every questionnaire is a survey.”

Statistics, probability, and surveys in general should be conducted by professionals who understand this field of knowledge. It isn’t trivial and shouldn’t be treated lightly. The example here is just one of the simplest ways to fool yourself with numbers. Mark Twain popularized the phrase “There are three kinds of lies: lies, damned lies, and statistics.” And the subject has been discussed and dissected so widely that there’s a famous 1954 book by Darrell Huff, “How to Lie with Statistics,” illustrating the intentional and unintentional errors associated with interpreting statistics.

Ricardo also added: “If we think of an online questionnaire as a serious tool for research, first you have to know how research is done. I have a lot of doubts whether the people who say they ‘are doing’ a survey and send you a link to a questionnaire have taken into account the order effect, the observer effect, or avoided biased questions — factors well known to anyone who does real research. And I’m not even commenting on statistical validity, which is only achieved by applying statistical methods (a simple percentage isn’t descriptive statistics, okay? you have to know what a p-value is), or even on the selection of respondents appropriate to the target audience you’re trying to reach. Or do I want as users, customers, or consumers only professionals my age, my income range, and my online shopping and social media usage behavior?”
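To give one small taste of what “applying statistical methods” means beyond a raw percentage, here is a minimal sketch (my own illustration, not something from the article or from Ricardo) of a normal-approximation 95% confidence interval for a proportion. It quantifies sampling error only — it does nothing to rescue a biased sample or a loaded question:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a sample proportion.
    z=1.96 corresponds to 95% coverage under the normal approximation."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return p - margin, p + margin

# The article's "70% of 1,000 said yes":
low, high = proportion_ci(700, 1000)
print(f"95% CI for the proportion: [{low:.1%}, {high:.1%}]")
# Roughly [67%, 73%] -- narrow, yet meaningless if the sample was biased.
```

The interval is tight because 1,000 is a decent sample size; the point is that precision and validity are different things, and no formula fixes asking the wrong people the wrong question.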

So, the next time you think about creating an account on one of the dozens of online survey services, check whether it at least meets the requirements of a well-done survey and guides you through the correct procedures: designing the questions, selecting the audience, and collecting, processing, and analyzing the data rigorously. Otherwise you’ll have only opaque numbers in your hands — numbers that carry no meaning and from which absolutely no conclusions can be drawn.

*Ricardo Couto is a Consultant in User Experience and Research, Mentor at Aceleratech, President of the Interaction Designers Association SP (IxDA SP), and Member of the Brazilian Information Design Society. He studied Cognitive Psychology (master's) at the Federal University of Pernambuco and specializations in Information Design, Higher Education Teaching, and Distance Education Technology. He was a university professor at undergraduate and graduate level for 7 years.