In his excellent book, “The Wisdom of Crowds: Why the Many Are Smarter Than the Few”, James Surowiecki demonstrated with fascinating clarity how groups are more likely to make correct judgements than individuals on their own, including individual experts. Not all crowds, of course, are wise. Following the crowd, or “herding”, can lead to some spectacularly poor decisions – not least following the majority into a stock-market bubble or, in the light of the housing crash, into buy-to-let portfolios. In this analysis the intelligent crowd is not a group of individuals or experts interacting, but rather the aggregated results of individuals as a whole: the average view of the many rather than the few.
What makes a wise crowd?
Surowiecki highlights four elements that differentiate the wise crowd from the irrational crowd. First, there is a diversity of opinion. Second, people’s views are independent, not determined by those around them. Third, people are able to draw on local knowledge. Lastly, individual opinions can be aggregated into a collective judgement. When these conditions hold and we take the average opinion across a group of people, the result is likely to be an accurate, or wise, view.
Random Samples versus Group Discussions
The market research equivalent of the wise crowd is the large random sample, as opposed to the small group of individuals. A quantitative survey run among a nationally representative audience should, in theory, be more reliable than a small, arbitrarily picked group of people. Large quantitative surveys are still subject to sampling error, but that error can be quantified and accounted for.
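To illustrate how sampling error can be quantified, the standard margin of error for a proportion in a simple random sample can be sketched as follows (the formula and figures are textbook statistics, not taken from this article):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of the confidence interval for a proportion p
    observed in a simple random sample of size n.
    z = 1.96 corresponds to roughly 95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-respondent survey where 50% of respondents pick an answer:
moe = margin_of_error(0.5, 1000)
print(round(moe * 100, 1))  # ~3.1 percentage points
```

This is why a well-designed large sample is trusted: the uncertainty shrinks predictably as the sample grows, whereas a focus group of eight people offers no comparable guarantee.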
When a researcher is seeking to uncover ideas or explore attitudes on a subject or product for the first time, qualitative feedback can be highly useful. A common technique is the group discussion (commonly referred to as a focus group), in which a group of people is selected to represent a wider audience (such as the demographics of the general population, or the users of a particular product) and then probed for their opinions. Most moderators of focus groups will, at some stage, have experienced at first hand the distorting influence of a domineering participant. Such individuals can take over the conversation, lead and influence others in the group, and ultimately refract what could have been a representative mix of opinions into an unrepresentative, erroneous conclusion. Moderators are, of course, aware of this potential problem and can take steps to realign the discussion. But it is not uncommon, in a series of focus groups, for one group to arrive at opinions widely divergent from the views of the others. The judgement of researchers is crucial in identifying common and key themes, discarding minority views that are misleading or wrong, while still recognising subtle minority opinions of real value. Focus groups can deliver real insight and are an excellent pathway to discovering new markets, customers, and ideas. But group views should be interpreted with caution, and ideally any apparent minority or majority view should be tested rigorously with a wider, representative audience.
General Knowledge and the Great British Public
Earlier in the year Redshift Research was fortunate to work on a TV quiz show.
The format of the show was not dissimilar to another classic British quiz show, Family Fortunes. Participants were asked a series of general knowledge questions, but instead of identifying the most popular answer (the basic premise of Family Fortunes), the contestant needed to identify the least popular answer. Maximum points were awarded for a “pointless” answer, i.e. one selected by no one else. The contestant was effectively competing against the great British public.

In order to identify the least commonly given answers, Redshift conducted extensive polls using its own online access panel, “Crowdology”. Crowdology is a panel of thousands of people, representative of the UK population by gender, region, and age, whose opinions can be called upon on any subject, product, event or issue. The format of a quiz show which pits the individual against the audience, or the wider population, is both popular and, in terms of the objective of gauging popular opinion, a technically appropriate methodology.

The sceptic might be distrustful of the ability of “Joe Public” to get general knowledge questions right. One of the objectives of the quiz show (and the polling we did) was to identify pointless questions, i.e. questions for which an answer was not correctly identified by anyone. In fact this was not as easy as one might expect. Across a range of general knowledge questions the wider public was able to identify the correct answers, despite a strict time limit and without the advantage of a multiple-choice or coded list. Remember, this was not a sample of experts, nor one skewed towards the most highly educated or those interested in history, art or science, but a general, albeit nationally representative, sample of the great British public, warts and all. And for the most part they got it right!
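As a rough illustration of the aggregation step described above (the question, data, and function names here are hypothetical examples, not Redshift’s actual process), tallying poll responses and flagging correct answers that no respondent gave might look like this:

```python
from collections import Counter

def find_pointless_answers(responses, valid_answers):
    """Count how often each valid answer appears among free-text poll
    responses (case-insensitive), and return the per-answer counts plus
    the 'pointless' answers that no respondent gave."""
    counts = Counter(r.strip().lower() for r in responses)
    scores = {a: counts.get(a.lower(), 0) for a in valid_answers}
    pointless = [a for a, c in scores.items() if c == 0]
    return scores, pointless

# Hypothetical poll question: "Name a member of The Beatles"
responses = ["Paul McCartney", "John Lennon", "paul mccartney", "John Lennon"]
valid = ["John Lennon", "Paul McCartney", "George Harrison", "Ringo Starr"]

scores, pointless = find_pointless_answers(responses, valid)
# Here "George Harrison" and "Ringo Starr" would score zero, making
# them the "pointless" answers a contestant should aim for.
```

In a real poll the matching of free-text answers to correct answers would of course need far more careful cleaning and coding than this sketch suggests.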
Neil Cary is a Director of the market research agency Redshift Research and runs the online access panel Crowdology.