Should we trust the wisdom of the crowds?
As market researchers we find it counter-intuitive to believe that a large group of random individuals may collectively give us more accurate responses than a handful of experts. But that’s exactly what the wisdom of crowds theory tells us to do.
The theory has been around for over a century, but in market research it has only really come to the forefront since James Surowiecki’s 2004 book, The Wisdom of Crowds. CMR already has clients using crowd-based methodologies, but it takes a real leap of faith. Is the collective intelligence (or otherwise!) of a random selection of the public really a substitute for the years of knowledge and first-hand experience of a select group of KOLs? Should we really believe that, just because a sample is large, it can provide more accurate insights than one made up exclusively of patients with a particular condition?
The theory is that by asking a diverse group you lose all of the subjectivity associated with the views of an “expert”. It suggests that the collective thinking of a large group is often smarter than the views of the smartest people within it.
And it is not just based around people having similar opinions; the wider the range of opinions, the better. Diversity avoids what Surowiecki terms “groupthink” and allows people to stick their necks out. Conversely, it is argued that research conducted with smaller groups requires careful management to steer it away from herd-like behaviour, and this management in itself introduces bias into the results. The theory suggests that whereas large groups come to the right conclusion because of the diversity of responses, small groups lack wisdom because everyone conforms to the response or behaviour that is expected.
Using crowds also has the advantage of being potentially far easier and cheaper than recruiting a sample of hard-to-reach KOLs or patients.
At a simple level, we’ve relied on decisions made by juries rather than judges in courts of law for hundreds of years, but it’s probably fair to say they don’t always come to the right conclusion.
It’s also fair to say that the methodology only works well when there is a “correct” answer. This means it is better at giving us direction around mathematical conundrums or highly objective arguments.
Surowiecki gives compelling evidence to support the theory, from the accuracy of the average guess of 800 country-fair attendees estimating the weight of an ox in 1906, to Ask the Audience producing the correct answer on Who Wants to be a Millionaire on 91% of occasions, compared with 66% for Phone a Friend.
CMR’s view is that it’s a theory with “legs”. We’re all for its use, particularly in instances where certain expert groups are becoming over-researched. But what do you think of this approach? Is it something you would use and where could you see it having greatest benefit?
Article written by George Ashford – CEO