The Crowd Classification Problem: Social Dynamics of Binary Choice Accuracy


Abstract

Decades of research suggest that information exchange in groups and organizations can reliably improve judgment accuracy in tasks such as financial forecasting, market research, and medical decision-making. However, we show that improving the accuracy of numeric estimates does not necessarily improve the accuracy of decisions. For binary choice judgments, also known as classification tasks (e.g., yes/no or build/buy decisions), social influence tends to increase the majority vote share regardless of the accuracy of that opinion. As a result, initially inaccurate groups become increasingly inaccurate after information exchange, even as they signal stronger support. We term this dynamic the crowd classification problem. Using both a novel dataset and a reanalysis of three previous datasets, we study this process under two types of information exchange: (1) when people share votes only, and (2) when people form and exchange numeric estimates prior to voting. Surprisingly, when people exchange numeric estimates prior to voting, the binary choice vote can become less accurate even as the average numeric estimate becomes more accurate. Our findings recommend against voting as a form of decision-making when groups are optimizing for accuracy. For cases where voting is required, we discuss strategies for managing communication to avoid the crowd classification problem. We close with a discussion of how our results contribute to a broader contingency theory of collective intelligence.
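The core dynamic the abstract describes can be illustrated with a minimal toy simulation. This sketch is not the paper's model or analysis; the agent count, initial accuracy, and conformity parameter are assumptions chosen purely for illustration. It shows how, when the initial majority is wrong and members conform toward the majority position, the majority vote share grows while group accuracy falls.

import random

def simulate(n_agents=100, p_correct_initial=0.40, conformity=0.3,
             n_rounds=5, seed=1):
    # Toy majority-amplification dynamic (illustrative only).
    # Each agent holds a binary vote; True means the correct answer.
    # Each round, an agent switches to the current majority position
    # with probability `conformity`.
    rng = random.Random(seed)
    votes = [rng.random() < p_correct_initial for _ in range(n_agents)]
    correct_share_by_round = []
    for _ in range(n_rounds):
        correct_share = sum(votes) / n_agents
        correct_share_by_round.append(correct_share)
        majority_is_correct = correct_share >= 0.5
        votes = [majority_is_correct if rng.random() < conformity else v
                 for v in votes]
    correct_share_by_round.append(sum(votes) / n_agents)
    return correct_share_by_round

if __name__ == "__main__":
    # An initially inaccurate group (40% correct) drifts further from the
    # truth each round as agents conform to the (wrong) majority, even
    # though the apparent consensus grows stronger.
    print([round(s, 2) for s in simulate()])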
