
Wisdom of the Confident: Using Social Interactions to Eliminate the Bias in Wisdom of the Crowds

Added by Walter S. Lasecki
Publication date: 2014
Language: English





Human groups can produce extraordinarily accurate estimations compared to individuals by simply taking the mean, median or geometric mean of the individual estimations [Galton 1907, Surowiecki 2005, Page 2008]. However, this is true only for some tasks, and in general these collective estimations show strong biases. The method also fails when social interactions are allowed, which makes the collective estimation worse because individuals tend to converge to the biased result [Lorenz et al. 2011]. Here we show that there is a bright side to this apparently negative impact of social interactions on collective intelligence. We found that some individuals resist the social influence and that, by using the median of this subgroup, we can eliminate the bias of the wisdom of the full crowd. To find this subgroup of individuals, who are more confident in their private estimations than in the social information, we model individuals as estimators that combine private and social information with different relative weights [Perez-Escudero & de Polavieja 2011, Arganda et al. 2012]. We then compute the geometric mean for increasingly smaller groups, eliminating those individuals who give higher weight to the social influence. The trend obtained by this procedure yields unbiased results, in contrast to the simpler method of computing the median of the complete group. Our results show that, while a simple operation like the mean, median or geometric mean of a group may not allow groups to make good estimations, a more complex operation that takes individuality in the social dynamics into account can lead to better collective intelligence.
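The procedure described at the end of the abstract can be made concrete. The Python sketch below is only an illustration under a simple linear-combination assumption: `private` is an individual's estimate before seeing others, `social` the social information shown to them, and `revised` their estimate afterwards. All three are hypothetical inputs, and the paper fits a more elaborate estimator model than the one assumed here.

```python
import numpy as np

def social_weight(private, social, revised):
    """Weight w each individual puts on the social information, assuming
    the linear combination revised = (1 - w) * private + w * social
    (a simplification of the estimator model cited in the abstract)."""
    private, social, revised = (np.asarray(a, dtype=float)
                                for a in (private, social, revised))
    denom = social - private
    return np.divide(revised - private, denom,
                     out=np.zeros_like(denom), where=np.abs(denom) > 1e-12)

def confident_trend(private, social, revised, fractions=(1.0, 0.8, 0.6, 0.4, 0.2)):
    """Geometric means of progressively smaller subgroups, obtained by
    discarding the individuals with the largest social-influence weights.
    Assumes all estimations are positive, as the geometric mean requires."""
    w = social_weight(private, social, revised)
    order = np.argsort(w)                        # most confident individuals first
    revised = np.asarray(revised, dtype=float)
    out = []
    for f in fractions:
        k = max(1, int(round(f * len(order))))
        subgroup = revised[order[:k]]
        out.append((f, float(np.exp(np.log(subgroup).mean()))))  # geometric mean
    return out
```

Extrapolating the trend of these subgroup estimates toward the most confident individuals is what removes the bias that the median of the complete group retains.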



Related research

Wisdom of crowds refers to the phenomenon that the average opinion of a group of individuals on a given question can be very close to the true answer. It requires a large diversity of opinions within the group, but the collective error, the difference between the average opinion and the true value, has to be small. We consider a stochastic opinion dynamics in which individuals can change their opinion based on the opinions of others (social influence $\alpha$), but to some degree also stick to their initial opinion (individual conviction $\beta$). We then derive analytic expressions for the dynamics of the collective error and the group diversity. We analyze their long-term behavior to determine the impact of the two parameters $(\alpha,\beta)$ and the initial opinion distribution on the wisdom of crowds. This allows us to quantify the ambiguous role of social influence: it improves the wisdom of crowds only if the initial collective error is large; in most cases it deteriorates the outcome. In these cases, individual conviction still improves the wisdom of crowds because it mitigates the impact of social influence.
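The abstract names its central quantities without spelling them out. A plausible formalization, written as a hedged sketch rather than the paper's exact definitions, would be:

```latex
% One plausible formalization (an assumption, not the paper's exact model):
% each agent i is pulled toward the group mean with strength \alpha and
% back toward its initial opinion with strength \beta, plus noise \xi_i.
\begin{align}
  \dot{x}_i(t) &= \alpha \bigl( \langle x(t) \rangle - x_i(t) \bigr)
                + \beta \bigl( x_i(0) - x_i(t) \bigr) + \xi_i(t), \\
  E(t) &= \bigl( \langle x(t) \rangle - T \bigr)^2
         && \text{(collective error, $T$ = true value)}, \\
  D(t) &= \frac{1}{N} \sum_{i=1}^{N} \bigl( x_i(t) - \langle x(t) \rangle \bigr)^2
         && \text{(group diversity)}.
\end{align}
```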
A folksonomy is ostensibly an information structure built up by the wisdom of the crowd, but is the crowd really doing the work? Tagging is in fact a sharply skewed process in which a small minority of supertagger users generate an overwhelming majority of the annotations. Using data from three large-scale social tagging platforms, we explore (a) how to best quantify the imbalance in tagging behavior and formally define a supertagger, (b) how supertaggers differ from other users in their tagging patterns, and (c) whether effects of motivation and expertise inform our understanding of what makes a supertagger. Our results indicate that such prolific users not only tag more than their counterparts, but do so in quantifiably different ways. Specifically, we find that supertaggers are more likely to label content in the long tail of less popular items, that they show differences in the patterns of content tagged and the terms utilized, and that they are measurably different with respect to tagging expertise and motivation. These findings suggest we should question the extent to which folksonomies achieve crowdsourced classification via the wisdom of the crowd, especially for broad folksonomies like Last.fm as opposed to narrow folksonomies like Flickr.
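Question (a), quantifying the imbalance, can be illustrated with a standard inequality measure. The sketch below computes the Gini coefficient over per-user tag counts and applies a top-1% volume cutoff; both the synthetic Zipf-distributed data and the 1% threshold are assumptions for illustration, not the paper's definition of a supertagger.

```python
import numpy as np

def gini(counts):
    """Gini coefficient of per-user annotation counts; values near 1 mean
    a small minority of users produces most of the tags."""
    c = np.sort(np.asarray(counts, dtype=float))
    n = c.size
    cum = np.cumsum(c)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

rng = np.random.default_rng(1)
counts = rng.zipf(2.0, size=10_000)      # hypothetical heavy-tailed tag counts

print(f"Gini = {gini(counts):.2f}")

# One possible operational cutoff (an assumption, not the paper's definition):
# a 'supertagger' is a user in the top 1% by tag volume.
threshold = np.quantile(counts, 0.99)
top = counts >= threshold
print(f"top 1% of users produce {counts[top].sum() / counts.sum():.0%} of all tags")
```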
We propose an agent-based model of collective opinion formation to study the wisdom of crowds under social influence. The opinion of an agent is a continuous positive value, denoting its subjective answer to a factual question. The wisdom of crowds states that the average of all opinions is close to the truth, i.e. the correct answer. But if agents have the chance to adjust their opinion in response to the opinions of others, this effect can be destroyed. Our model investigates this scenario by evaluating two competing effects: (i) agents tend to keep their own opinion (individual conviction $\beta$), (ii) they tend to adjust their opinion if they have information about the opinions of others (social influence $\alpha$). For the latter, two different regimes (full information vs. aggregated information) are compared. Our simulations show that social influence only in rare cases enhances the wisdom of crowds. Most often, we find that agents converge to a collective opinion that is even farther away from the true answer. So, under social influence the wisdom of crowds can be systematically wrong.
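A toy simulation makes the headline result easy to reproduce. The update rule below is an assumption in the spirit of the model, not taken from the paper: opinions are positive, the group median starts near the truth, and social influence drags every agent toward the biased arithmetic mean.

```python
import numpy as np

rng = np.random.default_rng(2)

def median_error(alpha, beta, truth=10.0, n=500, steps=300):
    """Toy run of an opinion-formation model: positive opinions, synchronous
    updates pulled toward the group mean (social influence alpha) and anchored
    to each agent's initial opinion (individual conviction beta)."""
    x0 = rng.lognormal(mean=np.log(truth), sigma=0.8, size=n)  # median near truth
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * (x.mean() - x) + beta * (x0 - x)
    return abs(np.median(x) - truth)

print("alpha=0.0, beta=0.0:", median_error(0.0, 0.0))   # median stays near the truth
print("alpha=0.1, beta=0.0:", median_error(0.1, 0.0))   # median drifts to the biased mean
print("alpha=0.1, beta=0.05:", median_error(0.1, 0.05)) # conviction mitigates the drift
```

Since every agent here sees only the group mean, this sketch corresponds roughly to the aggregated-information regime; the full-information regime would expose individual opinions instead.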
The average portfolio structure of institutional investors is shown to have properties that account for transaction costs in an optimal way. This implies that financial institutions unknowingly display collective rationality, or Wisdom of the Crowd. Individual deviations from the rational benchmark are ample, which illustrates that system-wide rationality does not require nearly rational individuals. Finally, we discuss the importance of accounting for constraints when assessing the presence of Wisdom of the Crowd.
In this study we focus on the prediction of basketball games in the Euroleague competition using machine learning modelling. The prediction is a binary classification problem: whether a match finishes 1 (home win) or 2 (away win). Data is collected from the Euroleague's official website for the seasons 2016-2017, 2017-2018 and 2018-2019, i.e. in the new-format era. Features are extracted from match data and off-the-shelf supervised machine learning techniques are applied. We calibrate and validate our models. We find that simple machine learning models give accuracy no greater than 67% on the test set, worse than some sophisticated benchmark models. Additionally, the importance of this study lies in the wisdom of the basketball crowd, and we demonstrate how the predictive power of a collective group of basketball enthusiasts can outperform the machine learning models discussed in this study. We argue why the accuracy level of this group of experts should be set as the benchmark for future studies in the prediction of (European) basketball games using machine learning.
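As a sketch of the modelling pipeline described here, the snippet below trains one off-the-shelf classifier on placeholder data. The feature matrix, sample size and random labels are all assumptions standing in for the Euroleague features the study extracts; only the overall shape of the pipeline (features, train/test split, fit, accuracy) follows the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Placeholder match features (assumption): e.g. rolling win rates, point
# differentials and home/away form for both teams, one row per match.
X = rng.normal(size=(600, 6))        # roughly three seasons of matches
y = rng.integers(1, 3, size=600)     # 1 = home win, 2 = away win, as in the abstract

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

A real run would replace the random arrays with per-match features and could add probability calibration (e.g. sklearn's CalibratedClassifierCV) before comparing against the 67% level reported above.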
