
How social feedback processing in the brain shapes collective opinion processes in the era of social media

Added by Sven Banisch
Publication date: 2020
Language: English





What are the mechanisms by which groups holding certain opinions gain public voice and force others with a different view into silence? And how does social media play into this? Drawing on recent neuroscientific insights into the processing of social feedback, we develop a theoretical model that allows us to address these questions. The model captures phenomena described by the spiral of silence theory of public opinion, provides a mechanism-based foundation for it, and in this way yields more general insight into how different group structures relate to different regimes of collective opinion expression. Even strong majorities can be forced into silence if a minority acts as a cohesive whole. The proposed framework of social feedback theory (SFT) highlights the need for sociological theorizing to understand the societal-level implications of findings in social and cognitive neuroscience.
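The silencing mechanism described in the abstract can be illustrated with a minimal toy simulation (our own sketch, not the paper's actual model; network sizes, the update rule, and the tie-breaking choice are assumptions): agents voice their opinion only when the currently expressed views around them are not predominantly opposed, and a small clique that listens only to itself can mute a larger, loosely connected majority.

```python
# Toy spiral-of-silence dynamics (illustrative sketch, not the paper's model):
# agents hold opinion +1 or -1 and express it only when expressing neighbors
# who agree are at least as numerous as those who disagree.

def update_expression(opinions, neighbors, expressing):
    """One synchronous update of who speaks up, given the current state."""
    new = {}
    for i, op in opinions.items():
        agree = sum(1 for j in neighbors[i] if expressing[j] and opinions[j] == op)
        disagree = sum(1 for j in neighbors[i] if expressing[j] and opinions[j] != op)
        new[i] = agree >= disagree
    return new

# Toy network: minority {0,1,2} (opinion -1) forms a clique; each majority
# agent {3..8} (opinion +1) hears all three minority voices but only one peer.
opinions = {i: -1 for i in range(3)} | {i: +1 for i in range(3, 9)}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for i in range(3, 9):
    peer = 3 + (i - 3 + 1) % 6
    neighbors[i] = [0, 1, 2, peer]

expressing = {i: True for i in opinions}       # everyone starts out vocal
for _ in range(10):                            # iterate to a fixed point
    expressing = update_expression(opinions, neighbors, expressing)

print([i for i in opinions if expressing[i]])  # → [0, 1, 2]: only the cohesive minority still speaks
```

Despite being outnumbered two to one, the minority keeps expressing itself because each of its members only hears agreement, while every majority agent is locally outvoted and falls silent.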





We explore a new mechanism to explain polarization phenomena in opinion dynamics, in which agents evaluate alternative views on the basis of the social feedback obtained on expressing them. High support for the favored opinion in the social environment is treated as positive feedback that reinforces the value associated with this opinion. In connected networks of sufficiently high modularity, different groups of agents can form strong convictions of competing opinions. Linking the social feedback process to standard equilibrium concepts, we analytically characterize sufficient conditions for the stability of bi-polarization. While previous models have emphasized the polarization effects of deliberative, argument-based communication, our model highlights an affective, experience-based route to polarization that requires no assumptions about negative influence or bounded confidence.
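One way to read this feedback mechanism is as reinforcement learning of opinion values. The sketch below uses a hypothetical learning rate and a standard exponential-averaging update rule (our reading, not the paper's exact equations) to show how uniform agreement in a highly modular neighborhood drives the value of an opinion toward a stable conviction:

```python
import random

# Sketch of a feedback-driven value update (assumed update rule and learning
# rate, not the paper's equations): an agent keeps a value q[o] for each
# opinion o, voices the most valued one, and nudges that value toward the
# feedback (+1 agreement, -1 disagreement) from a randomly sampled neighbor.

ALPHA = 0.2  # learning rate (assumption)

def step(q, neighbor_opinions, rng):
    voiced = max(q, key=q.get)                   # express the valued opinion
    feedback = 1.0 if rng.choice(neighbor_opinions) == voiced else -1.0
    q[voiced] += ALPHA * (feedback - q[voiced])  # reinforce or suppress it
    return voiced

rng = random.Random(42)
q = {"A": 0.1, "B": 0.0}
# A highly modular environment: this agent's neighborhood uniformly holds "A".
for _ in range(50):
    step(q, ["A", "A", "A", "A"], rng)
print(q)  # the value of "A" converges toward +1, a stable conviction
```

In a modular network, agents in an opposing cluster run the same update against uniformly disagreeing feedback for opinion "A" and agreeing feedback for "B", so the two groups lock into competing convictions, which is the bi-polarization regime the abstract describes.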
Social media has grown to over two billion users to date and is commonly used to share information and shape world events. Evidence suggests that passive social media usage (i.e., viewing without taking action) has an impact on the user's perspective. This empirical influence over perspective could have a significant impact on social events. It is therefore important to understand how social media contributes to the formation of an individual's perspective. A set of experimental tasks was designed to investigate empirically derived thresholds for opinion formation as a result of passive interactions with different social media data types (i.e., videos, images, and messages). With a better understanding of how humans passively interact with social media information, a paradigm can be developed that exploits this interaction and plays a significant role in future military plans and operations.
Recent studies have shown that online users tend to select information adhering to their system of beliefs, ignore information that does not, and join groups formed around a shared narrative, i.e., echo chambers. Although a quantitative methodology for their identification is still missing, the phenomenon of echo chambers is widely debated at both the scientific and political levels. To shed light on this issue, we introduce an operational definition of echo chambers and perform a massive comparative analysis of more than 1B pieces of content produced by 1M users on four social media platforms: Facebook, Twitter, Reddit, and Gab. We infer the leaning of users on controversial topics - ranging from vaccines to abortion - and reconstruct their interaction networks by analyzing different features, such as shared link domains, followed pages, follower relationships, and commented posts. Our method quantifies the existence of echo chambers along two main dimensions: homophily in the interaction networks and bias in the information diffusion toward like-minded peers. We find peculiar differences across social media. Indeed, while Facebook and Twitter present clear-cut echo chambers in all the observed datasets, Reddit and Gab do not. Finally, we test the role of the social media platform on news consumption by comparing Reddit and Facebook. Again, we find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers.
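The homophily dimension of this quantification can be illustrated with a simplified sketch (our reading; the paper's measures and data are far richer): correlate each user's inferred leaning with the mean leaning of their network neighborhood. A strongly positive correlation indicates echo-chamber-like segregation.

```python
# Toy homophily measure (illustrative sketch): compare each user's leaning
# in [-1, 1] with the mean leaning of their neighbors in the interaction net.

def neighborhood_leaning(leaning, edges):
    """Mean leaning of each user's neighbors in an undirected network."""
    nbrs = {u: [] for u in leaning}
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)
    return {u: sum(leaning[v] for v in vs) / len(vs) for u, vs in nbrs.items() if vs}

def correlation(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: two tight opinion clusters bridged by a single edge.
leaning = {1: -0.9, 2: -0.8, 3: -0.7, 4: 0.8, 5: 0.9, 6: 0.7}
edges = [(1, 2), (2, 3), (1, 3), (4, 5), (5, 6), (4, 6), (3, 4)]

nl = neighborhood_leaning(leaning, edges)
users = sorted(nl)
r = correlation([leaning[u] for u in users], [nl[u] for u in users])
print(round(r, 2))  # close to 1: users sit among like-minded peers
```

On a well-mixed network the same statistic falls toward zero, so comparing it across platforms gives one operational axis along which Facebook and Twitter can differ from Reddit and Gab.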
The increasing pervasiveness of social media creates new opportunities to study human social behavior, while challenging our capability to analyze its massive data streams. One of the emerging tasks is to distinguish between different kinds of activities, for example, engineered misinformation campaigns versus spontaneous communication. Such detection problems require a formal definition of a meme, or unit of information that can spread from person to person through the social network. Once a meme is identified, supervised learning methods can be applied to classify different types of communication. The appropriate granularity of a meme, however, is hardly captured by existing entities such as tags and keywords. Here we present a framework for the novel task of detecting memes by clustering messages from large streams of social data. We evaluate various similarity measures that leverage content, metadata, network features, and their combinations. We also explore the idea of pre-clustering on the basis of existing entities. A systematic evaluation is carried out using a manually curated dataset as ground truth. Our analysis shows that pre-clustering and a combination of heterogeneous features yield the best trade-off between the number of clusters and their quality, demonstrating that a simple combination based on pairwise maximization of similarity is as effective as a non-trivial optimization of parameters. Our approach is fully automatic, unsupervised, and scalable for real-time detection of memes in streaming data.
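A toy version of similarity-based meme clustering might look as follows (illustrative only; the feature sets, threshold, and greedy clustering procedure are our assumptions, not the paper's pipeline). Pairwise similarity takes the maximum of Jaccard similarities over content tokens and hashtag metadata, echoing the pairwise-maximization combination described above:

```python
# Toy meme clustering sketch: combine content and metadata similarity by
# pairwise maximization, then group messages by single-link thresholding.

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def similarity(m1, m2):
    """Max over the content-token and hashtag similarity channels."""
    return max(jaccard(m1["tokens"], m2["tokens"]),
               jaccard(m1["tags"], m2["tags"]))

def cluster(messages, threshold=0.3):
    """Greedy single-link clustering: join a message to the first cluster
    containing a sufficiently similar message, else start a new cluster."""
    clusters = []
    for m in messages:
        for c in clusters:
            if any(similarity(m, other) >= threshold for other in c):
                c.append(m)
                break
        else:
            clusters.append([m])
    return clusters

msgs = [
    {"tokens": {"flu", "shot", "safe"}, "tags": {"vaccines"}},
    {"tokens": {"get", "your", "flu", "shot"}, "tags": {"health"}},
    {"tokens": {"election", "results", "tonight"}, "tags": {"politics"}},
]
groups = cluster(msgs)
print(len(groups))  # → 2: the two flu-shot messages form one meme cluster
```

Taking the maximum across channels lets a match in either content or metadata suffice, which is one simple way to combine heterogeneous features without tuning per-channel weights.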
Our opinions, which things we like or dislike, depend on the opinions of those around us. Nowadays, we are influenced by the opinions of online strangers, expressed in comments and ratings on online platforms. Here, we perform novel academic A/B testing experiments with over 2,500 participants to measure the extent of that influence. In our experiments, the participants watch and evaluate videos on mirror proxies of YouTube and Vimeo. We control the comments and ratings that are shown underneath each of these videos. Our study shows that from 5% up to 40% of subjects adopt the majority opinion of strangers expressed in the comments. Using Bayes' theorem, we derive a flexible and interpretable family of models of social influence, in which each individual forms posterior opinions stochastically following a logit model. The variants of our mixture model that maximize the Akaike information criterion represent two sub-populations: non-influenceable and influenceable individuals. The prior opinions of the non-influenceable individuals are strongly correlated with the external opinions and have low standard error, whereas the prior opinions of influenceable individuals have high standard error and become correlated with the external opinions through social influence. Our findings suggest that opinions are random variables updated via Bayes' rule, whose standard deviation is correlated with opinion influenceability. Based on these findings, we discuss how to hinder opinion manipulation and misinformation diffusion in the online realm.
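The Bayesian-logit reading described above can be sketched as follows (all parameter values are hypothetical; the paper fits its mixture model to the experimental data): an individual's log-odds of holding a positive opinion combine a prior with the external signal from strangers' comments, weighted by an influenceability term, and the result passes through a logistic choice rule.

```python
import math

# Sketch of a Bayes-style logit opinion update (hypothetical parameters,
# not the paper's fitted model): prior log-odds plus weighted external
# evidence, squashed through the logistic function.

def posterior_prob(prior_logodds, external_logodds, influenceability):
    """P(positive opinion) after combining prior and external evidence."""
    z = prior_logodds + influenceability * external_logodds
    return 1.0 / (1.0 + math.exp(-z))

# Non-influenceable individual: sharp prior, near-zero weight on strangers.
p_stubborn = posterior_prob(prior_logodds=2.0, external_logodds=-3.0,
                            influenceability=0.05)
# Influenceable individual: diffuse prior, strong weight on the majority.
p_swayed = posterior_prob(prior_logodds=0.2, external_logodds=-3.0,
                          influenceability=1.0)
print(round(p_stubborn, 2), round(p_swayed, 2))
```

Facing the same negative majority opinion, the stubborn individual stays near their prior while the influenceable one flips toward the crowd, mirroring the two sub-populations identified by the mixture model.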
