
Social media cluster dynamics create resilient global hate highways

Added by Neil F. Johnson
Publication date: 2018
Fields: Physics
Language: English





Online social media allows individuals to cluster around common interests - including hate. We show that tight-knit social clusters interlink to form resilient global hate highways that bridge independent social network platforms, countries, languages and ideologies, and can quickly self-repair and rewire. We provide a mathematical theory that reveals a hidden resilience in the global axis of hate; explains a likely ineffectiveness of current control methods; and offers improvements. Our results reveal new science for networks-of-networks driven by bipartite dynamics, and should apply more broadly to illicit networks.
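To make the cluster-rewiring picture concrete, the toy sketch below (my own illustration, not the paper's mathematical theory) builds a small user–cluster network spanning two hypothetical platforms, removes a banned cluster, and rewires its orphaned members into a surviving cluster, showing how global connectivity can re-form after a takedown. All platform and cluster names are invented.

```python
# Toy illustration (not the paper's model): clusters on two platforms are
# linked by shared members; removing a "bridge" cluster triggers rewiring
# of its members into a surviving cluster, restoring global connectivity.
import networkx as nx

G = nx.Graph()
# Bipartite-style structure: user nodes attach to cluster nodes.
clusters = {"A_platform1": range(0, 20), "B_platform2": range(15, 35)}
for c, members in clusters.items():
    G.add_node(c, kind="cluster")
    for u in members:
        G.add_edge(c, f"user{u}")      # users 15-19 bridge the two clusters

print("connected before ban:", nx.is_connected(G))

# Platform moderators ban cluster A; its orphaned users rewire to cluster B.
orphans = [u for u in G.neighbors("A_platform1")]
G.remove_node("A_platform1")
for u in orphans:
    G.add_edge("B_platform2", u)       # self-repair: the highway re-forms elsewhere

print("connected after rewiring:", nx.is_connected(G))
```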




Related research

We introduce a mathematical description of the impact of sociality on the spread of infectious diseases by integrating epidemiological dynamics with a kinetic modeling of population-based contacts. The kinetic description leads to the study of the evolution over time of Boltzmann-type equations describing the number densities of social contacts of susceptible, infected and recovered individuals, whose proportions are driven by a classical SIR-type compartmental model in epidemiology. Explicit calculations show that the spread of the disease is closely related to moments of the contact distribution. Furthermore, the kinetic model makes it possible to clarify how a selective control can achieve a minimal lockdown strategy by reducing contacts only for individuals who undergo a very large number of daily contacts. We conduct numerical simulations which confirm the ability of the model to describe different phenomena characteristic of the rapid spread of an epidemic. Motivated by the COVID-19 pandemic, the last part is dedicated to fitting numerical solutions of the proposed model to infection data from different European countries.
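As a rough numerical illustration of why moments of the contact distribution matter, the sketch below uses a deliberately simplified SIR system in which transmission scales with the mean number of daily contacts; the gamma-distributed contacts, the rates, and the contact cap of 15 are assumptions for demonstration, not the paper's Boltzmann-kinetic model.

```python
# Minimal sketch (a simplification, not the paper's kinetic model): an SIR
# system whose transmission rate scales with the mean number of daily contacts,
# so capping only the heaviest-contact individuals ("selective lockdown")
# lowers the contact mean and hence the epidemic peak.
import numpy as np

def sir_peak(contacts, beta=0.05, gamma=0.1, days=300):
    """Integrate a discrete-time SIR model with transmission proportional to mean contacts."""
    lam = beta * contacts.mean()          # effective transmission rate
    S, I, R = 0.999, 0.001, 0.0
    peak = I
    for _ in range(days):
        new_inf = lam * S * I
        S, I, R = S - new_inf, I + new_inf - gamma * I, R + gamma * I
        peak = max(peak, I)
    return peak

rng = np.random.default_rng(0)
contacts = rng.gamma(shape=2.0, scale=5.0, size=100_000)   # skewed daily-contact distribution
capped = np.minimum(contacts, 15.0)                        # cap only the heaviest contacts

print(f"peak infected, no control : {sir_peak(contacts):.3f}")
print(f"peak infected, capped >15 : {sir_peak(capped):.3f}")
```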
We show that malicious COVID-19 content, including hate speech, disinformation, and misinformation, exploits the multiverse of online hate to spread quickly beyond the control of any individual social media platform. Machine learning topic analysis shows quantitatively how online hate communities are weaponizing COVID-19, with topics evolving rapidly and content becoming increasingly coherent. Our mathematical analysis provides a generalized form of the public health $R_0$ predicting the tipping point for multiverse-wide viral spreading, which suggests new policy options to mitigate the global spread of malicious COVID-19 content without relying on future coordination between all online platforms.
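One standard way such a generalized $R_0$ can be computed for several coupled platforms is as the spectral radius of a next-generation matrix; the sketch below illustrates that idea with an invented matrix K and should not be read as the paper's exact formula.

```python
# Illustrative sketch only: generalize R0 to coupled platforms via the spectral
# radius of a "next-generation" matrix K, where K[i][j] is the expected number
# of newly infected clusters on platform i seeded by one infected cluster on
# platform j. The numbers below are made up for demonstration.
import numpy as np

K = np.array([
    [1.2, 0.4, 0.1],   # within / into platform 1
    [0.3, 0.8, 0.5],   # within / into platform 2
    [0.2, 0.6, 0.7],   # within / into platform 3
])

R0_multi = max(abs(np.linalg.eigvals(K)))
print(f"multiverse-wide R0 ~ {R0_multi:.2f}  (spreading tips over when > 1)")
```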
Hateful rhetoric is plaguing online discourse, fostering extreme societal movements and possibly giving rise to real-world violence. A potential solution to this growing global problem is citizen-generated counter speech, where citizens actively engage in hate-filled conversations to attempt to restore civil, non-polarized discourse. However, its actual effectiveness in curbing the spread of hatred is unknown and hard to quantify. One major obstacle to researching this question is a lack of large labeled data sets for training automated classifiers to identify counter speech. Here we made use of a unique situation in Germany where self-labeling groups engaged in organized online hate and counter speech. We used an ensemble learning algorithm which pairs a variety of paragraph embeddings with regularized logistic regression functions to classify both hate and counter speech in a corpus of millions of relevant tweets from these two groups. Our pipeline achieved macro F1 scores on out-of-sample balanced test sets ranging from 0.76 to 0.97, accuracy in line with and even exceeding the state of the art. On thousands of tweets, we used crowdsourcing to verify that the judgments made by the classifier are in close alignment with human judgment. We then used the classifier to discover hate and counter speech in more than 135,000 fully-resolved Twitter conversations occurring from 2013 to 2018 and studied their frequency and interaction. Altogether, our results highlight the potential of automated methods to evaluate the impact of coordinated counter speech in stabilizing conversations on social media.
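A minimal sketch of this kind of pipeline is shown below, with TF-IDF features standing in for the paragraph embeddings and a toy, invented set of balanced hate/counter-speech texts; it illustrates only the regularized logistic regression plus macro-F1 evaluation step, not the full ensemble described above.

```python
# Minimal sketch: regularized logistic regression over text features, evaluated
# with macro F1 on a balanced split. TF-IDF stands in for paragraph embeddings;
# the texts and labels below are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

texts  = ["they should all be deported", "hate has no place here",
          "go back where you came from", "please keep this discussion civil"] * 50
labels = ["hate", "counter", "hate", "counter"] * 50      # balanced toy labels

X_tr, X_te, y_tr, y_te = train_test_split(texts, labels, test_size=0.25,
                                          stratify=labels, random_state=0)

clf = make_pipeline(TfidfVectorizer(),
                    LogisticRegression(C=1.0, max_iter=1000))  # L2-regularized
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```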
The damaging effects of hate speech on social media have become evident over the last few years, and several organizations, researchers and social media platforms have tried to curb them in various ways. Despite these efforts, social media users are still affected by hate speech. The problem is even more apparent for social groups that promote public discourse, such as journalists. In this work, we focus on countering hate speech targeted at journalistic social media accounts. To accomplish this, a group of journalists assembled a definition of hate speech, taking into account the journalistic point of view and the types of hate speech that are usually directed against journalists. We then compile a large pool of tweets referring to journalism-related accounts in multiple languages. In order to annotate the pool of unlabeled tweets according to this definition, we follow a concise annotation strategy that involves active learning annotation stages. The outcome of this paper is a novel, publicly available collection of Twitter datasets in five different languages. Additionally, we experiment with state-of-the-art deep learning architectures for hate speech detection and use our annotated datasets to train and evaluate them. Finally, we propose an ensemble detection model that outperforms all individual models.
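The snippet below sketches a single uncertainty-sampling round of the kind used in active-learning annotation stages; the seed labels, the pool of tweets, and the least-confident selection rule are placeholders for illustration rather than the authors' exact setup.

```python
# Sketch of one active-learning round: train on the tweets labeled so far,
# then send the pool items the classifier is least confident about to human
# annotators. All texts and labels here are invented placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled   = ["you people are vermin", "thanks for the thoughtful reply"]
y_labeled = [1, 0]                                  # 1 = hate, 0 = not hate
pool      = ["great interview today", "journalists like you deserve worse",
             "looking forward to the next article", "shut up or else"]

vec = TfidfVectorizer().fit(labeled + pool)
clf = LogisticRegression(max_iter=1000).fit(vec.transform(labeled), y_labeled)

proba = clf.predict_proba(vec.transform(pool))
uncertainty = 1 - proba.max(axis=1)                 # least-confident sampling
to_annotate = [pool[i] for i in np.argsort(-uncertainty)[:2]]
print("send to human annotators next:", to_annotate)
```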
We address the diffusion of information about COVID-19 with a massive data analysis on Twitter, Instagram, YouTube, Reddit and Gab. We analyze engagement and interest in the COVID-19 topic and provide a differential assessment of the evolution of the discourse on a global scale for each platform and its users. We fit information spreading with epidemic models, characterizing the basic reproduction number $R_0$ for each social media platform. Moreover, we characterize information spreading from questionable sources, finding different volumes of misinformation on each platform. However, information from reliable and questionable sources does not present different spreading patterns. Finally, we provide platform-dependent numerical estimates of rumor amplification.
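As an illustration of fitting an epidemic-style curve to per-platform engagement, the sketch below fits a logistic growth model to synthetic cumulative counts and converts the estimated growth rate into an implied $R_0$ via the simple relation $R_0 = 1 + r/\gamma$; the data, the recovery ("forgetting") rate, and that relation are all assumptions for demonstration, not the paper's fitting procedure.

```python
# Illustrative sketch: fit logistic growth to cumulative engagement counts for
# one platform and convert the growth rate r into an implied R0 assuming a
# forgetting rate gamma. All numbers below are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Logistic growth: cumulative adopters of the topic over time."""
    return K / (1.0 + np.exp(-r * (t - t0)))

days = np.arange(30)
counts = logistic(days, K=10_000, r=0.35, t0=12) \
         + np.random.default_rng(1).normal(0, 150, 30)     # noisy synthetic data

(K_hat, r_hat, t0_hat), _ = curve_fit(logistic, days, counts, p0=[5_000, 0.2, 10])
gamma = 0.2                                                 # assumed forgetting rate
print(f"growth rate r = {r_hat:.2f}/day, implied R0 ~ {1 + r_hat / gamma:.2f}")
```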
