
After Sandy Hook Elementary: A Year in the Gun Control Debate on Twitter

Added by: Adrian Benton
Publication date: 2016
Language: English
Authors: Adrian Benton





The mass shooting at Sandy Hook Elementary School on December 14, 2012 catalyzed a year of active debate and legislation on gun control in the United States. Social media hosted an active public discussion in which people expressed their support for and opposition to a variety of issues surrounding gun legislation. In this paper, we show how a content-based analysis of Twitter data can provide insight into this debate. By analyzing over 70 million gun-related tweets from 2013, we estimate the relative support for and opposition to gun control measures and perform a topic analysis of each camp. We focus on spikes in conversation surrounding major gun-related events throughout the year. Our general approach can be applied to other important public health and political issues to analyze the prevalence and nature of public opinion.
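The paper's actual stance estimation and topic models are not reproduced here, but the following Python sketch illustrates the general shape of such an analysis: bucketing tweets into pro- and anti-gun-control camps by hashtag and flagging days whose conversation volume spikes. The hashtag lists, tweet fields, and spike threshold are all illustrative assumptions, not details from the paper.

```python
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

# Assumed (illustrative) hashtag lists; the paper's classification is more involved.
PRO_CONTROL_TAGS = {"#guncontrolnow", "#demandaplan"}
ANTI_CONTROL_TAGS = {"#2a", "#gunrights", "#nra"}

def stance(tweet_text: str) -> str:
    """Crude hashtag-based stance tag: pro-control, anti-control, or unclear."""
    tags = {tok.lower().rstrip(".,!?") for tok in tweet_text.split() if tok.startswith("#")}
    if tags & PRO_CONTROL_TAGS and not tags & ANTI_CONTROL_TAGS:
        return "pro-control"
    if tags & ANTI_CONTROL_TAGS and not tags & PRO_CONTROL_TAGS:
        return "anti-control"
    return "unclear"

def daily_spikes(tweets, z_threshold=2.0):
    """Days whose tweet volume exceeds the mean daily volume by more than
    z_threshold standard deviations. Assumes ISO-formatted timestamps in
    tweet["created_at"]."""
    per_day = Counter(datetime.fromisoformat(t["created_at"]).date() for t in tweets)
    volumes = list(per_day.values())
    mu, sigma = mean(volumes), stdev(volumes)
    return sorted(day for day, n in per_day.items() if n > mu + z_threshold * sigma)
```

Counting stance() over the tweets that fall inside each spike window would then give a rough per-event picture of relative support and opposition.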




Related research

This article analyses public debate on Twitter via network representations of retweets and replies. We argue that tweets observable on Twitter have both a direct and a mediated effect on the perception of public opinion. Through the interplay of the two networks, it is possible to identify potentially misleading representations of public opinion on the platform. The method is employed to observe public debate about two events: the Saxon state elections and violent riots in the city of Leipzig in 2019. We show that in both cases, (i) different opinion groups exhibit different propensities to get involved in debate, and therefore have unequal impact on public opinion; users retweeting far-right parties and politicians are significantly more active, so their positions are disproportionately visible. (ii) These users also act significantly more confrontationally, in the sense that they reply mostly to users from other groups, while the reverse is not the case.
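As an illustration only, the sketch below (using networkx) builds the two directed graphs described above, one of retweets and one of replies, and computes for each opinion group the share of its replies that target users from a different group, a crude proxy for confrontational behaviour. The tweet fields and group labels are assumptions, not the article's pipeline.

```python
import networkx as nx

def build_networks(tweets):
    """One directed graph for retweets and one for replies.
    Assumes tweet dicts with 'user', 'retweeted_user', 'replied_to_user'."""
    retweet_g, reply_g = nx.DiGraph(), nx.DiGraph()
    for t in tweets:
        if t.get("retweeted_user"):
            retweet_g.add_edge(t["user"], t["retweeted_user"])
        if t.get("replied_to_user"):
            reply_g.add_edge(t["user"], t["replied_to_user"])
    return retweet_g, reply_g

def cross_group_reply_share(reply_g, group_of):
    """For each group, the fraction of its outgoing replies aimed at users
    from a different group. group_of maps user -> group label."""
    totals, cross = {}, {}
    for src, dst in reply_g.edges():
        g_src, g_dst = group_of.get(src), group_of.get(dst)
        if g_src is None or g_dst is None:
            continue
        totals[g_src] = totals.get(g_src, 0) + 1
        if g_dst != g_src:
            cross[g_src] = cross.get(g_src, 0) + 1
    return {g: cross.get(g, 0) / n for g, n in totals.items()}
```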
In addition to posting news and status updates, many Twitter users post questions that seek various types of subjective and objective information. These questions are often labeled with Q&A hashtags, such as #lazyweb or #twoogle. We surveyed Twitter users and found they employ these Q&A hashtags both as a topical signifier (this tweet needs an answer!) and to reach out to those beyond their immediate followers (a community of helpful tweeters who monitor the hashtag). However, our log analysis of thousands of hashtagged Q&A exchanges reveals that nearly all replies to hashtagged questions come from a user's immediate follower network, contradicting users' belief that they are tapping into a larger community by tagging their question tweets. This finding has implications for designing next-generation social search systems that reach and engage a wide audience of answerers.
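A minimal sketch of that log analysis, assuming the follower graph and the Q&A exchanges have already been collected: for each hashtagged question, it measures what share of the replies came from accounts that already followed the asker. All data structures here are hypothetical.

```python
def share_of_replies_from_followers(exchanges, followers_of):
    """exchanges: list of dicts with 'asker' and 'repliers' (a list of user ids).
    followers_of: dict mapping a user id to the set of that user's followers."""
    inside = total = 0
    for q in exchanges:
        asker_followers = followers_of.get(q["asker"], set())
        for replier in q["repliers"]:
            total += 1
            if replier in asker_followers:
                inside += 1
    return inside / total if total else 0.0
```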
The ongoing Coronavirus (COVID-19) pandemic highlights the inter-connectedness of our present-day globalized world. With social distancing policies in place, virtual communication has become an important source of (mis)information. As an increasing number of people rely on social media platforms for news, identifying misinformation and uncovering the nature of online discourse around COVID-19 has emerged as a critical task. To this end, we collected streaming data related to COVID-19 using the Twitter API, starting March 1, 2020. We identified unreliable and misleading content based on fact-checking sources, and examined the narratives promoted in misinformation tweets, along with the distribution of engagements with these tweets. In addition, we provide examples of the spreading patterns of prominent misinformation tweets. The analysis is presented and updated on a publicly accessible dashboard (https://usc-melady.github.io/COVID-19-Tweet-Analysis) that tracks the nature of online discourse and misinformation about COVID-19 on Twitter from March 1 to June 5, 2020. The dashboard provides a daily list of identified misinformation tweets, along with topics, sentiments, and emerging trends in the COVID-19 Twitter discourse. The dashboard is provided to improve visibility into the nature and quality of information shared online, and to provide real-time access to insights and information extracted from the dataset.
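The dashboard's actual pipeline is not described in detail in this abstract, but a simplified sketch of the flagging step might look like the following: tweets are marked as potential misinformation when they link to domains rated unreliable by fact-checking sources, and engagement with the flagged tweets is tallied. The domain list and tweet fields are placeholders.

```python
from urllib.parse import urlparse

# Placeholder list; in practice this would come from fact-checking sources.
UNRELIABLE_DOMAINS = {"example-fake-news.com", "another-dubious-site.net"}

def flag_misinformation(tweets):
    """Tweets whose shared URLs point at a domain on the unreliable list.
    Assumes each tweet dict carries a 'urls' list of expanded URLs."""
    flagged = []
    for t in tweets:
        domains = {urlparse(u).netloc.lower().removeprefix("www.") for u in t.get("urls", [])}
        if domains & UNRELIABLE_DOMAINS:
            flagged.append(t)
    return flagged

def engagement_totals(flagged):
    """Aggregate engagement with the flagged tweets."""
    return {
        "retweets": sum(t.get("retweet_count", 0) for t in flagged),
        "likes": sum(t.get("favorite_count", 0) for t in flagged),
    }
```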
On social media platforms like Twitter, users are often interested in gaining more influence and popularity by growing their set of followers, i.e., their audience. Several studies have described the properties of users on Twitter based on static snapshots of their follower network. Other studies have analyzed the general process of link formation. Here, rather than investigating the dynamics of this process itself, we study how the characteristics of the audience and follower links change as a user's audience grows in size on the road to popularity. To begin with, we find that the early followers tend to be more elite users than the late followers, i.e., they are more likely to have verified and expert accounts. Moreover, the early followers are significantly more similar to the person that they follow than the late followers: they are more likely to share time zone, language, and topics of interest with the followed user. To some extent, these phenomena are related to the growth of Twitter itself, wherein the early followers tend to be early adopters of Twitter, while the late followers are late adopters. We isolate, however, the effect of the growth of an audience of followers from the growth of Twitter's user base itself. Finally, we measure the engagement of such audiences with the content of the followed user by measuring the probability that an early or late follower becomes a retweeter.
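The retweeter-probability comparison can be sketched as follows: split a user's followers into the earliest and latest halves by follow time and compare the share of each half that ever retweeted the followed user. This is only an illustration of the measurement, with assumed inputs, not the study's estimator.

```python
def retweet_rate_early_vs_late(follow_events, retweeters):
    """follow_events: list of (follower_id, follow_time) pairs for one followed user.
    retweeters: set of follower ids that retweeted that user at least once."""
    ordered = [follower for follower, _ in sorted(follow_events, key=lambda e: e[1])]
    half = len(ordered) // 2
    early, late = ordered[:half], ordered[half:]

    def rate(group):
        return sum(f in retweeters for f in group) / len(group) if group else 0.0

    return {"early": rate(early), "late": rate(late)}
```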
Far-right actors are often purveyors of Islamophobic hate speech online, using social media to spread divisive and prejudiced messages. Hateful content can inflict harm on targeted victims, create a sense of fear amongst communities, and stir up intergroup tensions and conflict. Accordingly, there is a pressing need to better understand, at a granular level, how Islamophobia manifests online and who produces it. We investigate the dynamics of Islamophobia amongst Twitter followers of a prominent UK far-right political party, the British National Party. Analysing a new data set of five million tweets, collected over a period of one year, using a machine learning classifier and latent Markov modelling, we identify seven types of Islamophobic far-right actors, capturing qualitative, quantitative, and temporal differences in their behaviour. Notably, we show that a small number of users are responsible for most of the Islamophobia that we observe. We then discuss the policy implications of this typology in the context of social media regulation.
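To make the concentration claim concrete, a small sketch: given per-user counts of tweets that a classifier has flagged as Islamophobic, compute what share of all flagged tweets comes from the most active slice of users. The classifier itself is out of scope here; the counts are assumed inputs.

```python
def top_user_share(flagged_counts, top_fraction=0.05):
    """Share of all flagged tweets produced by the top `top_fraction` of users,
    ranked by how many of their tweets were flagged."""
    counts = sorted(flagged_counts.values(), reverse=True)
    total = sum(counts)
    if total == 0:
        return 0.0
    k = max(1, int(len(counts) * top_fraction))
    return sum(counts[:k]) / total
```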
