A growing number of empirical studies suggest that negative advertising is effective in campaigning, though the mechanisms behind its effectiveness are rarely examined. With the Cambridge Analytica scandal and Russian interference behind the Brexit referendum and the 2016 U.S. presidential election, people have become aware of political ads on social media and have pressured Congress to restrict political advertising there. Following the related legislation, social media companies began disclosing their political ad archives for transparency in the summer of 2018, just as the midterm election campaign was beginning. This research collects data on related political ads in the context of the U.S. midterm elections since August to study the overall pattern of political ads on social media, and uses a set of machine learning methods to conduct sentiment analysis on these ads and classify the negative ones. A novel approach applies AI image recognition to study the image data. Through data visualization, this research shows that negative advertising is still in the minority, and that Republican advertisers and third-party organizations are more likely to engage in negative advertising than their counterparts. Based on ordinal regressions, this study finds that anger-evoked information seeking, rather than negativity bias theory, is one of the main mechanisms making negative ads more engaging and effective. Overall, this study provides a unique understanding of political advertising on social media by applying innovative data science methods. Further studies can extend the findings, methods, and datasets of this study, and several suggestions are given for future research.
Businesses communicate using Twitter for a variety of reasons -- to raise awareness of their brands, to market new products, to respond to community comments, and to connect with their customers and potential customers in a targeted manner. For businesses to do this effectively, they need to understand which content and structural elements of a tweet make it influential, that is, widely liked, followed, and retweeted. This paper presents a systematic methodology for analyzing commercial tweets and predicting their influence on readers. Our model, which uses a combination of decoration and meta features, outperforms the prediction ability of both the baseline model and the tweet embedding model. Further, to demonstrate a practical use of this work, we show how an unsuccessful tweet may be engineered (for example, reworded) to increase its potential for success.
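As an illustration of what "decoration" features of a tweet might look like, the sketch below counts a few surface elements of a tweet's text. The feature names and choices here are hypothetical; the paper's actual feature set may differ:

```python
import re

def decoration_features(tweet):
    """Count simple 'decoration' elements of a tweet's text.

    Feature names are illustrative assumptions, not the paper's exact set.
    """
    return {
        "n_hashtags": len(re.findall(r"#\w+", tweet)),       # hashtags like #coffee
        "n_mentions": len(re.findall(r"@\w+", tweet)),       # mentions like @barista
        "n_urls": len(re.findall(r"https?://\S+", tweet)),   # embedded links
        "n_exclaims": tweet.count("!"),                      # exclamation marks
    }

feats = decoration_features("Big news! Try our new latte #coffee @barista https://ex.am/ple")
print(feats)
```

Such counts could then be concatenated with meta features (e.g., follower counts, posting time) to form the model's input vector.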
Companies and financial investors are paying increasing attention to social consciousness in developing their corporate strategies and making investment decisions to support a sustainable economy for the future. Public discussion of a company's incidents and events -- controversies -- can provide valuable insight into how well the company operates with regard to social consciousness and indicate the company's overall operational capability. However, evaluating the degree of a company's social consciousness and environmental sustainability is challenging due to the lack of systematic data. We introduce a system that utilizes Twitter data to detect and monitor controversial events and show their impact on market volatility. In our study, controversial events are identified from clusters of tweets that share the same 5W terms and sentiment polarity. Credible news links inside the event tweets are used to validate the truth of the event. A case study on the Starbucks Philadelphia arrests shows that this method can provide the desired functionality.
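The event-detection step, grouping tweets that share the same 5W terms and sentiment polarity, might be sketched as below. The annotated tweets and field names are invented for illustration, and the 5W extraction step itself is out of scope here:

```python
from collections import defaultdict

# Toy tweets annotated with extracted 5W terms (who/what/when/where/why)
# and a sentiment polarity; the extraction itself is assumed done upstream.
tweets = [
    {"text": "Starbucks arrests two men in Philadelphia",
     "terms": frozenset({"starbucks", "philadelphia", "arrest"}), "polarity": "neg"},
    {"text": "Outrage over Philadelphia Starbucks arrest",
     "terms": frozenset({"starbucks", "philadelphia", "arrest"}), "polarity": "neg"},
    {"text": "Starbucks opens new roastery",
     "terms": frozenset({"starbucks", "roastery"}), "polarity": "pos"},
]

def cluster_events(tweets):
    """Group tweets that share the same 5W term set and sentiment polarity."""
    clusters = defaultdict(list)
    for t in tweets:
        clusters[(t["terms"], t["polarity"])].append(t["text"])
    return clusters

clusters = cluster_events(tweets)
print(len(clusters))  # the first two tweets form one event cluster
```

Each resulting cluster is a candidate controversial event, which the paper then validates against credible news links found in the clustered tweets.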
Problem definition: Corporate brands, grassroots activists, and ordinary citizens all routinely employ word-of-mouth (WoM) diffusion to promote products and instigate social change. Our work models the formation and spread of negative attitudes via WoM on a social network represented by a directed graph. In an online learning setting, we examine how an agent could simultaneously learn diffusion parameters and choose sets of seed users to initiate diffusions and maximize positive influence. In contrast to edge-level feedback, in which an agent observes the relationship (edge) through which a user (node) is influenced, we more realistically assume node-level feedback, where an agent only observes when a user is influenced and whether that influence is positive or negative. Methodology/results: We propose a new class of negativity-aware Linear Threshold Models. We show that in these models, the expected positive influence spread is a monotone submodular function of the seed set. Therefore, when maximizing positive influence by selecting a seed set of fixed size, a greedy algorithm can guarantee a solution with a constant approximation ratio. For the online learning setting, we propose an algorithm that runs in epochs of growing lengths, each consisting of a fixed number of exploration rounds followed by an increasing number of exploitation rounds controlled by a hyperparameter. Under mild assumptions, we show that our algorithm achieves asymptotic expected average scaled regret that is inversely related to any fractional constant power of the number of rounds. Managerial implications: During seed selection, our negativity-aware models and algorithms allow WoM campaigns to discover and best account for characteristics of local users and propagated content. We also give the first algorithms with regret guarantees for influence maximization under node-level feedback.
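The approximation guarantee rests on the expected positive spread being monotone submodular, which makes the standard greedy seed-selection routine a constant-factor approximation (the classic bound is 1 - 1/e). A minimal sketch of greedy selection, using a toy coverage function as a stand-in for the expected positive spread oracle (the graph data and names here are illustrative, not the paper's implementation):

```python
def greedy_seed_selection(nodes, spread, k):
    """Greedily pick k seeds, each time adding the node with maximum
    marginal gain under a monotone submodular spread function."""
    seeds = set()
    for _ in range(k):
        best, best_gain = None, -1.0
        for v in nodes:
            if v in seeds:
                continue
            gain = spread(seeds | {v}) - spread(seeds)  # marginal gain of v
            if gain > best_gain:
                best, best_gain = v, gain
        seeds.add(best)
    return seeds

# Toy stand-in oracle: each seed "positively influences" a fixed set of
# users, and spread is the size of the union (a coverage function, which
# is monotone submodular like the paper's expected positive spread).
coverage = {
    "a": {"a", "b", "c"},
    "b": {"b", "d"},
    "c": {"c", "e", "f"},
    "d": {"d"},
}

def spread(seed_set):
    covered = set()
    for v in seed_set:
        covered |= coverage[v]
    return len(covered)

chosen = greedy_seed_selection(list(coverage), spread, 2)
print(chosen)
```

In the paper's setting the oracle would instead estimate expected positive spread under the negativity-aware Linear Threshold Model, typically via Monte Carlo simulation.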
Detecting and suspending fake accounts (Sybils) in online social networking (OSN) services protects both OSN operators and OSN users from illegal exploitation. Existing social-graph-based defense schemes effectively bound the number of accepted Sybils by the total number of social connections between Sybils and non-Sybil users. However, Sybils may still evade these defenses by soliciting many social connections to real users. We propose SybilFence, a system that improves on social-graph-based Sybil defenses to further thwart Sybils. SybilFence is based on the observation that even well-maintained fake accounts inevitably receive a significant amount of negative user feedback, such as rejections of their friend requests. Our key idea is to discount the social edges of users that have received negative feedback, thereby limiting the impact of Sybils' social edges. Preliminary simulation results show that our proposal is more resilient to attacks in which fake accounts continuously solicit social connections over time.
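The key idea, discounting social edges incident to users who have received negative feedback, can be sketched as follows. The exponential discount rule and the `alpha` parameter are illustrative assumptions; the abstract does not specify the actual discount function:

```python
def discount_edges(graph, feedback, alpha=0.5):
    """Scale down the weight of each edge by alpha raised to the total
    negative-feedback count of its two endpoints.

    `alpha` and the exponential rule are illustrative assumptions.
    """
    discounted = {}
    for (u, v), w in graph.items():
        penalty = alpha ** (feedback.get(u, 0) + feedback.get(v, 0))
        discounted[(u, v)] = w * penalty
    return discounted

graph = {("a", "b"): 1.0, ("b", "c"): 1.0}
feedback = {"b": 2}  # user b had 2 friend requests rejected
discounted = discount_edges(graph, feedback)
print(discounted)
```

A downstream social-graph-based Sybil defense would then run on the discounted edge weights, so a Sybil's solicited connections count for less.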
The COVID-19 pandemic has generated what public health officials have called an "infodemic" of misinformation. As social distancing and stay-at-home orders came into effect, many turned to social media for socializing, and this increase in usage has made social media a prime vehicle for spreading misinformation. This paper presents a mechanism to detect COVID-19 health-related misinformation on social media following an interdisciplinary approach. Using social psychology as a foundation along with existing misinformation frameworks, we define misinformation themes and associated keywords, which are incorporated into a misinformation detection mechanism built with applied machine learning techniques. Next, using a Twitter dataset, we explore the performance of the proposed methodology with multiple state-of-the-art machine learning classifiers. Our method shows promising results, reaching up to 78% accuracy in classifying health-related misinformation versus true information using unigram-based NLP features extracted from tweets and a Decision Tree classifier. We also provide suggestions on alternatives for countering misinformation and discuss ethical considerations for the study.
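The best-performing configuration reported, unigram features feeding a Decision Tree, can be sketched with scikit-learn. The toy tweets, labels, and pipeline details below are illustrative assumptions, not the paper's dataset or exact setup:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Invented toy data: 1 = misinformation, 0 = true information
tweets = [
    "drinking bleach cures covid",
    "5g towers spread the virus",
    "wash your hands and wear a mask",
    "vaccines are tested in clinical trials",
]
labels = [1, 1, 0, 0]

# Unigram bag-of-words features into a Decision Tree, mirroring the
# paper's best-performing configuration at a sketch level
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 1)),  # unigrams only
    DecisionTreeClassifier(random_state=0),
)
model.fit(tweets, labels)

prediction = model.predict(["bleach cures the virus"])
print(prediction)
```

In practice the keyword-filtered themes described in the paper would narrow the tweet stream before classification, and accuracy would be measured on a held-out labeled set rather than training data.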