
Modeling Popularity in Asynchronous Social Media Streams with Recurrent Neural Networks

Added by Swapnil Mishra
Publication date: 2018
Research language: English





Understanding and predicting the popularity of online items is an important open problem in social media analysis. Considerable progress has been made recently in data-driven predictions and in linking popularity to external promotions. However, existing methods typically focus on a single source of external influence, whereas for many types of online content, such as YouTube videos or news articles, attention is driven by multiple heterogeneous sources simultaneously, e.g., microblogs or traditional media coverage. Here, we propose RNN-MAS, a recurrent neural network for modeling asynchronous streams. It is a sequence generator that connects multiple streams of different granularity via joint inference. We show that RNN-MAS not only outperforms the current state-of-the-art YouTube popularity prediction system by 17%, but also captures complex dynamics, such as seasonal trends of unseen influence. We define two new metrics: the promotion score quantifies the gain in popularity from one unit of promotion for a YouTube video, and the loudness level captures the effect of a particular user tweeting about the video. We use the loudness level to compare the effect of a video being promoted by a single highly followed user (in the top 1% of most-followed users) against being promoted by a group of mid-followed users. We find that the results depend on the type of content being promoted: superusers are more successful in promoting Howto and Gaming videos, whereas a cohort of regular users is more influential for Activism videos. This work provides more accurate and explainable popularity predictions, as well as computational tools for content producers and marketers to allocate resources for promotion campaigns.
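The listing does not include code, but the core idea of feeding a recurrent network a merged, time-ordered sequence of events from heterogeneous, asynchronously arriving streams can be illustrated with a minimal sketch. The event encoding below (inter-event time, magnitude, one-hot stream id) and names such as AsyncStreamRNN are illustrative assumptions, not the authors' RNN-MAS implementation.

```python
import torch
import torch.nn as nn

class AsyncStreamRNN(nn.Module):
    """Toy RNN over a merged, time-ordered sequence of events from several streams."""
    def __init__(self, n_streams: int, hidden: int = 32):
        super().__init__()
        # each event = [time since previous event, magnitude, one-hot stream id]
        self.rnn = nn.LSTM(input_size=2 + n_streams, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # predicted popularity gain until the next event

    def forward(self, events: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(events)            # (batch, seq_len, hidden)
        return self.head(h).squeeze(-1)    # (batch, seq_len)

# toy usage: 2 external streams (e.g. tweets and shares), 5 events
model = AsyncStreamRNN(n_streams=2)
events = torch.randn(1, 5, 4)              # hypothetical encoded event sequence
print(model(events).shape)                 # torch.Size([1, 5])
```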




Read More

People differ in how they attend to, interpret, and respond to their surroundings. Convergent processing of the world may be one factor that contributes to social connections between individuals. We used neuroimaging and network analysis to investigate whether the most central individuals in their communities (as measured by in-degree centrality, a notion of popularity) process the world in a particularly normative way. More central individuals had exceptionally similar neural responses to their peers and especially to each other in brain regions associated with high-level interpretations and social cognition (e.g., in the default-mode network), whereas less central individuals exhibited more idiosyncratic responses. Self-reported enjoyment of and interest in stimuli followed a similar pattern, but accounting for these data did not change our main results. These findings suggest an Anna Karenina principle in social networks: highly central individuals process the world in exceptionally similar ways, whereas less central individuals process the world in idiosyncratic ways.
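As a rough illustration of the two quantities this study relates (in-degree centrality as a popularity measure, and the similarity of one person's responses to those of their peers), here is a minimal sketch on made-up data; the nomination graph, time courses, and leave-one-out correlation are assumptions, not the study's neuroimaging pipeline.

```python
import numpy as np
import networkx as nx

# hypothetical friendship-nomination graph; an edge u -> v means u nominated v
g = nx.DiGraph([("a", "b"), ("c", "b"), ("b", "a"), ("c", "a")])
centrality = nx.in_degree_centrality(g)        # popularity proxy (in-degree)

rng = np.random.default_rng(0)
responses = {n: rng.standard_normal(200) for n in g}   # per-subject response time course

def similarity_to_peers(subject: str) -> float:
    """Correlate one subject's time course with the mean of everyone else's."""
    peer_mean = np.mean([responses[n] for n in g if n != subject], axis=0)
    return float(np.corrcoef(responses[subject], peer_mean)[0, 1])

for n in g:
    print(n, round(centrality[n], 2), round(similarity_to_peers(n), 2))
```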
Although significant effort has been applied to fact-checking, the prevalence of fake news on social media, which has a profound impact on justice, public trust and our society, remains a serious problem. In this work, we focus on propagation-based fake news detection, as recent studies have demonstrated that fake news and real news spread differently online. Specifically, considering the capability of graph neural networks (GNNs) in dealing with non-Euclidean data, we use GNNs to differentiate between the propagation patterns of fake and real news on social media. In particular, we concentrate on two questions: (1) Without relying on any text information, e.g., tweet content, replies and user descriptions, how accurately can GNNs identify fake news? Machine learning models are known to be vulnerable to adversarial attacks, and avoiding the dependence on text-based features can make the model less susceptible to manipulation by advanced fake news fabricators. (2) How to deal with new, unseen data? In other words, how does a GNN trained on a given dataset perform on a new and potentially vastly different dataset? If it achieves unsatisfactory performance, how do we solve the problem without re-training the model on the entire data from scratch? We study the above questions on two datasets with thousands of labelled news items, and our results show that: (1) GNNs can achieve performance comparable or superior to state-of-the-art methods without any text information. (2) GNNs trained on a given dataset may perform poorly on new, unseen data, and direct incremental training cannot solve the problem; this issue has not been addressed in previous work that applies GNNs to fake news detection. To solve the problem, we propose a method that achieves balanced performance on both existing and new datasets by using techniques from continual learning to train GNNs incrementally.
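To make the propagation-based, text-free setup concrete, the sketch below runs a single graph-convolution layer over a toy retweet cascade with purely structural node features and pools the result into a fake/real score. The adjacency matrix, features, and single-layer architecture are illustrative assumptions, not the paper's model.

```python
import numpy as np

def gcn_layer(adj: np.ndarray, feats: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One row-normalised graph-convolution layer with self-loops and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])
    norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return np.maximum(norm @ feats @ weight, 0.0)

rng = np.random.default_rng(1)
# toy cascade: the source tweet (node 0) is retweeted by nodes 1 and 2
adj = np.array([[0, 1, 1],
                [0, 0, 0],
                [0, 0, 0]], dtype=float)
feats = rng.standard_normal((3, 4))            # structural features only (delay, depth, ...)
w1, w_out = rng.standard_normal((4, 8)), rng.standard_normal((8, 1))

h = gcn_layer(adj, feats, w1)                  # node embeddings
score = 1.0 / (1.0 + np.exp(-(h.mean(axis=0) @ w_out)))   # pooled fake/real probability
print(score)
```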
Contagion dynamics can emerge in social networks when repeated activation is allowed. An interesting example of this phenomenon is retweet cascades, where users re-share content posted by other people with public accounts. To model this type of behaviour we use a Hawkes self-exciting process. Doing this properly, however, requires calibrating the model under consideration. The main goal of this paper is to construct a method-of-moments estimator for this model. The key step is based on identifying the generator of a Hawkes process. We also perform numerical analysis on real data.
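For readers unfamiliar with Hawkes processes, the following sketch shows the conditional intensity with an exponential kernel and the first-moment relation that a moments-based calibration can exploit. The parameter values and event times are made up; this is a generic illustration, not the estimator constructed in the paper.

```python
import numpy as np

def intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    past = events[events < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = np.array([0.5, 1.1, 1.3, 2.7, 4.0])   # hypothetical retweet times
mu, alpha, beta = 0.8, 0.6, 1.2
print(intensity(5.0, events, mu, alpha, beta))

# First-moment relation for a stationary Hawkes process with branching ratio alpha/beta:
#   E[N(0, T)] / T = mu / (1 - alpha / beta)
# Matching the observed rate N/T to this expression gives one estimating equation, e.g.
T = 5.0
rate_hat = len(events) / T
mu_hat = rate_hat * (1 - alpha / beta)          # background rate implied by the observed rate
print(rate_hat, mu_hat)
```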
We address the problem of maximizing user engagement with content (in the form of likes, replies, retweets, and retweets with comments) on the Twitter platform. We formulate the engagement forecasting task as a multi-label classification problem that captures choice behavior on an unsupervised clustering of tweet topics. We propose a neural network architecture that incorporates user engagement history and predicts choice conditional on this context. We study the impact of recommending tweets on engagement outcomes by solving an appropriately defined tweet optimization problem based on the proposed model, using a large dataset obtained from Twitter.
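A minimal sketch of the multi-label formulation: a small network maps a user-history embedding and a tweet-topic embedding to four engagement logits trained with a binary cross-entropy loss. The embedding dimensions and the name EngagementModel are assumptions, not the proposed architecture.

```python
import torch
import torch.nn as nn

class EngagementModel(nn.Module):
    """Maps (user-history embedding, tweet-topic embedding) to 4 engagement logits."""
    def __init__(self, hist_dim: int = 16, topic_dim: int = 8, n_labels: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hist_dim + topic_dim, 32), nn.ReLU(),
                                 nn.Linear(32, n_labels))

    def forward(self, history: torch.Tensor, topic: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([history, topic], dim=-1))

model = EngagementModel()
logits = model(torch.randn(2, 16), torch.randn(2, 8))      # 2 (user, tweet) pairs
targets = torch.randint(0, 2, (2, 4)).float()               # like/reply/retweet/quote labels
print(nn.BCEWithLogitsLoss()(logits, targets).item())
```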
Predicting the popularity of online content is a fundamental problem in various application areas. One practical challenge for popularity prediction takes root in the different settings of popularity prediction tasks in different situations, e.g., the varying lengths of the observation window or prediction horizon. In other words, a good model for popularity prediction should handle various tasks with different settings. However, the conventional paradigm is to train a separate prediction model for each task, and the model obtained for one task is difficult to generalize to other tasks, causing a great waste of training time and computational resources. To solve this issue, in this paper we propose a novel pre-training framework for popularity prediction, aiming to pre-train a general deep representation model by learning intrinsic knowledge about popularity dynamics from readily available diffusion cascades. We design a novel pretext task for pre-training, i.e., temporal context prediction for two randomly sampled time slices of popularity dynamics, which encourages the deep prediction model to effectively capture the characteristics of popularity dynamics. Taking a state-of-the-art deep model, i.e., a temporal convolutional neural network, as an instantiation of our proposed framework, experimental results on both Sina Weibo and Twitter datasets demonstrate both the effectiveness and efficiency of the proposed pre-training framework for multiple popularity prediction tasks.
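The pretext task can be sketched roughly as follows: encode two randomly sampled slices of a cascade's popularity time series with a shared 1-D convolutional encoder and classify their temporal relation (here, simply which slice comes first). The slice sizes, the tiny encoder, and the binary label are simplifying assumptions rather than the paper's exact temporal-context objective.

```python
import torch
import torch.nn as nn

# shared encoder for a popularity-count slice; a stand-in for the paper's temporal conv net
encoder = nn.Sequential(nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool1d(1), nn.Flatten())
classifier = nn.Linear(16, 2)                   # predicts which slice comes first

series = torch.rand(1, 1, 200)                  # one cascade's per-hour view counts (toy)
i, j = 20, 120                                  # two sampled slice start positions
slice_a, slice_b = series[..., i:i + 32], series[..., j:j + 32]

logits = classifier(torch.cat([encoder(slice_a), encoder(slice_b)], dim=-1))
loss = nn.CrossEntropyLoss()(logits, torch.tensor([0]))   # label 0: slice_a precedes slice_b
print(loss.item())
```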
