This study provides a large-scale mapping of the French media space using digital methods to estimate political polarization and to study information circuits. We collect data about the production and circulation of online news stories in France over the course of one year, adopting a multi-layer perspective on the media ecosystem: we source our data from websites, Twitter and Facebook. We also identify several important structural features. A stochastic block model of the hyperlink structure shows that the counter-informational press is systematically relegated to a separate cluster which receives hardly any attention from the mainstream media. Counter-informational sub-spaces are also peripheral on the consumption side: we measure their respective audiences on Twitter and Facebook and do not observe a large discrepancy between the two social networks, with the counter-information space and the far-right and far-left media gathering limited audiences. Finally, we measure the ideological distribution of news stories using Twitter data, which also suggests that the French media landscape is quite balanced. We therefore conclude that the French media ecosystem does not suffer from the same level of polarization as its US counterpart. The comparison with the American situation also allows us to consolidate a result from studies on disinformation: the polarization of the journalistic space and the circulation of fake news only become widespread when dominant and influential actors in the political or journalistic space relay topics and dubious content originally circulating at the fringe of the information space.
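As an illustration of the clustering step, the sketch below fits a stochastic block model to a directed hyperlink network with the graph-tool library and tallies how many links flow between the inferred clusters; the choice of library, the file name `hyperlinks.csv`, and its two-column (source, target) layout are assumptions made for the example, not details given in the abstract.

```python
# Minimal sketch (not the authors' pipeline): fit a stochastic block model to a
# directed hyperlink network and inspect how much attention flows between the
# inferred clusters. The file name and its (source_site, target_site) columns
# are assumed for the example.
from collections import Counter

import graph_tool.all as gt

# Load a directed edge list of hyperlinks between media sites.
g = gt.load_graph_from_csv("hyperlinks.csv", directed=True)

# Infer the block (cluster) structure by minimizing description length.
state = gt.minimize_blockmodel_dl(g)
blocks = state.get_blocks()

# Count hyperlinks flowing between inferred clusters: a cluster that receives
# few in-links from the others is peripheral in the attention structure.
cross_links = Counter()
for e in g.edges():
    cross_links[(int(blocks[e.source()]), int(blocks[e.target()]))] += 1

for (src, tgt), n in sorted(cross_links.items()):
    print(f"cluster {src} -> cluster {tgt}: {n} hyperlinks")
```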
Social media sites are information marketplaces, where users produce and consume a wide variety of information and ideas. On these sites, users typically choose their information sources, which in turn determine what specific information they receive, how much information they receive, and how quickly it is shown to them. In this context, a natural question is how efficient social media users are at selecting their information sources. In this work, we propose a computational framework to quantify users' efficiency at selecting information sources. Our framework is based on the assumption that the goal of users is to acquire a set of unique pieces of information. To quantify a user's efficiency, we ask whether the user could have acquired the same pieces of information from another set of sources more efficiently. We define three different notions of efficiency -- link, in-flow, and delay -- corresponding to the number of sources the user follows, the amount of (redundant) information she acquires, and the delay with which she receives the information. Our definitions of efficiency are general and applicable to any social media system with an underlying information network, in which every user follows others to receive the information they produce. In our experiments, we measure the efficiency of Twitter users at acquiring different types of information. We find that Twitter users exhibit sub-optimal efficiency across all three notions, although they tend to be more efficient at acquiring non-popular than popular pieces of information. We then show that this lack of efficiency is a consequence of the triadic closure mechanism by which users typically discover and follow other users in social media. Finally, we develop a heuristic algorithm that enables users to be significantly more efficient at acquiring the same unique pieces of information.
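The following sketch illustrates how the three efficiency notions could be operationalized on toy data; the data structures (a mapping from each source to its timestamped posts) and the brute-force search over source subsets are assumptions made for the example, not the paper's exact formulation.

```python
# Minimal sketch (assumed data structures, not the paper's exact definitions):
# given the sources a user follows and the timestamped items each source posts,
# compare the user's current source set against a smaller alternative set that
# still covers the same unique items.
from itertools import combinations

# source -> list of (item_id, time at which the source posted it)
posts = {
    "a": [("x", 1.0), ("y", 2.0)],
    "b": [("y", 2.5), ("z", 3.0)],
    "c": [("x", 1.5), ("y", 2.1), ("z", 2.9)],
}
followed = {"a", "b", "c"}

def coverage(sources):
    # set of unique items reachable through these sources
    return {item for s in sources for item, _ in posts[s]}

def in_flow(sources):
    # total number of received posts, counting redundant copies of the same item
    return sum(len(posts[s]) for s in sources)

def delays(sources):
    # earliest time each unique item reaches the user through these sources
    first = {}
    for s in sources:
        for item, t in posts[s]:
            first[item] = min(t, first.get(item, float("inf")))
    return first

target = coverage(followed)

# Find the smallest source subset that still covers the same unique items
# (brute force here, purely for illustration).
best = followed
for k in range(1, len(followed) + 1):
    for subset in combinations(sorted(followed), k):
        if coverage(subset) >= target:
            best = set(subset)
            break
    if best != followed:
        break

print("link:   ", len(best), "sources needed vs", len(followed), "followed")
print("in-flow:", in_flow(best), "posts vs", in_flow(followed))
print("delay:  ", delays(best), "vs", delays(followed))
```

On real data the brute-force subset search would have to be replaced by a scalable optimization; the point here is only to make the three quantities concrete.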
There has been a tremendous rise of online social networks all over the world in recent years. These networks enable users to generate large amounts of real-time content at an incessant rate, all competing with each other to attract attention and become popular trends. While Western online social networks such as Twitter have been well studied, the popular Chinese microblogging network Sina Weibo has received comparatively little attention. In this paper, we analyze in detail the temporal aspects of trends and trend-setters in Sina Weibo, contrasting them with earlier observations on Twitter. We find that the content shared in China differs vastly from that shared on a global social network such as Twitter. In China, trends are created almost entirely by retweets of media content such as jokes, images and videos, unlike Twitter, where trends have been shown to relate more to current global events and news stories. We take a detailed look at the formation, persistence and decay of trends and examine the key topics that trend in Sina Weibo. One of our key findings is that retweets are much more common in Sina Weibo and contribute substantially to creating trends. Looking closer, we observe that most trends in Sina Weibo are due to the continuous retweets of a small percentage of fraudulent accounts. These fake accounts are set up to artificially inflate certain posts, causing them to shoot up into Sina Weibo's trending list, which is in turn displayed to users as the set of most popular topics.
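To make the concentration argument concrete, the sketch below computes, for each trending topic, the share of retweets produced by a small fraction of accounts; the input layout of (account, topic) pairs and the cut-off value are assumptions for illustration, not the paper's detection method.

```python
# Minimal sketch (hypothetical field layout): for each trending topic, measure
# how concentrated its retweets are among a small fraction of accounts. High
# concentration is the kind of signal associated in the paper with fraudulent
# accounts inflating posts into the trending list.
from collections import Counter

def retweet_concentration(retweets, top_share=0.05):
    """Fraction of each topic's retweets produced by its top `top_share` accounts.

    `retweets` is a list of (account_id, topic) pairs, one per retweet.
    """
    by_topic = {}
    for account, topic in retweets:
        by_topic.setdefault(topic, Counter())[account] += 1

    scores = {}
    for topic, counts in by_topic.items():
        total = sum(counts.values())
        k = max(1, int(len(counts) * top_share))
        top = sum(n for _, n in counts.most_common(k))
        scores[topic] = top / total
    return scores

sample = [("u1", "joke_A"), ("u1", "joke_A"), ("u1", "joke_A"),
          ("u2", "joke_A"), ("u3", "news_B"), ("u4", "news_B"), ("u5", "news_B")]
# joke_A: 75% of retweets come from one account; news_B: retweets are spread out.
print(retweet_concentration(sample, top_share=0.34))
```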
A number of recent studies of information diffusion in social media, both empirical and theoretical, have been inspired by viral propagation models derived from epidemiology. These studies model the propagation of memes, i.e., pieces of information, between users in a social network in a way similar to how diseases spread in human society. Importantly, one would expect a meme to spread in a social network among the people who are interested in its topic. Yet, the importance of topicality for information diffusion has been less explored in the literature. Here, we study empirical data about two different types of memes (hashtags and URLs) spreading through Twitter's online social network. For every meme, we infer its topics, and for every user, we infer her topical interests. To analyze the impact of such topics on the propagation of memes, we introduce a novel theoretical framework of information diffusion. Our analysis identifies two distinct mechanisms of information diffusion, namely topical and non-topical. Non-topical information diffusion resembles disease spreading, as in simple contagion. In contrast, topical information diffusion happens between users who are topically aligned with the information and has the characteristics of complex contagion. Non-topical memes spread broadly among all users and end up being relatively popular. Topical memes spread narrowly among users whose interests are topically aligned with them and are diffused more readily after multiple exposures. Our results show that the topicality of memes and users' interests are essential for understanding and predicting information diffusion.
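The sketch below illustrates the distinction on toy data: an adoption is labeled topical when the adopter's inferred interests overlap the meme's topics, and the number of prior exposures is compared across the two groups; the input structures and the overlap rule are assumptions for the example, not the paper's inference procedure.

```python
# Minimal sketch (assumed inputs): classify each meme adoption as "topical" when
# the adopter's inferred interests overlap the meme's topics, then compare how
# many exposures preceded topical vs non-topical adoptions. Complex contagion
# would show up as more exposures being needed before a topical adoption.
from statistics import mean

meme_topics = {"#nba": {"sports"}, "#election": {"politics"}}
user_topics = {"alice": {"sports", "music"}, "bob": {"politics"}, "carol": {"movies"}}

# (user, meme, number of exposures from followees before the user adopted it)
adoptions = [("alice", "#nba", 1), ("bob", "#election", 3),
             ("carol", "#nba", 1), ("carol", "#election", 2)]

topical, non_topical = [], []
for user, meme, exposures in adoptions:
    if user_topics[user] & meme_topics[meme]:
        topical.append(exposures)
    else:
        non_topical.append(exposures)

print("mean exposures before topical adoptions:    ", mean(topical))
print("mean exposures before non-topical adoptions:", mean(non_topical))
```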
Core-periphery structure, the arrangement of a network into a dense core and sparse periphery, is a versatile descriptor of various social, biological, and technological networks. In practice, different core-periphery algorithms are often applied interchangeably, despite the fact that they can yield inconsistent descriptions of core-periphery structure. For example, two of the most widely used algorithms, the k-core decomposition and the classic two-block model of Borgatti and Everett, extract fundamentally different structures: the former divides a network into a layered hierarchy, while the latter partitions it into a binary hub-and-spoke layout. We introduce a core-periphery typology to clarify these differences, along with Bayesian stochastic block modeling techniques to classify networks in accordance with this typology. Empirically, we find a rich diversity of core-periphery structure among networks. Through a detailed case study, we demonstrate the importance of acknowledging this diversity and situating networks within the core-periphery typology when conducting domain-specific analyses.
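The contrast between the two descriptions can be seen on a small example: the sketch below computes the layered k-core decomposition with networkx and compares it with a crude two-block split; the tooling and the degree-based heuristic are assumptions made for illustration, not the Borgatti-Everett fit or the Bayesian block-model classification used in the paper.

```python
# Minimal sketch (networkx as assumed tooling): contrast the layered structure
# returned by the k-core decomposition with a binary core/periphery split. The
# two-block split below is a crude degree-based stand-in for illustration only.
import networkx as nx

G = nx.karate_club_graph()

# k-core decomposition: every node gets a layer (its core number).
core_number = nx.core_number(G)
print("k-core layers:", sorted(set(core_number.values())))

# Two-block stand-in: call the top-degree nodes the core, the rest the periphery.
degrees = dict(G.degree())
threshold = sorted(degrees.values(), reverse=True)[len(G) // 4]
core = {v for v, d in degrees.items() if d >= threshold}
periphery = set(G) - core

def density_between(A, B):
    # fraction of possible edges realized within A (if A is B) or between A and B
    edges = sum(1 for u, v in G.edges() if (u in A and v in B) or (u in B and v in A))
    possible = len(A) * (len(A) - 1) / 2 if A is B else len(A) * len(B)
    return edges / possible if possible else 0.0

# A hub-and-spoke pattern: dense core-core, some core-periphery, sparse periphery-periphery.
print("core-core density:          ", round(density_between(core, core), 3))
print("core-periphery density:     ", round(density_between(core, periphery), 3))
print("periphery-periphery density:", round(density_between(periphery, periphery), 3))
```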
On social media platforms like Twitter, users are often interested in gaining influence and popularity by growing their set of followers, i.e., their audience. Several studies have described the properties of Twitter users based on static snapshots of their follower network, while others have analyzed the general process of link formation. Here, rather than investigating the dynamics of this process itself, we study how the characteristics of the audience and of the follower links change as a user's audience grows on the road to popularity. To begin with, we find that early followers tend to be more elite users than late followers, i.e., they are more likely to have verified and expert accounts. Moreover, early followers are significantly more similar to the person they follow than late followers: they are more likely to share the time zone, language, and topics of interest of the followed user. To some extent, these phenomena are related to the growth of Twitter itself, with early followers tending to be early adopters of Twitter and late followers being late adopters. We isolate, however, the effect of the growth of a user's follower audience from the growth of Twitter's user base itself. Finally, we measure the engagement of such audiences with the content of the followed user by measuring the probability that an early or late follower becomes a retweeter.
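The sketch below shows one way the early/late comparison could be set up on toy data: followers are split by follow rank, and each half's attribute overlap with the followed user and retweet rate are compared; the record layout and the attribute set are hypothetical, not the paper's exact measurement.

```python
# Minimal sketch (hypothetical record layout): split a user's followers into
# early and late halves by follow order, then compare how similar each half is
# to the followed user and how likely its members are to retweet her content.
followed_user = {"time_zone": "CET", "lang": "fr"}

# (follow_rank, follower attributes, did this follower ever retweet the user?)
followers = [
    (1, {"time_zone": "CET", "lang": "fr"}, True),
    (2, {"time_zone": "CET", "lang": "fr"}, True),
    (3, {"time_zone": "CET", "lang": "en"}, False),
    (4, {"time_zone": "EST", "lang": "en"}, False),
    (5, {"time_zone": "PST", "lang": "en"}, True),
    (6, {"time_zone": "EST", "lang": "en"}, False),
]

def similarity(attrs):
    # fraction of shared attributes (time zone, language) with the followed user
    shared = sum(attrs[k] == v for k, v in followed_user.items())
    return shared / len(followed_user)

followers.sort(key=lambda rec: rec[0])
half = len(followers) // 2
early, late = followers[:half], followers[half:]

for label, group in (("early", early), ("late", late)):
    sim = sum(similarity(a) for _, a, _ in group) / len(group)
    rt = sum(1 for _, _, r in group if r) / len(group)
    print(f"{label} followers: mean similarity {sim:.2f}, retweet rate {rt:.2f}")
```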