Sharing-economy platforms have become extremely popular in recent years, and they have changed the way we commute, travel, and borrow, among many other activities. Despite their popularity among consumers, such companies are poorly regulated. For example, Airbnb, one of the most successful sharing-economy platforms, is often criticized by regulators and policy makers. While, in theory, municipalities should regulate the emergence of Airbnb through evidence-based policy making, in practice they engage in a false dichotomy: some municipalities allow the business without imposing any regulation, while others ban it altogether. That is because there is no evidence upon which to draft policies. Here we propose to gather evidence from the Web. After crawling Airbnb data for the entire city of London, we find out where and when Airbnb listings are offered and, by matching such listing information with census and hotel data, we determine the socio-economic conditions of the areas that actually benefit from the hospitality platform. The reality is more nuanced than one would expect, and it has changed over the years: Airbnb demand and supply have shifted over time, and traditional regulations have not been able to respond to those changes. That is why, finally, we rely on our data analysis to envision regulations that are responsive to real-time demands, contributing to the emerging idea of algorithmic regulation.
Online government petitions represent a new data-rich mode of political participation. This work examines the thus far understudied dynamics of sharing petitions on social media in order to garner signatures and, ultimately, a government response. Using 20 months of Twitter data comprising over 1 million tweets linking to a petition, we analyze networks constructed of petitions and supporters on Twitter, revealing the implicit social dynamics therein. We find that Twitter users neither share petitions on a single issue exclusively nor share only popular petitions. Among the more than 240,000 Twitter users, we find latent support groups, with the most central users primarily being ordinary but politically active individuals. Twitter as a platform for sharing government petitions thus appears to hold potential to foster the creation of, and coordination among, a new form of latent support interest groups online.
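One natural reading of the petition–supporter networks described above is a bipartite graph of users and petitions, projected onto petitions that share supporters. The following sketch illustrates that construction; the edge weighting by shared-supporter count is an illustrative assumption, not the paper's exact method.

```python
from collections import defaultdict
from itertools import combinations

def project_petitions(shares):
    """Project a bipartite user-petition sharing graph onto petitions.

    shares: iterable of (user, petition) pairs, e.g. one pair per tweet
    linking to a petition. Returns a dict mapping a sorted petition pair
    to the number of supporters the two petitions have in common.
    """
    petitions_by_user = defaultdict(set)
    for user, petition in shares:
        petitions_by_user[user].add(petition)

    weights = defaultdict(int)
    for pets in petitions_by_user.values():
        for a, b in combinations(sorted(pets), 2):
            weights[(a, b)] += 1  # one more shared supporter
    return dict(weights)
```

Clusters in the projected graph correspond to the latent support groups mentioned above: sets of petitions repeatedly shared by the same users.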
Recent evidence has emerged linking coordinated campaigns by state-sponsored actors to manipulate public opinion on the Web. Campaigns revolving around major political events are enacted via mission-focused trolls. While trolls are involved in spreading disinformation on social media, there is little understanding of how they operate, what type of content they disseminate, how their strategies evolve over time, and how they influence the Web's information ecosystem. In this paper, we begin to address this gap by analyzing 10M posts by 5.5K Twitter and Reddit users identified as Russian and Iranian state-sponsored trolls. We compare the behavior of each group of state-sponsored trolls with a focus on how their strategies change over time, the different campaigns they embark on, and differences between the trolls operated by Russia and Iran. Among other things, we find: 1) that Russian trolls were pro-Trump while Iranian trolls were anti-Trump; 2) evidence that campaigns undertaken by such actors are influenced by real-world events; and 3) that the behavior of such actors is not consistent over time, hence automated detection is not a straightforward task. Using the Hawkes process statistical model, we quantify the influence these accounts have on pushing URLs on four social platforms: Twitter, Reddit, 4chan's Politically Incorrect board (/pol/), and Gab. In general, Russian trolls were more influential and efficient in pushing URLs to all the other platforms with the exception of /pol/, where Iranians were more influential. Finally, we release our data and source code to ensure the reproducibility of our results and to encourage other researchers to work on understanding other emerging kinds of state-sponsored troll accounts on Twitter.
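The Hawkes process used above for influence estimation is a self-exciting point process: each event temporarily raises the rate of future events. A minimal sketch of its conditional intensity follows; the exponential kernel and the parameter names (`mu`, `alpha`, `beta`) are illustrative assumptions, not the paper's exact specification.

```python
import math

def hawkes_intensity(t, events, mu, alpha, beta):
    """Conditional intensity lambda(t) of a univariate Hawkes process
    with an exponential kernel:

        lambda(t) = mu + sum_{t_i < t} alpha * beta * exp(-beta * (t - t_i))

    mu    -- baseline event rate
    alpha -- branching ratio (expected offspring events per event)
    beta  -- decay rate of the excitation
    """
    return mu + sum(
        alpha * beta * math.exp(-beta * (t - ti))
        for ti in events
        if ti < t
    )
```

In the multivariate setting used for cross-platform influence, each platform has its own baseline plus a matrix of cross-excitation terms; fitting those terms to the observed URL appearance times is what yields the influence estimates.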
We compare complex networks built from the game of go and obtained from databases of human-played games with those obtained from computer-played games. Our investigations show that statistical features of the human-based networks and the computer-based networks differ, and that these differences can be statistically significant on a relatively small number of games using specific estimators. We show that the deterministic or stochastic nature of the computer algorithm playing the game can also be distinguished from these quantities. This can be seen as a tool to implement a Turing-like test for go simulators.
Wikipedia is a huge global repository of human knowledge that can be leveraged to investigate entanglements between cultures. With this aim, we apply methods of Markov chains and the Google matrix to the analysis of the hyperlink networks of 24 Wikipedia language editions, and rank all their articles by the PageRank, 2DRank and CheiRank algorithms. Using automatic extraction of people's names, we obtain the top 100 historical figures for each edition and for each algorithm. We investigate their spatial, temporal, and gender distributions as a function of their cultural origins. Our study demonstrates not only the existence of a skew toward local figures, recognized mainly in their own cultures, but also the existence of global historical figures appearing in a large number of editions. By determining the birth time and place of these persons, we perform an analysis of the evolution of such figures through 35 centuries of human history for each language, thus recovering interactions and entanglement of cultures over time. We also obtain the distributions of historical figures over world countries, highlighting geographical aspects of cross-cultural links. Considering historical figures who appear in multiple editions as interactions between cultures, we construct a network of cultures and identify the most influential cultures according to this network.
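PageRank, which underlies the rankings above, reduces to a power iteration on the Google matrix of the hyperlink network. A minimal NumPy sketch follows; the damping factor and convergence threshold are conventional defaults, not the study's settings.

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    """PageRank via power iteration on the Google matrix.

    adj[i, j] = 1 if page i links to page j.
    Returns a probability vector over pages summing to 1.
    """
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # Row-stochastic transition matrix; dangling nodes jump uniformly.
    P = np.where(out_deg[:, None] > 0,
                 adj / np.maximum(out_deg, 1)[:, None],
                 1.0 / n)
    r = np.full(n, 1.0 / n)
    while True:
        r_next = (1 - d) / n + d * P.T @ r
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next
```

CheiRank is the same computation run on the network with all links reversed, so it highlights communicative, outgoing-link-rich articles rather than authoritative, incoming-link-rich ones; 2DRank combines the two.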
This study provides a large-scale mapping of the French media space using digital methods to estimate political polarization and to study information circuits. We collect data about the production and circulation of online news stories in France over the course of one year, adopting a multi-layer perspective on the media ecosystem. We source our data from websites, Twitter and Facebook. We also identify several important structural features. A stochastic block model of the hyperlink structure shows the systematic rejection of the counter-informational press into a separate cluster which hardly receives any attention from the mainstream media. Counter-informational sub-spaces are also peripheral on the consumption side. We measure their respective audiences on Twitter and Facebook and do not observe a large discrepancy between the two social networks: the counter-informational space and far-right and far-left media gather limited audiences. Finally, we measure the ideological distribution of news stories using Twitter data, which also suggests that the French media landscape is quite balanced. We therefore conclude that the French media ecosystem does not suffer from the same level of polarization as the US media ecosystem. The comparison with the American situation also allows us to consolidate a result from studies on disinformation: the polarization of the journalistic space and the circulation of fake news only become widespread when dominant and influential actors in the political or journalistic space relay topics and dubious content that originally circulated on the fringe of the information space.
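The stochastic block model mentioned above posits that the probability of a hyperlink between two sites depends only on the clusters the sites belong to. The generative side of the model can be sketched as follows; the two-block setup and the link probabilities are made-up illustrative values, and the study solves the reverse, inference, problem on real hyperlink data.

```python
import numpy as np

def sample_sbm(sizes, probs, rng):
    """Sample an undirected stochastic block model adjacency matrix.

    sizes -- list of block sizes, e.g. [50, 50]
    probs -- k x k symmetric matrix of link probabilities;
             probs[a, b] is the chance of an edge between a node
             in block a and a node in block b.
    """
    n = sum(sizes)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    # Draw each potential edge independently with its block probability.
    coin = rng.random((n, n)) < probs[labels[:, None], labels[None, :]]
    upper = np.triu(coin, 1)        # keep one triangle, no self-loops
    A = (upper | upper.T).astype(int)
    return A, labels
```

Inference inverts this generative story: given the observed adjacency matrix, one searches for the block assignment that best explains the within- and between-cluster link densities, which is how a weakly connected counter-informational cluster can be isolated from the mainstream.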