The Covid-19 pandemic has had a deep impact on the lives of the entire world population, prompting a widely participated societal debate. As in other contexts, the debate has been the subject of several dis/misinformation campaigns; in a quite unprecedented fashion, however, the presence of false information has seriously put public health at risk. In this sense, detecting the presence of malicious narratives and identifying the kinds of users that are more prone to spread them represent the first step towards limiting their persistence. In the present paper we analyse the semantic network observed on Twitter during the first Italian lockdown (induced by the hashtags contained in approximately 1.5 million tweets published between the 23rd of March 2020 and the 23rd of April 2020) and study the extent to which various discursive communities are exposed to dis/misinformation arguments. As observed in other studies, the recovered discursive communities largely overlap with traditional political parties, even if the debated topics concern different facets of the management of the pandemic. Although the themes directly related to dis/misinformation are a minority of those discussed within our semantic networks, their popularity is unevenly distributed among the various discursive communities.
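As a concrete illustration of the kind of semantic network analysed above, the following minimal sketch builds a hashtag co-occurrence graph from tweets and extracts communities. The toy tweets, and the use of greedy modularity as a stand-in for the paper's discursive-community detection, are our own assumptions; the paper's statistical validation of co-occurrences is not reproduced here.

```python
# Minimal sketch: build a hashtag co-occurrence ("semantic") network
# from tweets and extract communities from it.
from itertools import combinations
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

tweets = [  # each tweet reduced to its list of hashtags (hypothetical)
    ["covid19", "lockdown", "italia"],
    ["covid19", "5g"],
    ["lockdown", "italia", "quarantena"],
    ["covid19", "lockdown"],
]

G = nx.Graph()
for tags in tweets:
    # every pair of hashtags appearing in the same tweet gets a link,
    # weighted by the number of co-occurrences
    for a, b in combinations(sorted(set(tags)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# greedy modularity as a stand-in for discursive-community detection
for i, comm in enumerate(greedy_modularity_communities(G, weight="weight")):
    print(i, sorted(comm))
```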
Modern statistical modeling is an important complement to the more traditional approach of physics, where complex systems are studied by means of extremely simple idealized models. The Minimum Description Length (MDL) principle is a principled approach to statistical modeling, combining Occam's razor with Information Theory to select the models providing the most concise descriptions. In this work, we introduce the Boltzmannian MDL (BMDL), a formalization of the MDL principle whose parametric complexity is conveniently formulated as the free energy of an artificial thermodynamic system. In this way, we leverage the rich theoretical and technical background of statistical mechanics to show the crucial importance that phase transitions and other thermodynamic concepts have for the problem of statistical modeling from an information-theoretic point of view. For example, we provide information-theoretic justifications of why a high-temperature series expansion can be used to compute systematic approximations of the BMDL when the formalism is used to model data, and why statistically significant model selections can be identified with ordered phases when the BMDL is used to model models. To test the introduced formalism, we compute approximations of the BMDL for the problem of community detection in complex networks, where we obtain a principled MDL derivation of the Girvan-Newman (GN) modularity and of the Zhang-Moore (ZM) community detection method. Here, by means of analytical estimations and numerical experiments on synthetic and empirical networks, we find that BMDL-based correction terms of the GN modularity improve the quality of the detected communities, and we also find an information-theoretic justification of why the ZM criterion for estimating the number of network communities is better than alternative approaches such as the bare minimization of a free energy.
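The abstract does not spell out the BMDL formulas; the following is a hedged reconstruction, combining the standard two-part MDL code length with the free-energy reading of the parametric complexity that the name suggests, where beta is an artificial inverse temperature and H(theta) an energy assigned to candidate models (both our notational assumptions):

```latex
% hedged reconstruction, not taken verbatim from the paper
\mathcal{L}(D, M) = -\log P(D \mid \hat{\theta}, M) + \mathrm{COMP}(M),
\qquad
\mathrm{COMP}(M) \approx F(\beta) = -\frac{1}{\beta}\,\log Z(\beta),
\quad
Z(\beta) = \sum_{\theta} e^{-\beta H(\theta)} .
```

In this reading, a high-temperature (small-beta) series expansion of log Z yields the systematic approximations mentioned in the abstract, and for community detection one may plausibly take H(theta) proportional to minus the modularity of a partition theta, which is how a GN-like objective can emerge from the formalism.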
Interbank markets are often characterised in terms of a core-periphery network structure, with a highly interconnected core of banks holding the market together, and a periphery of banks connected mostly to the core but not internally. This paradigm has recently been challenged for short time scales, where interbank markets seem better characterised by a bipartite structure with more core-periphery connections than inside the core. Using a novel core-periphery detection method on the eMID interbank market, we enrich this picture by showing that the network is actually characterised by multiple core-periphery pairs. Moreover, a transition from core-periphery to bipartite structures occurs by shortening the temporal scale of data aggregation. We further show how the global financial crisis transformed the market in terms of composition, multiplicity and internal organisation of core-periphery pairs. By unveiling such a fine-grained organisation and transformation of the interbank market, our method can find important applications in understanding how distress propagates over financial networks.
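For intuition on what core-periphery detection optimises, here is a minimal Borgatti-Everett-style sketch: greedily flip core/periphery labels to maximise the match with the ideal pattern (core-core and core-periphery ties present, periphery-periphery ties absent). This classical single-pair baseline is our stand-in, not the paper's own multi-pair detection method; in the bipartite regime described above, the ideal pattern would instead lack core-core ties.

```python
# Toy greedy core-periphery detection (Borgatti-Everett-style baseline).
import networkx as nx

def cp_score(G, core):
    """Number of dyads matching the ideal core-periphery pattern."""
    s, nodes = 0, list(G)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            ideal = 1 if (u in core or v in core) else 0
            s += int(G.has_edge(u, v)) == ideal
    return s

def greedy_core_periphery(G, sweeps=5):
    # seed: nodes with above-average degree
    avg = sum(dict(G.degree).values()) / len(G)
    core = {n for n in G if G.degree(n) > avg}
    best = cp_score(G, core)
    for _ in range(sweeps):
        for n in G:
            trial = core ^ {n}          # flip n's label
            sc = cp_score(G, trial)
            if sc > best:
                core, best = trial, sc
    return core

G = nx.karate_club_graph()
print("core:", sorted(greedy_core_periphery(G)))
```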
Network theory has recently proved useful in the quantification of many properties of financial systems. The analysis of the structure of investment portfolios is a major application, since their possible correlation and overlap affect the actual risk diversification achieved by individual investors. We investigate the bipartite network of US mutual fund portfolios and their assets. We follow its evolution during the Global Financial Crisis and analyse the interplay between diversification, as understood in classical portfolio theory, and the similarity of the investments of different funds. We show that, on average, portfolios have become more diversified and less similar during the crisis. However, we also find that large overlap is far more likely than expected from models of random allocation of investments. This indicates the existence of strong correlations between fund portfolio strategies. We introduce a simplified model of the propagation of financial shocks, which we exploit to show that a systemic risk component originates from the similarity of portfolios. The network is still vulnerable after the crisis because of this effect, despite the increase in the diversification of portfolios. Our results indicate that diversification may even increase systemic risk when funds diversify in the same way. Diversification and similarity can play antagonistic roles, and the trade-off between the two should be taken into account to properly assess systemic risk.
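The two quantities contrasted above can be made concrete on toy data: diversification as the effective number of assets held by each fund (inverse Herfindahl index of its weights) and similarity as the cosine overlap between portfolio weight vectors. The weights below are hypothetical.

```python
# Diversification vs. similarity of fund portfolios on toy data.
import numpy as np

# rows = funds, columns = assets; entries = portfolio weights (hypothetical)
W = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.4, 0.4, 0.1, 0.1],
    [0.0, 0.0, 0.5, 0.5],
])
W = W / W.sum(axis=1, keepdims=True)            # normalise each portfolio

diversification = 1.0 / (W ** 2).sum(axis=1)    # inverse Herfindahl index
norm = np.linalg.norm(W, axis=1)
similarity = (W @ W.T) / np.outer(norm, norm)   # cosine overlap matrix

print("effective number of assets per fund:", diversification.round(2))
print("portfolio similarity:\n", similarity.round(2))
```

In this toy example, funds 0 and 1 remain highly similar even though fund 1 is more diversified, which is precisely the antagonism between the two notions that the abstract refers to.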
Information is a valuable asset for agents in socio-economic systems, and a significant part of this information is encoded in the very network of connections between agents. The different interlinkage patterns that agents establish may, in fact, lead to asymmetries in the knowledge of the network structure; since this entails a different ability to quantify relevant systemic properties (e.g. the risk of financial contagion in a network of liabilities), agents capable of providing a better estimate of (otherwise) inaccessible network properties ultimately have a competitive advantage. In this paper, we address for the first time the issue of quantifying the information asymmetry arising from the network topology. To this aim, we define a novel index - InfoRank - intended to measure the quality of the information possessed by each node, by computing the Shannon entropy of the ensemble conditioned on the node-specific information. Further, we test the performance of our ranking procedure in terms of the reconstruction accuracy of the (inaccessible) network structure and show that it outperforms other popular centrality measures in identifying the most informative nodes. Finally, we discuss the socio-economic implications of network information asymmetry.
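A toy version of the InfoRank idea (our simplification, not the paper's exact construction) ranks nodes by how much Shannon entropy of a maximum-entropy network ensemble their private knowledge of their own links removes. Here the ensemble has independent links with Chung-Lu-style probabilities p_ij ~ k_i k_j / 2m, an assumption of ours:

```python
# Toy InfoRank-like score: entropy of the link ensemble resolved by
# each node's private knowledge of its own (ego-network) links.
import numpy as np
import networkx as nx

G = nx.karate_club_graph()
n, m = G.number_of_nodes(), G.number_of_edges()
k = np.array([G.degree(i) for i in range(n)], dtype=float)

# independent-link probabilities, clipped away from 0 and 1
P = np.clip(np.outer(k, k) / (2 * m), 1e-9, 1 - 1e-9)

def h(p):
    """Shannon entropy (nats) of one Bernoulli link variable."""
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

H = h(P)
np.fill_diagonal(H, 0.0)
info_score = H.sum(axis=1)   # entropy removed by knowing node i's links

print("most informative nodes:", np.argsort(info_score)[::-1][:5])
```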
Nowadays users get informed and shape their opinions through social media. However, the disintermediated access to content does not guarantee the quality of information. Selective exposure and confirmation bias, indeed, have been shown to play a pivotal role in content consumption and information spreading. Users tend to select information adhering to (and reinforcing) their worldview and to ignore dissenting information. This pattern elicits the formation of polarized groups -- i.e., echo chambers -- where the interaction with like-minded people might even reinforce polarization. In this work we address news consumption around Brexit in the UK on Facebook. In particular, we perform a massive analysis of more than 1 million users interacting with Brexit-related posts from the main news providers between January and July 2016. We show that consumption patterns elicit the emergence of two distinct communities of news outlets. Furthermore, to better characterize inner group dynamics, we introduce a new technique which combines automatic topic extraction and sentiment analysis. We compare how the same topics are presented in posts with the related emotional responses in comments, finding significant differences in both echo chambers and showing that polarization influences the perception of topics. Our results provide important insights into the determinants of polarization and the evolution of core narratives in online debates.
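The combined technique can be sketched as follows. The library choices (scikit-learn's LDA for topic extraction, NLTK's VADER for sentiment) and the toy posts and comments are our assumptions; the abstract does not name the tools used.

```python
# Sketch: assign a topic to each post (LDA), then score the sentiment
# of the matching comment (VADER), pairing topics with emotional response.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

posts = ["brexit vote economy trade", "immigration border control vote",
         "economy jobs trade deal", "border immigration policy"]
comments = ["great news for the economy", "this is a disaster for workers",
            "finally taking back control", "utterly shameful decision"]

vec = CountVectorizer()
X = vec.fit_transform(posts)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
topic_of_post = lda.transform(X).argmax(axis=1)

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()
for t, c in zip(topic_of_post, comments):
    print(f"topic {t}: sentiment {sia.polarity_scores(c)['compound']:+.2f}")
```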
The increasing attention to environmental issues is forcing the implementation of novel energy models based on renewable sources, fundamentally changing the configuration of energy management and introducing new criticalities that are only partly understood. In particular, renewable energies introduce fluctuations that cause an increased demand for conventional energy sources to balance energy requests on short notice. In order to develop an effective usage of low-carbon sources, such fluctuations must be understood and tamed. In this paper we present a microscopic model for the description and the forecast of short-time fluctuations related to renewable sources and of their effects on the electricity market. To account for the interdependencies between the energy market and the physical power dispatch network, we use a statistical mechanics approach to sample stochastic perturbations on the power system and an agent-based approach to predict the market players' behaviour. Our model is data-driven: it builds on one-day-ahead real market transactions to train the agents' behaviour and allows us to infer the market share of different energy sources. We benchmark our approach on the Italian market, finding good agreement with real data.
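A toy sketch of the two ingredients, stochastic sampling of renewable fluctuations and agent-based merit-order clearing of the residual demand, is given below; capacities, prices and the noise model are illustrative and not calibrated on the Italian market.

```python
# Toy day-ahead market: sample renewable output, clear residual demand
# against a merit order of conventional agents, record clearing prices.
import numpy as np

rng = np.random.default_rng(0)
demand = 30.0                                    # GW, fixed for the toy
renewable = np.maximum(0.0, 8.0 + 2.0 * rng.standard_normal(1000))

# conventional agents: (capacity in GW, marginal price in EUR/MWh)
agents = [(10, 20.0), (10, 45.0), (10, 90.0)]    # hypothetical merit order

prices = []
for r in renewable:
    residual = max(0.0, demand - r)              # demand net of renewables
    for cap, price in sorted(agents, key=lambda a: a[1]):
        if residual <= cap:                      # marginal, price-setting agent
            prices.append(price)
            break
        residual -= cap

print(f"mean clearing price: {np.mean(prices):.1f} EUR/MWh "
      f"(std {np.std(prices):.1f})")
```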
The spreading of unsubstantiated rumors on online social networks (OSN), either unintentionally or intentionally (e.g., for political reasons or even trolling), can have serious consequences, as in the recent case of rumors about Ebola causing disruption to health-care workers. Here we show that indicators aimed at quantifying information consumption patterns might provide important insights about the virality of false claims. In particular, we address the driving forces behind the popularity of content by analyzing a sample of 1.2M Italian Facebook users consuming different (and opposite) types of information (science and conspiracy news). We show that users' engagement across different types of content correlates with the number of friends having similar consumption patterns (homophily), indicating the area in the social network where certain types of content are more likely to spread. Then, we test diffusion patterns on an external sample of 4,709 intentionally satirical false claims, showing that neither the presence of hubs (structural properties) nor the most active users (influencers) is prevalent in viral phenomena. Instead, we find that in an environment where misinformation is pervasive, users' aggregation around shared beliefs may make the usual exposure to conspiracy stories (polarization) a determinant of the virality of false information.
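In its simplest form, the homophily measurement described above amounts to correlating each user's engagement with the number of like-minded friends; the sketch below does this on synthetic data standing in for the Facebook sample.

```python
# Correlate engagement with the count of friends sharing the same
# content category (synthetic stand-in data, not the Facebook sample).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_users = 500
similar_friends = rng.poisson(20, n_users)
# engagement loosely increasing with the number of like-minded friends
engagement = 0.5 * similar_friends + rng.normal(0, 3, n_users)

r, p = pearsonr(similar_friends, engagement)
print(f"Pearson r = {r:.2f} (p = {p:.1e})")
```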
We introduce the concept of self-healing in the field of complex networks, with obvious applications ranging from infrastructural to technological networks. By exploiting the presence of redundant links to recover the connectivity of the system, we introduce self-healing capabilities through the application of distributed communication protocols that grant the smartness of the system. We analyze the interplay between redundancies and smart reconfiguration protocols in improving the resilience of networked infrastructures to multiple failures; in particular, we measure the fraction of nodes still served for increasing levels of network damage. We study the effects of different connectivity patterns (planar square grids, small-world, scale-free networks) on the healing performance. The study of small-world topologies shows that the introduction of some long-range connections in the planar grids greatly enhances the resilience to multiple failures, giving results comparable to those of the most resilient (but less realistic) scale-free structures.
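A minimal version of the healing experiment can be phrased as follows: damage a square grid that carries a few dormant redundant shortcuts, activate the surviving shortcuts, and measure the fraction of nodes still served. The specific numbers and the single-source notion of "served" are our simplifications, not the paper's protocol.

```python
# Toy self-healing experiment on a square grid with redundant shortcuts.
import random
import networkx as nx

random.seed(0)
G = nx.grid_2d_graph(10, 10)
nodes = list(G)
# redundant backup links (small-world shortcuts), initially dormant
backups = [tuple(random.sample(nodes, 2)) for _ in range(20)]
source = (0, 0)

def served_fraction(H):
    """Fraction of surviving nodes still reachable from the source."""
    return len(nx.node_connected_component(H, source)) / len(H)

for n_failures in (5, 15, 30):
    H = G.copy()
    H.remove_nodes_from(random.sample([n for n in nodes if n != source],
                                      n_failures))
    before = served_fraction(H)
    # self-healing: activate every backup link whose endpoints survived
    H.add_edges_from((u, v) for u, v in backups if u in H and v in H)
    print(f"{n_failures} failures: served {before:.2f} -> "
          f"{served_fraction(H):.2f}")
```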
We present a novel method to reconstruct complex networks from partial information. We assume that the links are known only for a subset of the nodes and that some non-topological quantity (fitness) characterising every node is known. The missing links are generated on the basis of the latter quantity, according to a fitness model calibrated on the subset of nodes for which links are known. We measure the quality of the reconstruction of several topological properties, such as the network density and the degree distribution, as a function of the size of the initial subset of nodes. Moreover, we also study the resilience of the network to distress propagation. We first test the method on ensembles of synthetic networks generated with the Exponential Random Graph model, which allows us to apply common tools from statistical mechanics. We then test it on the empirical case of the World Trade Web. In both cases, we find that a subset of 10% of the nodes is enough to reconstruct the main features of the network, along with its resilience, with an error of 5%.
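The reconstruction step can be sketched concretely: assuming the fitness model p_ij = z x_i x_j / (1 + z x_i x_j), the parameter z is calibrated so that the expected number of links among the known nodes matches the observed one, after which the missing links are drawn from the calibrated probabilities. The fitness values and the observed link count below are synthetic.

```python
# Fitness-model reconstruction from a 10% subset of nodes with known links.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
n = 100
x = rng.lognormal(size=n)                        # fitnesses, known for all nodes

known = rng.choice(n, size=10, replace=False)    # 10% of nodes with known links
L_obs = 12                                       # toy observed link count

def expected_links(z):
    """Expected number of links among the known nodes under the model."""
    s = 0.0
    for a, i in enumerate(known):
        for j in known[a + 1:]:
            s += z * x[i] * x[j] / (1 + z * x[i] * x[j])
    return s

# calibrate z so that expected and observed link counts match
z = brentq(lambda z: expected_links(z) - L_obs, 1e-9, 1e6)

# draw the unknown part of the network from the calibrated fitness model
P = z * np.outer(x, x) / (1 + z * np.outer(x, x))
A = np.triu(rng.random((n, n)) < P, 1).astype(int)
A = A + A.T                                      # symmetric, no self-loops
print(f"calibrated z = {z:.3g}, reconstructed links = {A.sum() // 2}")
```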