A multilayer network depicts different types of interactions among the same set of nodes. For example, protein networks consist of five to seven layers, where different layers represent distinct types of experimentally confirmed molecular interactions among proteins. In a multilayer protein network, the co-expression layer is obtained through the meta-analysis of transcriptomic data from various sources and platforms. While in some studies the co-expression layer is itself represented as a multilayer network, a fundamental problem is how to obtain a single-layer network from the corresponding multilayer network. This process is called multilayer network aggregation. In this work, we propose a maximum a posteriori estimation-based algorithm for multilayer network aggregation. The method makes it possible to aggregate a weighted multilayer network while preserving the core information of the layers. We evaluate the method on an unweighted friendship network and a multilayer gene co-expression network. We compare the aggregated gene co-expression network with a network obtained from conflated datasets and a network obtained from averaged weights. The von Neumann entropy is adopted to compare the mixedness of the three networks and, together with other network measurements, shows the effectiveness of the proposed method.
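To make the entropy comparison concrete, here is a minimal sketch (assuming networkx and numpy) of the von Neumann entropy of a graph, computed in the standard way from the density matrix obtained by rescaling the graph Laplacian. The `average_aggregate` helper is only the averaged-weights baseline mentioned in the abstract, not the paper's MAP estimator, and the toy layers are hypothetical.

```python
import numpy as np
import networkx as nx

def von_neumann_entropy(G, weight="weight"):
    """Von Neumann entropy S = -sum(mu_i * log2(mu_i)), where the mu_i
    are the eigenvalues of the density matrix rho = L / trace(L)
    and L is the (weighted) combinatorial Laplacian of G."""
    L = nx.laplacian_matrix(G, weight=weight).toarray().astype(float)
    rho = L / np.trace(L)
    mu = np.linalg.eigvalsh(rho)
    mu = mu[mu > 1e-12]  # drop numerical zeros; 0*log(0) is taken as 0
    return float(-np.sum(mu * np.log2(mu)))

def average_aggregate(layers):
    """Averaged-weights baseline: the single-layer network whose edge
    weights are the per-layer weights averaged over all layers."""
    agg = nx.Graph()
    for G in layers:
        for u, v, d in G.edges(data=True):
            w = d.get("weight", 1.0) / len(layers)
            if agg.has_edge(u, v):
                agg[u][v]["weight"] += w
            else:
                agg.add_edge(u, v, weight=w)
    return agg

# Toy two-layer network over the same node set.
L1 = nx.Graph([(0, 1, {"weight": 1.0}), (1, 2, {"weight": 0.5})])
L2 = nx.Graph([(0, 1, {"weight": 0.2}), (0, 2, {"weight": 0.8})])
agg = average_aggregate([L1, L2])
print(von_neumann_entropy(agg))
```

Comparing this entropy across the aggregated, conflated, and averaged networks quantifies their relative mixedness; the base of the logarithm is a convention and only rescales the values.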
People change their physical contacts as a preventive response to infectious disease propagation. Yet, only a few mathematical models consider the coupled dynamics of disease propagation and the contact adaptation process. This paper presents a
In recent years, network embedding methods have garnered increasing attention because of their effectiveness in various information retrieval tasks. The goal is to learn low-dimensional representations of vertices in an information network and simult
Nuclear reaction rate ($\lambda$) is a significant factor in the process of nucleosynthesis. A multi-layer directed-weighted nuclear reaction network, in which the reaction rate serves as the weight, and neutron, proton, $^4$He and the remainder nuclei as the
Normalization is known to help the optimization of deep neural networks. Curiously, different architectures require specialized normalization methods. In this paper, we study what normalization is effective for Graph Neural Networks (GNNs). First, we
How might one test the hypothesis that graphs were sampled from the same distribution? Here, we compare two statistical tests that address this question. The first uses the observed subgraph densities themselves as estimates of those of the underlyin