Gradient networks can be used to model the dominant structure of complex networks. Previous works have focused on random gradient networks. Here we study gradient networks that minimize jamming on substrate networks with scale-free and Erdős–Rényi structure. We introduce structural correlations and strongly reduce congestion occurring on the network by using a Monte Carlo optimization scheme. This optimization alters the degree distribution and other structural properties of the resulting gradient networks. These results are expected to be relevant for transport and other dynamical processes in real network systems.
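The Monte Carlo idea can be illustrated with a small sketch. This is not the paper's code: the substrate size, the scalar field, and the use of maximum in-degree as the congestion measure are all assumptions made here for illustration. Node scalars on an Erdős–Rényi substrate are swapped at random, and a swap is kept only if it does not increase congestion at the busiest node.

```python
import random

random.seed(1)
N, p = 60, 0.1

# Erdos-Renyi substrate: connect each pair of nodes with probability p.
adj = {i: set() for i in range(N)}
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

h = [random.random() for _ in range(N)]  # scalar field defining the gradients

def congestion(h):
    """Max in-degree of the gradient network: each node sends its load
    to the neighbour with the smallest scalar h."""
    indeg = [0] * N
    for i in range(N):
        if adj[i]:
            indeg[min(adj[i], key=lambda j: h[j])] += 1
    return max(indeg)

before = congestion(h)
cost = before
for _ in range(2000):                 # zero-temperature Monte Carlo swaps
    a, b = random.sample(range(N), 2)
    h[a], h[b] = h[b], h[a]
    new = congestion(h)
    if new <= cost:
        cost = new                    # keep the swap
    else:
        h[a], h[b] = h[b], h[a]       # revert it
```

Because only non-worsening swaps are accepted, the congestion proxy is non-increasing over the run; a finite-temperature Metropolis rule would be the natural generalization.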
We investigate classic diffusion with the added feature that a diffusing particle is reset to its starting point each time the particle reaches a specified threshold. In an infinite domain, this process is non-stationary and its probability distribution…
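A minimal sketch of this resetting rule, with assumed parameters (a 1-D lattice walk and a threshold at distance L from the start; neither is taken from the abstract): the walker moves symmetrically and is returned to the origin whenever it reaches the threshold.

```python
import random

random.seed(2)

L = 10  # assumed reset threshold

def walk_with_reset(steps):
    """Symmetric 1-D lattice walk reset to the origin on reaching |x| = L."""
    x, resets = 0, 0
    for _ in range(steps):
        x += random.choice((-1, 1))
        if abs(x) >= L:          # threshold reached: reset to the start
            x, resets = 0, resets + 1
    return x, resets

x, resets = walk_with_reset(100_000)
```

The reset confines the walk: between observations the position never escapes the interval (-L, L), which is the mechanism behind the non-trivial stationary behaviour discussed for bounded domains.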
Thermal conductivities are routinely calculated in molecular dynamics simulations by keeping the boundaries at different temperatures and measuring the slope of the temperature profile in the bulk of the material, explicitly using Fourier's law of heat…
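The extraction step described here can be sketched as follows. The profile and flux below are synthetic stand-ins for simulation output, not real data: a linear temperature profile is fitted by least squares and the conductivity follows from Fourier's law, J = -kappa dT/dx.

```python
# Synthetic steady-state data (illustrative units).
xs = [i * 0.5 for i in range(20)]       # bin centres along the transport axis
Ts = [320.0 - 2.0 * x for x in xs]      # linear profile with dT/dx = -2
J = 5.0                                 # imposed heat flux

# Least-squares slope of T(x) in the bulk.
n = len(xs)
mx, mT = sum(xs) / n, sum(Ts) / n
slope = sum((x - mx) * (T - mT) for x, T in zip(xs, Ts)) \
        / sum((x - mx) ** 2 for x in xs)

kappa = -J / slope                      # Fourier's law rearranged for kappa
```

In a real simulation one would restrict the fit to the bulk bins, away from the thermostatted boundary regions where the profile is distorted.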
It has been shown by several authors that a certain class of composite operators with many fields and gradients endangers the stability of nontrivial fixed points in 2+eps expansions for various models. This problem is so far unresolved. We investigate…
In this work, we propose to employ information-geometric tools to optimize a graph neural network architecture such as the graph convolutional networks. More specifically, we develop optimization algorithms for the graph-based semi-supervised learning…
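For readers unfamiliar with the architecture being optimized, a single graph-convolution step can be written as H' = D^{-1}(A + I) H W: neighbour features (with self-loops) are averaged and then linearly transformed. The 4-node graph, features, and weights below are illustrative only, and this sketch shows the propagation rule, not the information-geometric optimization itself.

```python
# Toy graph: a 4-node path, with 2-dimensional node features.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
W = [[1.0, -1.0], [0.5, 0.5]]

n = len(A)
# Add self-loops, then row-normalise by degree: P = D^{-1}(A + I).
Ah = [[A[i][j] + (i == j) for j in range(n)] for i in range(n)]
deg = [sum(row) for row in Ah]
P = [[Ah[i][j] / deg[i] for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

Hp = matmul(matmul(P, H), W)   # aggregate neighbours, then transform
```

Node 0 averages itself with node 1 before the linear map, which is the locality that makes the layer a graph convolution.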
We combine the processes of resetting and first-passage to define first-passage resetting, where the resetting of a random walk to a fixed position is triggered by a first-passage event of the walk itself. In an infinite domain, first-passage resetting…
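A hedged sketch of the mechanism, with an assumed concrete setup (a 1-D lattice walk started at x0 whose first passage to the origin triggers the reset; the starting point and step count are illustrative):

```python
import random

random.seed(3)

x0 = 5              # assumed reset position
x, resets = x0, 0
for _ in range(200_000):
    x += random.choice((-1, 1))
    if x == 0:      # first-passage event at the origin triggers the reset
        x, resets = x0, resets + 1
```

Unlike resetting at a fixed rate, here the reset times are themselves random first-passage times of the walk, which is what couples the two processes.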