
Optimization in Gradient Networks

Published by: Natali Gulbahce
Publication date: 2007
Research field: Physics
Paper language: English
Author: Natali Gulbahce





Gradient networks can be used to model the dominant structure of complex networks. Previous works have focused on random gradient networks. Here we study gradient networks that minimize jamming on substrate networks with scale-free and Erdős-Rényi structure. We introduce structural correlations and strongly reduce congestion occurring on the network by using a Monte Carlo optimization scheme. This optimization alters the degree distribution and other structural properties of the resulting gradient networks. These results are expected to be relevant for transport and other dynamical processes in real network systems.
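The construction behind the abstract can be illustrated with a small, self-contained sketch. This is not the authors' actual scheme: the jamming measure (fraction of nodes receiving no gradient edge), the swap-based acceptance rule, and the Erdős-Rényi parameters below are all illustrative assumptions.

```python
import random

def gradient_network(adj, h):
    """Each node points to the neighbor (or itself) carrying the largest
    scalar value h; these directed edges form the gradient network."""
    return {i: max([i] + list(nbrs), key=lambda j: h[j])
            for i, nbrs in adj.items()}

def congestion(grad):
    """Jamming factor J = 1 - R/N, where R is the number of nodes that
    receive at least one gradient edge (an assumed congestion measure)."""
    return 1.0 - len(set(grad.values())) / len(grad)

def mc_optimize(adj, h, steps=2000, seed=0):
    """Monte Carlo sketch: swap the field values of two random nodes and
    keep the swap whenever it does not increase the congestion."""
    rng, h = random.Random(seed), dict(h)
    best = congestion(gradient_network(adj, h))
    nodes = list(adj)
    for _ in range(steps):
        a, b = rng.sample(nodes, 2)
        h[a], h[b] = h[b], h[a]
        j = congestion(gradient_network(adj, h))
        if j <= best:
            best = j                 # accept: congestion did not increase
        else:
            h[a], h[b] = h[b], h[a]  # reject: undo the swap
    return h, best

# Illustrative substrate: a small Erdos-Renyi graph with a random scalar field.
rng = random.Random(1)
n, p = 40, 0.1
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:
            adj[i].add(j)
            adj[j].add(i)
h0 = {i: rng.random() for i in range(n)}
j0 = congestion(gradient_network(adj, h0))
h1, j1 = mc_optimize(adj, h0)
print(f"congestion before {j0:.3f}, after {j1:.3f}")  # j1 <= j0 by construction
```

Because only non-increasing swaps are accepted, the optimized congestion can never exceed the initial one; the swaps merely permute the field values, which is how the optimization alters the gradient network's structure without changing the substrate.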


Read also

We investigate classic diffusion with the added feature that a diffusing particle is reset to its starting point each time the particle reaches a specified threshold. In an infinite domain, this process is non-stationary and its probability distribution exhibits rich features. In a finite domain, we define a non-trivial optimization in which a cost is incurred whenever the particle is reset and a reward is obtained while the particle stays near the reset point. We derive the condition to optimize the net gain in this system, namely, the reward minus the cost.
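As a rough numerical illustration of the trade-off described above (not the paper's analytical treatment), the net gain can be estimated by simulation. The lattice walk, the unit reward collected near the reset point, and the per-reset cost below are all illustrative assumptions.

```python
import random

def net_gain(threshold, reset_cost, reward_width, steps=20_000, seed=0):
    """Estimate the per-step net gain (reward minus cost) for a 1D lattice
    random walk that is returned to the origin whenever it reaches
    +/- threshold.  A unit reward accrues on each step the walker is
    within reward_width of the reset point; each reset costs reset_cost."""
    rng = random.Random(seed)
    x, reward, cost = 0, 0, 0.0
    for _ in range(steps):
        x += rng.choice((-1, 1))
        if abs(x) >= threshold:
            cost += reset_cost   # a reset event incurs the cost
            x = 0
        if abs(x) <= reward_width:
            reward += 1          # the walker is near the reset point
    return (reward - cost) / steps

# Sweep the threshold to locate the most profitable resetting distance.
gains = {L: net_gain(L, reset_cost=2.0, reward_width=1) for L in (2, 5, 10, 20)}
print(gains)
```

A small threshold resets often (high cost) but keeps the walker near the reward region; a large threshold resets rarely but lets the walker drift away, which is the competition whose optimum the paper derives analytically.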
Thermal conductivities are routinely calculated in molecular dynamics simulations by keeping the boundaries at different temperatures and measuring the slope of the temperature profile in the bulk of the material, explicitly using Fourier's law of heat conduction. Substantiated by the observation of a distinct linear profile at the center of the material, this approach has also been frequently used in superdiffusive materials, such as nanotubes or polymer chains, which do not satisfy Fourier's law at the system sizes considered. It has been recently argued that this temperature-gradient procedure yields worse results when compared with a method based on the temperature difference at the boundaries, thus taking into account the regions near the boundaries where the temperature profile is not linear. We study a realistic example, nanocomposites formed by adding boron nitride nanotubes to a polymer matrix of amorphous polyethylene, to show that in superdiffusive materials, despite the appearance of a central region with a linear profile, the temperature-gradient method is actually inconsistent with a conductivity that depends on the system size, and thus it should only be used in normal diffusive systems.
It has been shown by several authors that a certain class of composite operators with many fields and gradients endangers the stability of nontrivial fixed points in 2+eps expansions for various models. This problem is so far unresolved. We investigate it in the N-vector model in a 1/N-expansion. By establishing an asymptotic naive addition law for anomalous dimensions, we demonstrate that the first orders in the 2+eps expansion can lead to erroneous interpretations for high-gradient operators. While this makes us cautious against over-interpreting such expansions (either 2+eps or 1/N), the stability problem in the N-vector model persists also at first order in 1/N below three dimensions.
In this work, we propose to employ information-geometric tools to optimize a graph neural network architecture such as the graph convolutional network. More specifically, we develop optimization algorithms for graph-based semi-supervised learning by employing natural gradient information in the optimization process. This allows us to efficiently exploit the geometry of the underlying statistical model or parameter space for optimization and inference. To the best of our knowledge, this is the first work to utilize the natural gradient for the optimization of graph neural networks, and it can be extended to other semi-supervised problems. Efficient computational algorithms are developed, and extensive numerical studies are conducted to demonstrate the superior performance of our algorithms over existing ones such as ADAM and SGD.
We combine the processes of resetting and first-passage to define first-passage resetting, where the resetting of a random walk to a fixed position is triggered by a first-passage event of the walk itself. In an infinite domain, first-passage resetting of isotropic diffusion is non-stationary, with the number of resetting events growing with time as √t. We calculate the resulting spatial probability distribution of the particle analytically, and also obtain this distribution by a geometric path decomposition. In a finite interval, we define an optimization problem that is controlled by first-passage resetting; this scenario is motivated by reliability theory. The goal is to operate a system close to its maximum capacity without experiencing too many breakdowns. However, when a breakdown occurs, the system is reset to its minimal operating point. We define and optimize an objective function that maximizes the reward (being close to maximum operation) minus a penalty for each breakdown. We also investigate extensions of this basic model to include a delay after each reset and to two dimensions. Finally, we study the growth dynamics of a domain whose boundary recedes by a specified amount whenever the diffusing particle reaches it, after which a resetting event occurs. We determine the growth rate of the domain for the semi-infinite line and the finite interval, and find a wide range of behaviors that depend on how far the boundary recedes when the particle hits it.
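The basic mechanism, a reset triggered by the walk's own first-passage event, can be sketched in a few lines. This is an illustrative lattice-walk simulation, not the paper's continuum diffusion calculation; the boundary position and step count are arbitrary choices.

```python
import random

def first_passage_resetting(boundary, steps, seed=0):
    """Simulate a 1D lattice walk started at the origin; each time the
    walk first reaches `boundary`, record the time and reset the walker
    to the origin.  Returns the list of resetting times (their count
    grows roughly as the square root of time for an unbiased walk)."""
    rng = random.Random(seed)
    x, reset_times = 0, []
    for t in range(1, steps + 1):
        x += rng.choice((-1, 1))
        if x == boundary:
            reset_times.append(t)  # first-passage event triggers a reset
            x = 0
    return reset_times

resets = first_passage_resetting(boundary=5, steps=50_000)
print(len(resets), "resetting events")
```

Unlike resetting at a fixed rate, here the reset epochs form a renewal process built from first-passage times, which is what produces the sublinear, √t-like growth in the number of resets.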