
Evolutionary Construction of Geographical Networks with Nearly Optimal Robustness and Efficient Routing Properties

Posted by: Yukio Hayashi
Publication date: 2011
Paper language: English
Author: Yukio Hayashi





Robust and efficient design of networks on a realistic geographical space is one of the important issues for realizing dependable communication systems. In this paper, based on percolation theory and a geometric graph property, we investigate such a design from the following viewpoints: 1) network evolution according to a spatially heterogeneous population, 2) trimodal low degrees for tolerant connectivity against both failures and attacks, and 3) decentralized routing within short paths. Furthermore, we point out the tolerance weakened by geographical constraints on local cycles, and propose a practical strategy of adding a small fraction of shortcut links between randomly chosen nodes in order to improve the robustness to a level similar to that of the optimal bimodal networks with a larger degree $O(\sqrt{N})$ for the network size $N$. These properties will be useful for constructing future ad hoc networks in wide-area communications.
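
As a rough illustration of the shortcut strategy described above, here is a minimal sketch assuming networkx; the random geometric graph stand-in, the 5% shortcut fraction, and the hub-removal robustness measure (average largest-component fraction, in the spirit of the usual R-measure) are illustrative choices, not the paper's exact construction or evaluation protocol.

```python
# Illustrative sketch: add a small fraction of random shortcut links to a
# geographical network and compare robustness under targeted attack.
# All names and parameters here are assumptions, not the paper's method.
import random
import networkx as nx

def add_random_shortcuts(G, fraction=0.05, seed=0):
    """Add fraction*|E| shortcut edges between randomly chosen node pairs."""
    rng = random.Random(seed)
    n_shortcuts = int(fraction * G.number_of_edges())
    nodes = list(G.nodes)
    added = 0
    while added < n_shortcuts:
        u, v = rng.sample(nodes, 2)
        if not G.has_edge(u, v):
            G.add_edge(u, v)
            added += 1
    return G

def robustness_under_attack(G):
    """Average largest-component fraction while removing hubs one by one."""
    H = G.copy()
    N = H.number_of_nodes()
    sizes = []
    for _ in range(N - 1):
        hub = max(H.degree, key=lambda x: x[1])[0]  # highest-degree node
        H.remove_node(hub)
        sizes.append(len(max(nx.connected_components(H), key=len)) / N)
    return sum(sizes) / len(sizes)

# A random geometric graph as a stand-in for a geographical network.
G = nx.random_geometric_graph(200, 0.12, seed=1)
print("before shortcuts:", robustness_under_attack(G))
print("after shortcuts: ", robustness_under_attack(add_random_shortcuts(G, 0.05)))
```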


Read also

In this article, we propose a growing network model based on an optimal policy involving both topological and geographical measures. In this model, at each time step, a new node, with coordinates assigned at random in a $1 \times 1$ square, is added and connected to the previously existing node $i$ that minimizes the quantity $r_i^2/k_i^\alpha$, where $r_i$ is the geographical distance, $k_i$ the degree, and $\alpha$ a free parameter. The degree distribution obeys a power-law form when $\alpha = 1$, and an exponential form when $\alpha = 0$. When $\alpha$ lies in the interval $(0,1)$, the network exhibits a stretched exponential distribution. We prove that the average topological distance increases logarithmically with the network size, indicating the small-world property. Furthermore, we obtain the geographical edge-length distribution, the total geographical length of all edges, and the average geographical distance of the whole network. Interestingly, we find that the total edge length increases sharply when $\alpha$ exceeds the critical value $\alpha_c = 1$, and that the average geographical distance has an upper bound independent of the network size. All the results are obtained analytically with some reasonable approximations, which are well verified by simulations.
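
A minimal simulation sketch of this growth rule follows; the function names are ours, and clamping the first node's degree to 1 (it starts with no links) is an implementation choice not specified in the abstract.

```python
# Sketch of the growth model: each new node with random coordinates in the
# unit square attaches to the existing node i minimizing r_i^2 / k_i^alpha.
import random

def grow_network(N, alpha=1.0, seed=0):
    rng = random.Random(seed)
    pos = [(rng.random(), rng.random()) for _ in range(N)]
    degree = [0] * N
    edges = []
    for t in range(1, N):
        x, y = pos[t]
        # cost r_i^2 / k_i^alpha over existing nodes i < t
        # (degree clamped to 1 so the seed node is attachable)
        best = min(
            range(t),
            key=lambda i: ((pos[i][0] - x) ** 2 + (pos[i][1] - y) ** 2)
            / max(degree[i], 1) ** alpha,
        )
        edges.append((t, best))
        degree[t] += 1
        degree[best] += 1
    return pos, edges, degree

pos, edges, degree = grow_network(1000, alpha=1.0)
print("max degree:", max(degree))  # hub formation expected at alpha = 1
```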
The subjects of the paper are the likelihood method (LM) and the expected Fisher information (FI), considered from the point of view of constructing physical models that originate in the statistical description of phenomena. The master equation case and the structural information principle are derived. Then, a phenomenological description of information transfer is presented. The extreme physical information (EPI) method is reviewed. As a side result, the statistical interpretation of the amplitude of the system is given. The formalism developed in this paper could also be applied in quantum information processing and quantum game theory.
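
For reference, the expected Fisher information mentioned above has the standard textbook definition (not specific to this paper): for a parametric density $p(x|\theta)$,

$I(\theta) = E\left[\left(\partial_\theta \ln p(x|\theta)\right)^2\right] = -E\left[\partial_\theta^2 \ln p(x|\theta)\right]$,

where the second equality holds under the usual regularity conditions.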
In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a comprehensive particle-level view of the event by combining information from the calorimeters and the trackers, significantly improving the detector resolution for jets and the missing transverse momentum. In view of the planned high-luminosity upgrade of the CERN Large Hadron Collider (LHC), it is necessary to revisit existing reconstruction algorithms and ensure that both the physics and computational performance are sufficient in an environment with many simultaneous proton-proton interactions (pileup). Machine learning may offer a prospect for computationally efficient event reconstruction that is well suited to heterogeneous computing platforms, while significantly improving the reconstruction quality over rule-based algorithms for granular detectors. We introduce MLPF, a novel, end-to-end trainable, machine-learned particle-flow algorithm based on parallelizable, computationally efficient, and scalable graph neural networks optimized using a multi-task objective on simulated events. We report the physics and computational performance of the MLPF algorithm on a Monte Carlo dataset of top quark-antiquark pairs produced in proton-proton collisions in conditions similar to those expected for the high-luminosity LHC. The MLPF algorithm improves the physics response with respect to a rule-based benchmark algorithm and demonstrates computationally scalable particle-flow reconstruction in a high-pileup environment.
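
For intuition only, the sketch below shows a single message-passing step of the generic kind such graph neural networks are built from, in plain NumPy. The actual MLPF architecture, input features, and multi-task objective are described in the paper; everything here (shapes, mean aggregation, ReLU) is an assumption.

```python
# One generic graph-network layer: aggregate neighbor features, transform.
import numpy as np

def message_passing_step(X, A, W_self, W_neigh):
    """X: (n_nodes, n_feat) node features, A: (n_nodes, n_nodes) adjacency."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # avoid divide-by-zero
    neigh_mean = (A @ X) / deg                           # mean over neighbors
    return np.maximum(X @ W_self + neigh_mean @ W_neigh, 0.0)  # ReLU

rng = np.random.default_rng(0)
n, f = 6, 4
X = rng.normal(size=(n, f))
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T)      # symmetrize
np.fill_diagonal(A, 0.0)    # no self-loops
H = message_passing_step(X, A, rng.normal(size=(f, f)), rng.normal(size=(f, f)))
print(H.shape)  # (6, 4)
```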
114 - B. Alpert, E. Ferri, D. Bennett 2015
For experiments with high arrival rates, reliable identification of nearly coincident events can be crucial. For calorimetric measurements that directly measure the neutrino mass, such as HOLMES, unidentified pulse pile-ups are expected to be a leading source of experimental error. Although Wiener filtering can be used to recognize pile-up, it suffers from errors due to pulse-shape variation arising from detector nonlinearity, readout dependence on sub-sample arrival times, and stability issues stemming from the ill-posed deconvolution problem of recovering Dirac delta functions from smooth data. Because of these factors, we have developed a processing method that exploits singular value decomposition to (1) separate single-pulse records from piled-up records in training data and (2) construct a model of single-pulse records that accounts for varying pulse shape with amplitude, arrival time, and baseline level, suitable for detecting nearly coincident events. We show that the resulting processing advances can reduce the required performance specifications of the detectors and readout system or, equivalently, enable larger sensor arrays and better constraints on the neutrino mass.
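
A hedged sketch of the SVD idea: build a low-rank basis from clean training pulses, then flag records whose residual after projection onto that subspace is large, as a proxy for pile-up. The toy pulse shape, rank, and scoring below are illustrative assumptions, not the HOLMES pipeline.

```python
import numpy as np

def build_pulse_basis(training_records, rank=3):
    """SVD of (n_records, n_samples) pulses -> top right-singular vectors."""
    _, _, Vt = np.linalg.svd(training_records, full_matrices=False)
    return Vt[:rank]                      # (rank, n_samples) basis

def pileup_score(record, basis):
    """Residual norm after projecting onto the single-pulse subspace."""
    recon = basis.T @ (basis @ record)
    return np.linalg.norm(record - recon)

# Toy data: exponential-tail pulses with random amplitudes.
rng = np.random.default_rng(0)
t = np.arange(512)
single = lambda t0, a: a * np.exp(-np.clip(t - t0, 0, None) / 50.0) * (t >= t0)
train = np.array([single(100, rng.uniform(0.8, 1.2)) for _ in range(200)])
basis = build_pulse_basis(train, rank=3)

clean = single(100, 1.0)
piled = single(100, 1.0) + single(130, 0.5)   # near-coincident second pulse
print("clean:", pileup_score(clean, basis), "piled-up:", pileup_score(piled, basis))
```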
79 - F. Sattin 2017
Time series of observables measured from complex systems often exhibit non-normal statistics: their statistical distributions (PDFs) are not Gaussian and are often skewed, with roughly exponential tails. Departure from Gaussianity is related to the intermittent development of large-scale coherent structures. The existence of these structures is rooted in the nonlinear dynamical equations obeyed by each system; it is therefore expected that some prior knowledge or guess of these equations is needed if one wishes to infer the corresponding PDF, and conversely, that empirical knowledge of the PDF provides information about the underlying dynamics. In this work we suggest that this is not always necessary. We show how, under some assumptions, a formal evolution equation for the PDF $p(x)$ can be written down, corresponding to the progressive accumulation of measurements of the generic observable $x$. The limiting solution of this equation is computed analytically and shown to interpolate between some of the most common distributions: the Gamma, Beta, and Gaussian PDFs. The control parameter is just the ratio between the rms of the fluctuations and the range of allowed values. Thus, no information about the dynamics is required.
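
The control parameter named above is simple to compute from data; here is a minimal sketch (the function name and the Beta-distributed toy sample are ours, and the interpolating limiting solution itself is not reproduced).

```python
# Control parameter: rms of the fluctuations divided by the allowed range.
import numpy as np

def control_parameter(x, x_min, x_max):
    """Standard deviation of the sample over the range of allowed values."""
    return np.std(x) / (x_max - x_min)

rng = np.random.default_rng(0)
x = rng.beta(2.0, 5.0, size=10_000)   # skewed, bounded sample on [0, 1]
print(control_parameter(x, 0.0, 1.0))
```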