
Predictive protocol of flocks with small-world connection pattern

Posted by: Michael Chen
Publication date: 2008
Research field: Physics
Paper language: English


By introducing a predictive mechanism with small-world connections, we propose a new motion protocol for self-driven flocks. The small-world connections are implemented by randomly adding long-range interactions from the leader to a few distant agents, called pseudo-leaders. The leader can directly affect the pseudo-leaders and thereby efficiently influence all the other agents through them. Moreover, these pseudo-leaders are able to predict the leader's motion several steps ahead and use this information in decision making toward coherent flocking with a more stable formation. It is shown that drastic improvement can be achieved in terms of both consensus performance and communication cost. From an industrial engineering point of view, the current protocol allows for a significant improvement in the cohesion and rigidity of the formation at a fairly low cost: adding a few long-range links embedded with predictive capabilities. Significantly, this work uncovers an important feature of flocks: predictive capability and long-range links can compensate for each other's insufficiency. These conclusions are valid for both the attractive/repulsive swarm model and the Vicsek model.
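The abstract does not give the update rule explicitly. The following is a minimal Vicsek-style sketch of the idea: ordinary agents align with metric neighbours, while a few randomly chosen pseudo-leaders also receive a long-range link carrying a prediction of the leader's heading several steps ahead. The prediction horizon `H_PRED`, the leader's turn rate `OMEGA`, and the number of pseudo-leaders are all assumed for illustration, not taken from the paper.

```python
import math, cmath, random

random.seed(0)
N, R, STEPS, SPEED = 40, 0.3, 200, 0.02   # agents, interaction radius, steps, speed
H_PRED, OMEGA = 5, 0.01                   # assumed prediction horizon and leader turn rate
LEADER = 0

pos = [[random.random(), random.random()] for _ in range(N)]   # unit torus
theta = [random.uniform(-math.pi, math.pi) for _ in range(N)]
pseudo = set(random.sample(range(1, N), 3))  # a few random long-range links

def near(i, j):
    # squared toroidal distance compared against the interaction radius
    dx = abs(pos[i][0] - pos[j][0]); dx = min(dx, 1 - dx)
    dy = abs(pos[i][1] - pos[j][1]); dy = min(dy, 1 - dy)
    return dx*dx + dy*dy < R*R

def order(ts):
    # Vicsek order parameter: 1 means perfectly aligned headings
    return abs(sum(cmath.exp(1j*t) for t in ts)) / len(ts)

phi_start = order(theta)
for _ in range(STEPS):
    new = theta[:]
    for i in range(N):
        if i == LEADER:
            new[i] = theta[i] + OMEGA     # leader turns at a constant rate
            continue
        # circular mean of neighbour headings (noise-free Vicsek alignment)
        avg = sum(cmath.exp(1j*theta[j]) for j in range(N) if near(i, j))
        if i in pseudo:
            # pseudo-leader: long-range link plus an H_PRED-step
            # prediction of the leader's constant-rate turning
            avg += cmath.exp(1j*(theta[LEADER] + H_PRED*OMEGA))
        if abs(avg) > 1e-12:
            new[i] = cmath.phase(avg)
    theta = new
    for i in range(N):
        pos[i][0] = (pos[i][0] + SPEED*math.cos(theta[i])) % 1.0
        pos[i][1] = (pos[i][1] + SPEED*math.sin(theta[i])) % 1.0

phi_end = order(theta)
```

Under these assumptions the order parameter rises from near zero (random headings) toward one as the leader's heading propagates through the pseudo-leaders to the rest of the flock.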


Read also

Linyu Lin, Nam Dinh (2020)
In nuclear engineering, modeling and simulations (M&Ss) are widely applied to support risk-informed safety analysis. Since nuclear safety analysis has important implications, a convincing validation process is needed to assess simulation adequacy, i.e., the degree to which M&S tools can adequately represent the system quantities of interest. However, due to data gaps, validation becomes a decision-making process under uncertainties. Expert knowledge and judgments are required to collect, choose, characterize, and integrate evidence toward the final adequacy decision. However, in validation frameworks such as CSAU (Code Scaling, Applicability, and Uncertainty; NUREG/CR-5249) and EMDAP (Evaluation Model Development and Assessment Process; RG 1.203), this decision-making process is largely implicit and obscure. When scenarios are complex, knowledge biases and unreliable judgments can be overlooked, which could increase uncertainty in the simulation adequacy result and the corresponding risks. Therefore, a framework is required to formalize the decision-making process for simulation adequacy in a practical, transparent, and consistent manner. This paper proposes Predictive Capability Maturity Quantification using Bayesian networks (PCMQBN), a quantified framework for assessing simulation adequacy based on information collected from validation activities. A case study is prepared for evaluating the adequacy of a Smoothed Particle Hydrodynamics simulation in predicting the hydrodynamic forces on static structures during an external flooding scenario. Compared with qualitative and implicit adequacy assessments, PCMQBN is able to improve confidence in the simulation adequacy result and to reduce expected loss in the risk-informed safety analysis.
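The abstract does not spell out the network structure, but the core of any such quantification is a Bayesian update of the adequacy belief from validation evidence. A minimal two-node sketch, with all numbers hypothetical rather than taken from the paper:

```python
# All numbers are hypothetical illustrations, not values from the paper.
p_adequate = 0.5                 # prior belief that the simulation is adequate
p_pass_given_adequate = 0.9      # P(validation test passes | adequate)
p_pass_given_inadequate = 0.3    # P(validation test passes | inadequate)

# Bayes' rule after observing one passed validation test
num = p_pass_given_adequate * p_adequate
den = num + p_pass_given_inadequate * (1 - p_adequate)
posterior = num / den            # 0.45 / 0.60 = 0.75
```

A full Bayesian network chains many such conditional tables (evidence quality, scenario coverage, expert reliability) into one posterior adequacy estimate, which is what makes the otherwise implicit judgment auditable.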
In this paper, we propose the SPR (sparse phase retrieval) method, a new phase retrieval method for coherent x-ray diffraction imaging (CXDI). Conventional phase retrieval methods effectively solve the problem for high signal-to-noise-ratio measurements, but would not be sufficient for single-biomolecule imaging, which is expected to be realized with femtosecond x-ray free-electron-laser pulses. The SPR method is based on Bayesian statistics. It does not need the object-boundary constraint required by the commonly used hybrid input-output (HIO) method; instead, a prior distribution defined by an exponential distribution is used for the estimation. Simulation results demonstrate that the proposed method reconstructs the electron density under noisy conditions even when some central pixels are masked.
In the majority of molecular optimization tasks, predictive machine learning (ML) models are limited by the unavailability and cost of generating large experimental datasets for the specific task. To circumvent this limitation, ML models are trained on large theoretical datasets or on experimental indicators of molecular suitability that are either publicly available or inexpensive to acquire. These approaches produce a set of candidate molecules that have to be ranked using limited experimental data or expert knowledge. Under the assumption that structure is related to functionality, here we use a molecular fragment-based graphical autoencoder to generate unique structural fingerprints to efficiently search through the candidate set. We demonstrate that fragment-based graphical autoencoding reduces the error in predicting physical characteristics such as the solubility and partition coefficient in the small-data regime compared to extended circular fingerprints and string-based approaches. We further demonstrate that this approach is capable of providing insight into real-world molecular optimization problems, such as searching for stabilization additives in organic semiconductors, by accurately predicting 92% of test molecules given 69 training examples. This task is a model example of black-box molecular optimization, as there is minimal theoretical and experimental knowledge for accurately predicting the suitability of the additives.
Peter Sunehag (2007)
We define an entropy based on a chosen governing probability distribution. If a certain kind of measurement follows such a distribution, it also gives us a suitable scale to study it with. This scale appears as a link function that is applied to the measurements. A link function can also be used to define an alternative structure on a set. We will see that generalized entropies are equivalent to using a different scale for the phenomenon under study than the scale the measurements arrive on. An extensive measurement scale is here a scale on which measurements fulfill a memoryless property. We conclude that the alternative algebraic structure defined by the link function must be used if we continue to work on the original scale. We derive Tsallis entropy by using a generalized log-logistic governing distribution. Typical applications of Tsallis entropy are related to phenomena with power-law behaviour.
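For concreteness, the Tsallis entropy the abstract derives has the standard form below, shown together with the q-logarithm that plays the role of the link function (the alternative measurement scale); the Shannon limit is recovered as q → 1:

```latex
S_q(p) = \frac{1}{q-1}\left(1 - \sum_i p_i^{\,q}\right),
\qquad
\ln_q x = \frac{x^{1-q} - 1}{1 - q},
\qquad
\lim_{q \to 1} S_q(p) = -\sum_i p_i \ln p_i .
```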
Variants of fluctuation theorems recently discovered in the statistical mechanics of non-equilibrium processes may be used for the efficient determination of high-dimensional integrals, as typically occur in Bayesian data analysis. In particular, for multimodal distributions, Monte Carlo procedures not relying on perfect equilibration are advantageous. We provide a comprehensive statistical error analysis for the determination of the prior-predictive value in a Bayes problem, building on a variant of the Jarzynski equation. Special care is devoted to the characterization of the bias intrinsic to the method. We also discuss the determination of averages over multimodal posterior distributions with the help of a variant of the Crooks theorem. All our findings are verified by extensive numerical simulations of two model systems with bimodal likelihoods.
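The Jarzynski-equation route to the prior-predictive value is closely related to annealed importance sampling: trajectories are driven from the prior to the posterior through a sequence of intermediate distributions, and the exponential average of the accumulated "work" estimates the evidence. A minimal sketch on a conjugate Gaussian toy problem (all parameter choices are illustrative, not from the paper), where the true value is known in closed form:

```python
import math, random

random.seed(1)
Y, SIGMA = 1.0, 0.5          # observed datum and likelihood scale (toy choices)
N_CHAINS, K = 1500, 25       # number of work trajectories and annealing steps

def log_prior(t):            # standard normal prior
    return -0.5*t*t - 0.5*math.log(2*math.pi)

def log_lik(t):              # Gaussian likelihood N(Y | t, SIGMA^2)
    return -0.5*((Y - t)/SIGMA)**2 - math.log(SIGMA) - 0.5*math.log(2*math.pi)

betas = [k/(K - 1) for k in range(K)]    # inverse-temperature schedule, 0 -> 1
log_w = []
for _ in range(N_CHAINS):
    t = random.gauss(0.0, 1.0)           # start in equilibrium at beta = 0
    w = 0.0
    for b_prev, b in zip(betas, betas[1:]):
        w += (b - b_prev)*log_lik(t)     # work increment for this beta step
        # one Metropolis move targeting prior * likelihood**b
        prop = t + 0.5*random.gauss(0.0, 1.0)
        dlp = (log_prior(prop) + b*log_lik(prop)) - (log_prior(t) + b*log_lik(t))
        if math.log(random.random()) < dlp:
            t = prop
    log_w.append(w)

# Jarzynski-type estimator: Z ~ mean of exp(log-weights), done stably (log-sum-exp)
m = max(log_w)
log_z = m + math.log(sum(math.exp(w - m) for w in log_w)/N_CHAINS)
# closed-form prior-predictive value: N(Y | 0, 1 + SIGMA^2)
log_z_true = -0.5*math.log(2*math.pi*(1 + SIGMA**2)) - Y*Y/(2*(1 + SIGMA**2))
```

The estimator is biased for finite sample sizes (the Jensen gap of the exponential average), which is exactly the bias the paper's error analysis characterizes; on this easy unimodal toy problem the estimate lands close to the analytic value.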