
Ensemble annealing of complex physical systems

Added by Michael Habeck
Publication date: 2015
Field: Physics
Language: English





Algorithms for simulating complex physical systems or solving difficult optimization problems often resort to an annealing process. Rather than simulating the system at the temperature of interest, an annealing algorithm starts at a temperature that is high enough to ensure ergodicity and gradually decreases it until the destination temperature is reached. This idea is used in popular algorithms such as parallel tempering and simulated annealing. A general problem with annealing methods is that they require a temperature schedule. Choosing well-balanced temperature schedules can be tedious and time-consuming. Imbalanced schedules can have a negative impact on the convergence, runtime and success of annealing algorithms. This article outlines a unifying framework, ensemble annealing, that combines ideas from simulated annealing, histogram reweighting and nested sampling with concepts in thermodynamic control. Ensemble annealing simultaneously simulates a physical system and estimates its density of states. The temperatures are lowered not according to a fixed, predetermined schedule but adaptively, so as to maintain a constant relative entropy between successive ensembles. After each step on the temperature ladder, an estimate of the density of states is updated and a new temperature is chosen. Ensemble annealing is highly practical and broadly applicable. This is illustrated for various systems including Ising, Potts, and protein models.
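
To make the adaptive temperature selection concrete, the following is a minimal Python sketch of how such a scheme could look for a small 2D Ising model. It is not the authors' implementation: it reweights the most recent batch of energy samples instead of maintaining a full density-of-states estimate, and the lattice size, sweep counts and target relative entropy are illustrative choices only.

    import numpy as np

    def ising_energy(s):
        # Nearest-neighbour energy of a periodic 2D Ising configuration (J = 1).
        return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

    def metropolis_sweep(s, beta, rng):
        # One sweep of single-spin Metropolis updates at inverse temperature beta.
        L = s.shape[0]
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            dE = 2.0 * s[i, j] * (s[(i + 1) % L, j] + s[(i - 1) % L, j]
                                  + s[i, (j + 1) % L] + s[i, (j - 1) % L])
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                s[i, j] *= -1
        return s

    def relative_entropy(E, beta, beta_new):
        # KL divergence D(p_new || p_old) between the canonical ensembles at
        # beta_new and beta, estimated by reweighting energies sampled at beta.
        w = np.exp(-(beta_new - beta) * (E - E.min()))
        log_Z_ratio = np.log(w.mean()) - (beta_new - beta) * E.min()
        mean_E_new = np.sum(E * w) / np.sum(w)
        return (beta - beta_new) * mean_E_new - log_Z_ratio

    def next_beta(E, beta, target, beta_max=2.0):
        # Bisection for the next inverse temperature, assuming the divergence
        # grows monotonically with the temperature gap.
        if relative_entropy(E, beta, beta_max) < target:
            return beta_max
        lo, hi = beta, beta_max
        for _ in range(50):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if relative_entropy(E, beta, mid) < target else (lo, mid)
        return 0.5 * (lo + hi)

    rng = np.random.default_rng(0)
    spins = rng.choice([-1, 1], size=(16, 16))
    beta = 0.05                                   # hot enough to be ergodic
    while beta < 0.6:                             # anneal towards the target temperature
        samples = np.array([ising_energy(metropolis_sweep(spins, beta, rng))
                            for _ in range(200)])
        beta = next_beta(samples, beta, target=0.1)
        print(f"next inverse temperature: {beta:.4f}")

The step sizes produced this way shrink automatically near the phase transition, where the ensemble changes rapidly, which is exactly the behaviour a hand-tuned schedule tries to approximate.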



Related research

Many applications in computational science require computing the elements of a function of a large matrix. A commonly used approach is based on the evaluation of the eigenvalue decomposition, a task that, in general, involves a computing time that scales with the cube of the size of the matrix. We present here a method that can be used to evaluate the elements of a function of a positive-definite matrix with a scaling that is linear for sparse matrices and quadratic in the general case. This methodology is based on the properties of the dynamics of a multidimensional harmonic potential coupled with colored-noise generalized Langevin equation (GLE) thermostats. This f-thermostat (FTH) approach allows us to calculate elements of functions of a positive-definite matrix directly by carefully tailoring the properties of the stochastic dynamics. We demonstrate the scaling and the accuracy of this approach for both dense and sparse problems and compare the results with other established methodologies.
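
For reference, the cubic-cost baseline mentioned above, evaluating an element of f(A) through the eigenvalue decomposition, can be written in a few lines. This is the conventional route the paper improves upon, not the f-thermostat method itself; the test matrix and the chosen function are arbitrary illustrations.

    import numpy as np

    def matfun_element(A, f, i, j):
        # f(A)_ij = sum_k f(lambda_k) U_ik U_jk, with A = U diag(lambda) U^T.
        lam, U = np.linalg.eigh(A)                # O(n^3) for a dense n x n matrix
        return np.sum(f(lam) * U[i, :] * U[j, :])

    rng = np.random.default_rng(1)
    B = rng.standard_normal((100, 100))
    A = B @ B.T + 100.0 * np.eye(100)             # a positive-definite test matrix
    print(matfun_element(A, np.sqrt, 3, 7))       # element (3, 7) of the matrix square root
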
Marco Winkler, 2015
In the course of the growth of the Internet and due to the increasing availability of data, over the last two decades the field of network science has established itself as a research area in its own right. With quantitative scientists from computer science, mathematics, and physics working on datasets from biology, economics, sociology, political science, and many other fields, network science serves as a paradigm for interdisciplinary research. One of the major goals in network science is to unravel the relationship between topological graph structure and a network's function. As evidence suggests, systems from the same fields, i.e. with similar function, tend to exhibit similar structure. However, it is still unclear whether a similar graph structure automatically implies similar function. This dissertation aims to help bridge this gap, focusing in particular on the role of triadic structures. After a general introduction to the main concepts of network science, existing work devoted to the relevance of triadic substructures is reviewed. A major challenge in modeling such structure is the fact that not all three-node subgraphs can be specified independently of each other, as pairs of nodes may participate in multiple triadic subgraphs. In order to overcome this obstacle, a novel class of generative network models based on pair-disjoint triadic building blocks is suggested. It is further investigated whether triad motifs - subgraph patterns which appear significantly more frequently than expected at random - occur homogeneously or heterogeneously distributed over graphs. Finally, the influence of triadic substructure on the evolution of dynamical processes acting on their nodes is studied. It is observed that certain motifs impose clear signatures on the system's dynamics, even when embedded in a larger network structure.
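
As a concrete illustration of the motif notion used above (subgraph patterns that are more frequent than expected at random), the following sketch compares the triangle count of a small graph against degree-preserving randomizations. The example graph and the number of randomizations are illustrative choices, not data from the dissertation.

    import networkx as nx
    import numpy as np

    def triangle_count(G):
        # Each triangle is seen once from each of its three nodes.
        return sum(nx.triangles(G).values()) // 3

    G = nx.karate_club_graph()
    observed = triangle_count(G)

    # Null model: rewire edges while preserving every node's degree.
    null = []
    for seed in range(100):
        R = G.copy()
        nx.double_edge_swap(R, nswap=10 * G.number_of_edges(), max_tries=10**5, seed=seed)
        null.append(triangle_count(R))

    z = (observed - np.mean(null)) / np.std(null)
    print(f"observed triangles: {observed}, z-score against randomized graphs: {z:.2f}")
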
We survey the application of a relatively new branch of statistical physics, community detection, to data mining. In particular, we focus on the diagnosis of materials and automated image segmentation. Community detection describes the quest of partitioning a complex system involving many elements into optimally decoupled subsets, or communities, of such elements. We review a multiresolution variant which is used to ascertain structures at different spatial and temporal scales. Significant patterns are obtained by examining the correlations between different independent solvers. Similar to other combinatorial optimization problems in the NP complexity class, community detection exhibits several phases. Typically, illuminating orders are revealed by choosing parameters that lead to extremal information-theoretic correlations.
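
The following is a minimal illustration of community detection in the generic sense described above, using modularity maximization as implemented in networkx; the multiresolution, multi-solver analysis surveyed in the paper goes well beyond this sketch.

    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    G = nx.les_miserables_graph()                 # a small co-occurrence network
    communities = greedy_modularity_communities(G)
    for k, nodes in enumerate(communities):
        print(f"community {k}: {len(nodes)} nodes, e.g. {sorted(nodes)[:3]}")
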
The parallel annealing method is one of the promising approaches for large-scale simulations, since it is potentially scalable on any parallel architecture. We present an implementation of the algorithm in a hybrid programming model combining CUDA and MPI. The challenge is to keep all general-purpose graphics processing unit (GPU) devices as busy as possible by redistributing replicas, and to do so efficiently. We provide details of tests on Intel Skylake/Nvidia V100 based hardware running more than two million replicas of the Ising model sample in parallel. The results are encouraging: the acceleration approaches the ideal linear scaling as the complexity of the simulated system grows.
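
The following single-process numpy sketch shows the replica picture behind such simulations: many Ising replicas at different temperatures updated in lock-step, with the usual replica-exchange swaps between neighbouring temperatures. The actual implementation discussed in the paper distributes the replicas over GPUs with CUDA and MPI; the replica count, lattice size and sweep counts here are illustrative only.

    import numpy as np

    rng = np.random.default_rng(0)
    R, L = 64, 32                                 # number of replicas, lattice size
    betas = np.linspace(0.1, 1.0, R)              # one inverse temperature per replica
    spins = rng.choice([-1, 1], size=(R, L, L))

    def energies(s):
        # Nearest-neighbour Ising energy of every replica (periodic boundaries, J = 1).
        return -np.sum(s * (np.roll(s, 1, 1) + np.roll(s, 1, 2)), axis=(1, 2))

    def sweep(s, betas, rng):
        # One Metropolis sweep applied to all replicas in lock-step.
        for _ in range(L * L):
            i, j = rng.integers(L, size=2)
            nb = (s[:, (i + 1) % L, j] + s[:, (i - 1) % L, j]
                  + s[:, i, (j + 1) % L] + s[:, i, (j - 1) % L])
            dE = 2.0 * s[:, i, j] * nb
            accept = (dE <= 0) | (rng.random(R) < np.exp(-betas * dE))
            s[accept, i, j] *= -1
        return s

    for step in range(100):
        spins = sweep(spins, betas, rng)
        E = energies(spins)
        # Replica exchange between neighbouring temperatures.
        for k in range(step % 2, R - 1, 2):
            delta = (betas[k + 1] - betas[k]) * (E[k + 1] - E[k])
            if delta >= 0 or rng.random() < np.exp(delta):
                spins[[k, k + 1]] = spins[[k + 1, k]]
                E[[k, k + 1]] = E[[k + 1, k]]
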
The goal of response theory, in each of its many statistical mechanical formulations, is to predict the perturbed response of a system from knowledge of the unperturbed state and of the applied perturbation. A recent angle on the problem focuses on providing a method to predict the change in one observable of the system by using the change in a second observable as a surrogate for the actual forcing. Such a viewpoint tries to address the very relevant problem of causal links within complex systems when only incomplete information is available. We present here a method for quantifying and ranking the predictive ability of observables and use it to investigate the response of a paradigmatic spatially extended system, the Lorenz 96 model. We perturb the system locally and then study to what extent a given local observable can predict the behaviour of a separate local observable. We show that this approach can reveal insights into the way a signal propagates inside the system. We also show that the procedure becomes more efficient if one considers multiple acting forcings and, correspondingly, multiple observables as predictors of the observable of interest.
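
A minimal sketch of the setting described above: the Lorenz 96 model with a locally perturbed forcing, comparing the time-averaged response of two local observables at different distances from the perturbation. The perturbation site, observable sites and integration parameters are illustrative choices, not those used in the paper.

    import numpy as np

    N, F = 36, 8.0                                # number of sites, standard forcing

    def l96_rhs(x, forcing):
        # dx_i/dt = (x_{i+1} - x_{i-2}) * x_{i-1} - x_i + F_i
        return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

    def integrate(x, forcing, dt=0.01, steps=5000):
        # Fixed-step fourth-order Runge-Kutta integration, returning the trajectory.
        traj = np.empty((steps, N))
        for t in range(steps):
            k1 = l96_rhs(x, forcing)
            k2 = l96_rhs(x + 0.5 * dt * k1, forcing)
            k3 = l96_rhs(x + 0.5 * dt * k2, forcing)
            k4 = l96_rhs(x + dt * k3, forcing)
            x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            traj[t] = x
        return x, traj

    rng = np.random.default_rng(0)
    x0 = F + 0.01 * rng.standard_normal(N)
    x0, _ = integrate(x0, np.full(N, F))          # spin up onto the attractor

    _, unperturbed = integrate(x0, np.full(N, F))
    forcing = np.full(N, F)
    forcing[0] += 1.0                             # local perturbation of the forcing at site 0
    _, perturbed = integrate(x0, forcing)

    # Crude time-averaged responses of two local observables at different distances
    # from the perturbation (much longer averages would be needed in practice).
    for site in (1, 10):
        delta = perturbed[:, site].mean() - unperturbed[:, site].mean()
        print(f"mean response of x_{site}: {delta:+.3f}")
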
