We explore the derivation of distributed parameter system evolution laws (in particular, partial differential operators and the associated partial differential equations, PDEs) from spatiotemporal data. This is, of course, a classical identification problem; our focus here is on the use of manifold learning techniques (in particular, variations of Diffusion Maps) in conjunction with neural network learning algorithms, which allow us to attempt this task even when the dependent variables, and even the independent variables, of the PDE are not known a priori and must themselves be derived from the data. The similarity measure used in Diffusion Maps for detecting coarse dependent variables involves distances between local particle distribution observations; for independent variable detection we use distances between local short-time dynamics. We demonstrate each approach on an illustrative, established PDE example. Such variable-free, emergent-space identification algorithms connect naturally with equation-free multiscale computation tools.
Modeling a high-dimensional Hamiltonian system in reduced dimensions with respect to coarse-grained (CG) variables can greatly reduce computational cost and enable efficient bottom-up prediction of the system's main features in many applications. However, the CG model usually exhibits significantly altered dynamics because degrees of freedom are lost upon coarse-graining. Previous efforts to establish CG models that faithfully preserve dynamics have mainly focused on equilibrium systems, yet many soft matter systems are known to be out of equilibrium. The present work therefore concerns non-equilibrium systems and enables accurate and efficient CG modeling that preserves non-equilibrium dynamics and is generally applicable to any non-equilibrium process and any observable of interest. To this end, the dynamic equation of a CG variable is built in the form of the non-stationary generalized Langevin equation (nsGLE) to account for the dependence of non-equilibrium processes on the initial conditions; the two-time memory kernel is determined from data on the two-time auto-correlation function of the trajectory-averaged non-equilibrium observable of interest. By embedding the non-stationary, non-Markovian process in an extended stochastic framework, an explicit form of the non-stationary random noise in the nsGLE is introduced, and the cost of solving the nsGLE to predict the non-equilibrium dynamics of the CG variable is significantly reduced. To establish and exploit the equivalence of the nsGLE and the extended dynamics, the memory kernel is parameterized in a two-time exponential expansion; a data-driven hybrid optimization process is proposed for this parameterization, a non-convex and high-dimensional optimization problem.
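The input to the nsGLE construction is the two-time auto-correlation function C(t, t') = ⟨A(t)A(t')⟩ estimated over an ensemble of non-equilibrium trajectories. A minimal sketch of that estimation step, using a hypothetical Ornstein-Uhlenbeck-like ensemble as a stand-in for real trajectory data (not the paper's system):

```python
import numpy as np

def two_time_autocorrelation(A):
    """Two-time auto-correlation C(t, t') = <A(t) A(t')> of an observable,
    averaged over an ensemble of non-equilibrium trajectories.
    A has shape (n_trajectories, n_times); no stationarity is assumed,
    so C depends on both times, not only on the lag t - t'."""
    return A.T @ A / A.shape[0]

# Hypothetical ensemble: overdamped relaxations all started from the same
# non-equilibrium initial condition A(0) = 2 (illustrative stand-in data).
rng = np.random.default_rng(1)
n_traj, n_t, dt, gamma = 4000, 100, 0.01, 1.0
A = np.empty((n_traj, n_t))
A[:, 0] = 2.0
for k in range(n_t - 1):
    # Euler-Maruyama step of dA = -gamma*A dt + sqrt(2*gamma) dW
    A[:, k + 1] = (A[:, k] - gamma * A[:, k] * dt
                   + np.sqrt(2 * gamma * dt) * rng.normal(size=n_traj))

C = two_time_autocorrelation(A)
# Non-stationarity check: the equal-time correlation C(t, t) decays from 4
# at t = 0 toward the stationary value, so C genuinely depends on both times.
```

From such a C(t, t') matrix, the two-time memory kernel of the nsGLE is then extracted; that inverse step is the non-convex, high-dimensional optimization the abstract addresses.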
The purpose of physics is to describe nature from elementary particles all the way up to cosmological objects such as clusters of galaxies and black holes. Although a unified description of this whole spectrum of phenomena is desirable, it would be highly impractical; to avoid getting lost in unnecessary details, effective descriptions are mandatory. Here we analyze the dynamics that may emerge from a full quantum description when one does not have access to all the degrees of freedom of a system. More concretely, we describe the properties of the dynamics that arise from quantum mechanics if one has access only to a coarse-grained description of the system. We find that the effective maps are not necessarily of Kraus form, owing to correlations between accessible and inaccessible degrees of freedom, and that the distance between two effective states may increase under the action of the effective map. We expect our framework to be useful for addressing questions such as the thermalization of closed quantum systems, as well as the description of measurements in quantum mechanics.
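A minimal numerical illustration of the second finding — the distance between two effective states growing under the effective dynamics — using a toy two-qubit example with a partial-trace coarse-graining (an illustrative construction, not the paper's general framework):

```python
import numpy as np

def ptrace_B(rho):
    """Partial trace over the second (inaccessible) qubit of a 4x4 state."""
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

def trace_distance(a, b):
    """Trace distance (1/2)||a - b||_1 between two density matrices."""
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(a - b)))

# Global unitary acting on the full (accessible + inaccessible) system.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Two joint states with identical effective (reduced) states but different
# correlations with the inaccessible qubit:
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # Bell state
rho1 = np.outer(phi, phi.conj())                          # correlated
rho2 = np.eye(4, dtype=complex) / 4                       # uncorrelated

d_before = trace_distance(ptrace_B(rho1), ptrace_B(rho2))  # both reduce to I/2
d_after = trace_distance(ptrace_B(CNOT @ rho1 @ CNOT.conj().T),
                         ptrace_B(CNOT @ rho2 @ CNOT.conj().T))
# d_before = 0 but d_after = 1/2: the effective distance increased, which a
# Kraus (CPTP) map on the accessible qubit alone could never do — a signature
# of correlations between accessible and inaccessible degrees of freedom.
```

Trace distance is contractive under every CPTP map, so its growth here certifies that the induced effective map is not of Kraus form.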
Discretization of phase space usually nullifies chaos in dynamical systems. We show that if randomness is associated with the discretization, dynamical chaos may survive and be indistinguishable from that of the original chaotic system when an entropic, coarse-grained analysis is performed. The relevance of this phenomenon to the problem of quantum chaos is discussed.
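The effect can be sketched with the discretized doubling map x → 2x mod 1 on N = 2^12 grid points, where a random ±1 grid-cell kick stands in for randomized round-off (an illustrative choice of randomness, not necessarily the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(2)
N, n_bins, n_ens, steps = 2**12, 16, 5000, 20

def coarse_entropy(i):
    """Shannon entropy of the ensemble over n_bins coarse cells."""
    p = np.bincount(i * n_bins // N, minlength=n_bins) / len(i)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Ensemble of states of the discretized doubling map, stored as integers
# i in {0, ..., N-1}, all starting inside a single coarse cell.
i0 = rng.integers(0, N // n_bins, size=n_ens)

# (a) Deterministic discretization: i -> 2i mod N.  Since N = 2^12, every
# trajectory collapses onto the fixed point 0 within 12 steps: chaos is gone
# and the coarse-grained entropy drops to zero.
i = i0.copy()
for _ in range(steps):
    i = (2 * i) % N
S_det = coarse_entropy(i)

# (b) Stochastic discretization: the same map plus a random +-1 grid-cell
# kick (a stand-in for randomized round-off).  The coarse-grained entropy
# now saturates near log(n_bins), as for the continuous chaotic map.
i = i0.copy()
for _ in range(steps):
    i = (2 * i + rng.integers(-1, 2, size=n_ens)) % N
S_rand = coarse_entropy(i)
```

The entropic, coarse-grained comparison (S_det ≈ 0 versus S_rand near the maximal value log 16 ≈ 2.77) is exactly the kind of diagnostic the abstract refers to.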
During the last decade, coarse-grained nucleotide models have emerged that allow us to simulate DNA and RNA on unprecedented time and length scales. Among them is oxDNA, a coarse-grained, sequence-specific model that captures the hybridisation transition of DNA and many structural properties of single- and double-stranded DNA. oxDNA was previously only available as standalone software, but has now been implemented in the popular LAMMPS molecular dynamics code. This article describes the new implementation and analyses its parallel performance. Practical applications are presented that focus on single-stranded DNA, an area of research that has so far been under-investigated. The LAMMPS implementation of oxDNA significantly lowers the entry barrier for using the oxDNA model, and facilitates future code development as well as interfacing with existing LAMMPS functionality and with other coarse-grained and atomistic DNA models.
Under any Multiclass Classification (MCC) setting defined by a collection of labeled point-clouds, each specified by a feature set, we extract only stochastic partial orderings from all possible triplets of point-clouds, without explicitly measuring the three cloud-to-cloud distances. We demonstrate that such a collection of partial orderings can efficiently compute a label-embedding tree geometry on the label space. This tree in turn gives rise to a predictive graph, a network with precisely weighted linkages. These two multiscale geometries are taken as the coarse-scale information content of MCC. Jointly, they shed light on explainable knowledge about why and how labeling comes about, and facilitate error-free prediction with potentially multiple candidate labels supported by the data. To reveal within-label heterogeneity, we further label the naturally occurring clusters within each point-cloud and likewise derive a multiscale geometry as the fine-scale information content of the data. This fine-scale endeavor shows that our computational proposal indeed scales to MCC settings with a large label space. Overall, the computed multiscale collection of data-driven patterns and knowledge will serve as a basis for constructing visible and explainable subject-matter intelligence regarding the system of interest.
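One way the triplet step could work — estimating, for a label triplet, which pair of point-clouds is closest by voting over randomly sampled point pairs rather than computing full cloud-to-cloud distances — is sketched below (the sampling scheme and the three Gaussian clouds are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(3)

def triplet_ordering(clouds, i, j, k, n_pairs=200):
    """For labels (i, j, k), estimate which pair of point-clouds is closest
    by comparing randomly sampled point pairs (a stochastic partial
    ordering), never computing the three full cloud-to-cloud distances."""
    pairs = [(i, j), (i, k), (j, k)]
    wins = np.zeros(3)
    for _ in range(n_pairs):
        # One randomly drawn point per cloud for each candidate pair
        d = [np.linalg.norm(clouds[a][rng.integers(len(clouds[a]))]
                            - clouds[b][rng.integers(len(clouds[b]))])
             for a, b in pairs]
        wins[np.argmin(d)] += 1
    return pairs[int(np.argmax(wins))]   # pair voted closest most often

# Hypothetical labels: three Gaussian clouds, with labels 0 and 1 nearby
# and label 2 far away, so the ordering should report (0, 1) as closest.
clouds = {0: rng.normal([0, 0], 0.5, (300, 2)),
          1: rng.normal([1, 0], 0.5, (300, 2)),
          2: rng.normal([8, 8], 0.5, (300, 2))}
closest = triplet_ordering(clouds, 0, 1, 2)
```

Aggregating such votes over all label triplets yields the pairwise-closeness evidence from which a label-embedding tree can be built, which is what makes the approach scalable to large label spaces.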