
Learning time-stepping by nonlinear dimensionality reduction to predict magnetization dynamics

Added by Lukas Exl
Publication date: 2019
Fields: Physics
Language: English





We establish a time-stepping learning algorithm and apply it to predict the solution of the partial differential equation of motion in micromagnetism, treated as a dynamical system with the external field as a parameter. The data-driven approach is based on nonlinear model order reduction using kernel methods for unsupervised learning, yielding a predictor for the magnetization dynamics that requires no field evaluations after a precomputation consisting of a data-generation and a training phase. Magnetization states from simulated micromagnetic dynamics associated with different external fields are used as training data to learn a low-dimensional representation in so-called feature space, together with a map that predicts the time evolution in the reduced space. Remarkably, only two degrees of freedom in feature space were enough to describe the nonlinear dynamics of a thin-film element. The approach places no restrictions on the spatial discretization and might be useful for fast determination of the response to an external field.
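The pipeline described above can be sketched in a few lines. The toy "snapshot" data, the RBF kernel width, and the plain ridge regression for the time-step map are all illustrative stand-ins, not the paper's actual implementation: kernel PCA reduces high-dimensional states to a two-dimensional feature space, and a regression model learns the map from one reduced state to the next.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy trajectory: 50-dim states driven by a 1-D latent phase,
# a stand-in for simulated micromagnetic snapshots.
T, D = 200, 50
phase = 0.05 * np.arange(T)
basis = rng.standard_normal((2, D))
X = np.cos(phase)[:, None] * basis[0] + np.sin(phase)[:, None] * basis[1]

def rbf_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel PCA: center the Gram matrix and keep the two leading components.
K = rbf_kernel(X, X)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H
evals, evecs = np.linalg.eigh(Kc)
idx = np.argsort(evals)[::-1][:2]
alphas = evecs[:, idx] / np.sqrt(np.maximum(evals[idx], 1e-12))
Z = Kc @ alphas                     # 2-D feature-space coordinates

# Learn the time-step map z_t -> z_{t+1} by ridge regression in feature space.
A, B = Z[:-1], Z[1:]
lam = 1e-6
W = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ B)

# Roll the learned map forward from the first reduced state.
z = Z[0]
preds = [z]
for _ in range(T - 1):
    z = z @ W
    preds.append(z)
preds = np.stack(preds)
print(float(np.abs(A @ W - B).max()))   # one-step training residual
```

After training, stepping the dynamics costs only a 2x2 matrix multiply per step, which is what makes prediction without further field evaluations cheap.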




Read More

We establish a machine learning model for predicting the magnetization dynamics as a function of the external field described by the Landau-Lifshitz-Gilbert equation, the partial differential equation of motion in micromagnetism. The model allows fast and accurate determination of the response to an external field, which is illustrated on a thin-film standard problem. The data-driven method internally reduces the dimensionality of the problem by nonlinear model reduction for unsupervised learning. This not only makes accurate prediction of the time steps possible but also decisively reduces the complexity of the learning process, in which magnetization states from simulated micromagnetic dynamics associated with different external fields are used as input data. We use a truncated representation of kernel principal components to describe the states between time predictions. The method is capable of handling large training sample sets owing to a low-rank approximation of the kernel matrix and an associated low-rank extension of kernel principal component analysis and kernel ridge regression. The approach shifts computations entirely into a reduced-dimensional setting, reducing the problem dimension from thousands to tens.
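The low-rank kernel machinery mentioned above can be illustrated with a Nyström-style approximation: a small set of landmark columns approximates the full Gram matrix, so kernel ridge regression only requires an m x m solve instead of an n x n one. The synthetic regression target, landmark count, and kernel width below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic regression task standing in for a time-step map on snapshots.
n, d, m = 1000, 5, 50           # samples, input dim, number of landmarks
X = rng.standard_normal((n, d))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(n)

def rbf(A, B, gamma=0.2):
    d2 = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

# Nystroem: approximate K ~ C W^+ C.T using m landmark columns.
landmarks = X[rng.choice(n, m, replace=False)]
C = rbf(X, landmarks)           # n x m
Wm = rbf(landmarks, landmarks)  # m x m
# Feature map Phi with Phi @ Phi.T ~ K
U, s, _ = np.linalg.svd(Wm)
Phi = C @ U / np.sqrt(np.maximum(s, 1e-12))   # n x m

# Kernel ridge regression in the low-rank feature space: an m x m solve
# replaces the n x n solve required by exact KRR.
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(m), Phi.T @ y)
pred = Phi @ w
print(float(np.mean((pred - y) ** 2)))   # training MSE
```

The same low-rank feature map can feed a truncated kernel PCA, which is how the kernel matrix cost stays manageable for large training sets.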
Manifold-learning-based encoders play an important role in nonlinear dimensionality reduction (NLDR) for data exploration. However, existing methods often fail to preserve the geometric, topological, and/or distributional structure of the data. In this paper, we propose a deep manifold learning framework, called deep manifold transformation (DMT), for unsupervised NLDR and embedding learning. DMT enhances deep neural networks by using cross-layer local geometry-preserving (LGP) constraints. The LGP constraints constitute the loss for deep manifold learning and serve as geometric regularizers for NLDR network training. Extensive experiments on synthetic and real-world data demonstrate that DMT networks outperform existing leading manifold-based NLDR methods in preserving the structure of the data.
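A generic local geometry-preserving penalty of the kind used as a manifold-learning regularizer can be written as a mismatch between pairwise distances before and after a layer, restricted to nearest-neighbor pairs. The exact DMT loss differs from this; the function below is only an illustrative stand-in.

```python
import numpy as np

def lgp_loss(X_in, X_out, k=5):
    """Penalize distortion of k-nearest-neighbor distances between a
    layer's input X_in and its output X_out (illustrative LGP-style loss)."""
    def pdist2(A):
        s = (A ** 2).sum(1)
        return s[:, None] + s[None, :] - 2 * A @ A.T
    Din, Dout = pdist2(X_in), pdist2(X_out)
    n = len(X_in)
    # mask: the k nearest neighbors of each point in the input space
    nn = np.argsort(Din, axis=1)[:, 1:k + 1]
    rows = np.repeat(np.arange(n), k)
    mask = np.zeros((n, n), bool)
    mask[rows, nn.ravel()] = True
    din = np.sqrt(np.maximum(Din, 0))
    dout = np.sqrt(np.maximum(Dout, 0))
    return float(((din - dout)[mask] ** 2).mean())

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 10))
Z = X @ rng.standard_normal((10, 2))   # a linear "embedding" for illustration
print(lgp_loss(X, Z))
```

Adding such a term to the training loss of each layer is one concrete way a geometric regularizer can steer an NLDR network toward structure-preserving embeddings.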
Machine learning (ML) entered the field of computational micromagnetics only recently. The main objective of these new approaches is the automation of solutions to parameter-dependent problems in micromagnetism, such as fast response-curve estimation modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Data-driven models for the solution of time- and parameter-dependent partial differential equations require high-dimensional training data structures. ML in this case is by no means a trivial task; it needs algorithmic and mathematical innovation. Our work introduces theoretical and computational concepts for certain kernel- and neural-network-based dimensionality reduction approaches for efficient prediction of solutions via the notion of low-dimensional feature-space integration. We introduce efficient treatment of kernel ridge regression and kernel principal component analysis via low-rank approximation. A second line of work follows neural network (NN) autoencoders as nonlinear, data-dependent dimensionality reduction for the training data, with a focus on an accurate latent-space description suitable for a feature-space integration scheme. We verify and compare the approaches numerically on a NIST standard problem. The low-rank kernel method is fast and surprisingly accurate, while the NN scheme can even exceed this level of accuracy at the expense of significantly higher costs.
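The autoencoder line of work can be sketched with a minimal numpy example: a one-hidden-layer encoder and decoder trained by gradient descent, whose low-dimensional code is the latent variable a feature-space integration scheme would step forward in time. The toy snapshot data, network sizes, and training schedule are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy snapshot matrix: 50-dim states on a 2-D nonlinear manifold,
# a stand-in for micromagnetic training data.
n, D, L = 200, 50, 2
t = rng.uniform(0, 2 * np.pi, (n, 1))
mix = rng.standard_normal((3, D))
X = np.hstack([np.cos(t), np.sin(t), np.cos(2 * t)]) @ mix

# One-hidden-layer autoencoder with tanh nonlinearity; the L-dim code z
# is the latent variable used for feature-space time integration.
Hdim = 16
We, be = 0.1 * rng.standard_normal((D, Hdim)), np.zeros(Hdim)
Wc, bc = 0.1 * rng.standard_normal((Hdim, L)), np.zeros(L)
Wd, bd = 0.1 * rng.standard_normal((L, Hdim)), np.zeros(Hdim)
Wo, bo = 0.1 * rng.standard_normal((Hdim, D)), np.zeros(D)

lr = 1e-2
for step in range(500):
    h1 = np.tanh(X @ We + be)
    z = h1 @ Wc + bc               # latent code
    h2 = np.tanh(z @ Wd + bd)
    Xr = h2 @ Wo + bo              # reconstruction
    err = Xr - X
    # Backpropagation of the squared reconstruction error
    gWo, gbo = h2.T @ err / n, err.mean(0)
    dh2 = (err @ Wo.T) * (1 - h2 ** 2)
    gWd, gbd = z.T @ dh2 / n, dh2.mean(0)
    dz = dh2 @ Wd.T
    gWc, gbc = h1.T @ dz / n, dz.mean(0)
    dh1 = (dz @ Wc.T) * (1 - h1 ** 2)
    gWe, gbe = X.T @ dh1 / n, dh1.mean(0)
    for P, g in [(We, gWe), (Wc, gWc), (Wd, gWd), (Wo, gWo),
                 (be, gbe), (bc, gbc), (bd, gbd), (bo, gbo)]:
        P -= lr * g
```

Once trained, a time-stepping model in the latent variable `z` only needs the decoder to map predictions back to full magnetization states, which is where the higher cost but potentially higher accuracy of the NN route comes from.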
We consider the problem of implementing Stimulated Raman Adiabatic Passage (STIRAP) processes in degenerate systems, with a view to steering the system wave function from an arbitrary initial superposition to an arbitrary target superposition. We examine the case of an $N$-level atomic system consisting of $N-1$ ground states coupled to a common excited state by laser pulses. We analyze the general case of initial and final superpositions belonging to the same manifold of states, and we also cover the case in which they are non-orthogonal. We demonstrate that, for a given initial and target superposition, it is always possible to choose the laser pulses so that, in a transformed basis, the system reduces to an effective three-level $\Lambda$ system in which standard STIRAP processes can be implemented. Our treatment leads to a simple strategy, with minimal computational complexity, for determining the laser pulse shapes required for the desired adiabatic steering.
We show that unsupervised machine learning techniques are a valuable tool for both visualizing and computationally accelerating the estimation of galaxy physical properties from photometric data. As a proof of concept, we use self-organizing maps (SOMs) to visualize a spectral energy distribution (SED) model library in the observed photometry space. The resulting visual maps allow a better understanding of how the observed data map to physical properties and help optimize the model libraries for a given set of observational data. Next, the SOMs are used to estimate the physical parameters of 14,000 z~1 galaxies in the COSMOS field, and the estimates are found to be in agreement with those measured by SED fitting. However, the SOM method is able to estimate the full probability distribution function for each galaxy up to about a million times faster than direct model fitting. We conclude by discussing how this speed-up, together with learning how the galaxy data manifold maps to physical-parameter space and visualizing this mapping in lower dimensions, helps overcome other challenges in galaxy formation and evolution.
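The SOM-based speed-up comes from replacing a per-galaxy model fit with a single nearest-unit lookup on a pre-trained map. The sketch below trains a small SOM on a toy SED library in which one hidden physical parameter generates the photometry; the library, grid size, and decay schedules are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy SED library: 5-band "photometry" generated from one physical
# parameter (a stand-in for, e.g., a stellar-mass proxy), plus noise.
n_lib, bands = 2000, 5
param = rng.uniform(0, 1, n_lib)
template = np.linspace(1, 2, bands)
X = param[:, None] * template + 0.05 * rng.standard_normal((n_lib, bands))

# Self-organizing map: a 10x10 grid of prototype vectors trained with a
# shrinking Gaussian neighborhood and decaying learning rate.
gw, gh = 10, 10
W = rng.uniform(0, 2, (gw * gh, bands))
coords = np.array([(i, j) for i in range(gw) for j in range(gh)], float)

n_iter = 4000
for it in range(n_iter):
    x = X[rng.integers(n_lib)]
    bmu = np.argmin(((W - x) ** 2).sum(1))      # best-matching unit
    frac = it / n_iter
    sigma = 3.0 * (1 - frac) + 0.5              # neighborhood radius decays
    lr = 0.5 * (1 - frac) + 0.01
    d2 = ((coords - coords[bmu]) ** 2).sum(1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)

# Attach the mean library parameter to each unit, then "fit" a new galaxy
# by a single nearest-unit lookup instead of a full model-fitting run.
bmus = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(-1), axis=1)
unit_param = np.array([param[bmus == k].mean() if (bmus == k).any() else np.nan
                       for k in range(gw * gh)])
new = 0.7 * template                            # noiseless query SED
k = np.argmin(((W - new) ** 2).sum(1))
print(unit_param[k])                            # fast parameter estimate
```

Because the lookup cost is independent of the library size, the same map can be reused across an entire survey, which is the source of the quoted acceleration over direct fitting.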
