
Prediction of magnetization dynamics in a reduced dimensional feature space setting utilizing a low-rank kernel method

Added by Lukas Exl
Publication date: 2020
Field: Physics
Language: English

We establish a machine learning model for predicting the magnetization dynamics as a function of the external field, as described by the Landau-Lifshitz-Gilbert equation, the partial differential equation of motion in micromagnetism. The model allows fast and accurate determination of the response to an external field, which we illustrate on a thin-film standard problem. The data-driven method internally reduces the dimensionality of the problem by means of nonlinear model reduction for unsupervised learning. This not only makes accurate prediction of the time steps possible, but also decisively reduces the complexity of the learning process, in which magnetization states from simulated micromagnetic dynamics associated with different external fields serve as input data. We use a truncated representation of kernel principal components to describe the states between time predictions. The method can handle large training sample sets owing to a low-rank approximation of the kernel matrix and associated low-rank extensions of kernel principal component analysis and kernel ridge regression. The approach shifts all computations into a reduced-dimensional setting, breaking the problem dimension down from thousands to tens.


Efficient, physically-inspired descriptors of the structure and composition of molecules and materials play a key role in the application of machine-learning techniques to atomistic simulations. The proliferation of approaches, as well as the fact that each choice of features can lead to very different behavior depending on how they are used, e.g. by introducing non-linear kernels and non-Euclidean metrics to manipulate them, makes it difficult to objectively compare different methods and to address fundamental questions about how one feature space is related to another. In this work we introduce a framework to compare different sets of descriptors, and different ways of transforming them by means of metrics and kernels, in terms of the structure of the feature space that they induce. We define diagnostic tools to determine whether alternative feature spaces contain equivalent amounts of information, and whether the common information is substantially distorted when going from one feature space to another. In particular, we compare representations that are built in terms of n-body correlations of the atom density, quantitatively assessing the information loss associated with the use of low-order features. We also investigate the impact of different choices of basis functions and hyperparameters of the widely used SOAP and Behler-Parrinello features, and examine how the use of non-linear kernels, and of a Wasserstein-type metric, changes the structure of the feature space in comparison to a simpler linear feature space.
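One simple proxy for the kind of diagnostic this abstract describes is to ask how well one feature set can be linearly reconstructed from another: a small residual suggests the second space contains the information in the first. The sketch below (synthetic stand-in features, not the paper's actual descriptors or its exact measures) implements that idea with ridge regression:

```python
import numpy as np

def reconstruction_error(FA, FB, lam=1e-6):
    """Relative error of linearly reconstructing features FA from FB.
    Near 0 means FB (linearly) contains the information in FA."""
    FA = (FA - FA.mean(0)) / FA.std(0)    # standardize each feature
    FB = (FB - FB.mean(0)) / FB.std(0)
    W = np.linalg.solve(FB.T @ FB + lam * np.eye(FB.shape[1]), FB.T @ FA)
    resid = FA - FB @ W
    return float(np.linalg.norm(resid) / np.linalg.norm(FA))

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 5))         # stand-in raw descriptors
FA = X[:, :3]                             # a "low-order" feature subset
FB = np.hstack([X, X ** 2])               # a richer feature set

err_ab = reconstruction_error(FA, FB)     # small: FB recovers FA
err_ba = reconstruction_error(FB, FA)     # large: FA misses information
```

The asymmetry between the two errors is what reveals that one space holds strictly more information than the other.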
Mi-Song Dupuy, 2017
In this article, a numerical analysis of the projector augmented-wave (PAW) method is presented, restricted to the one-dimensional case with Dirac potentials modeling the nuclei in a periodic setting. The PAW method is widely used in ab initio electronic calculations, in conjunction with pseudopotentials. It consists of replacing the original electronic Hamiltonian $H$ by a pseudo-Hamiltonian $H^{PAW}$ via the PAW transformation acting in balls around each nucleus. Formally, the new eigenvalue problem has the same eigenvalues as $H$ and smoother eigenfunctions. In practice, the pseudo-Hamiltonian $H^{PAW}$ has to be truncated, introducing an error that is rarely analyzed. In this paper, error estimates on the lowest PAW eigenvalue are proved for the one-dimensional periodic Schrödinger operator with double Dirac potentials.
It is a natural expectation that only the major peaks, not all of them, make an important contribution to the characterization of an XRD pattern. We developed a scheme that can identify which peaks are relevant, and to what extent, by using an auto-encoder to construct a feature space for the XRD peak patterns. Each XRD pattern is projected onto a single point in the two-dimensional feature space constructed by this method. If the point shifts significantly when a peak of interest is masked, we can say that the peak is relevant for the characterization represented by that point in the space. In this way, the relevancy can be formulated quantitatively. Using this scheme, we in fact found a peak with significant intensity but low relevancy for the characterization of the structure. The peak is not easily explained from a physical viewpoint, e.g. as a higher-order reflection from the same plane index, making it a heuristic finding enabled by machine learning.
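The masking idea can be sketched in a few lines: project each pattern into a learned 2-D space, zero out one peak, re-project, and take the mean displacement as that peak's relevancy. In this toy version (synthetic patterns; PCA as a linear stand-in for the paper's auto-encoder), an intense-but-constant peak scores lower than a weaker, strongly varying one:

```python
import numpy as np

rng = np.random.default_rng(2)
n_patterns, n_bins = 200, 50      # toy "XRD patterns": intensity per 2-theta bin
base = rng.random(n_bins)
patterns = base + 0.3 * rng.standard_normal((n_patterns, n_bins))
patterns[:, 10] += 3.0 * rng.random(n_patterns)   # weaker but strongly varying peak
patterns[:, 30] += 2.0                            # intense but constant peak

# 2-D projection (PCA here, as a linear stand-in for the auto-encoder)
mu = patterns.mean(0)
_, _, Vt = np.linalg.svd(patterns - mu, full_matrices=False)
project = lambda P: (P - mu) @ Vt[:2].T

def relevancy(peak):
    # mean shift of the projected points when one peak is masked (zeroed)
    masked = patterns.copy()
    masked[:, peak] = 0.0
    return float(np.linalg.norm(project(patterns) - project(masked), axis=1).mean())
```

Here `relevancy(10)` exceeds `relevancy(30)`, mirroring the abstract's finding that peak intensity alone does not determine relevancy.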
Machine learning (ML) has entered the field of computational micromagnetics only recently. The main objective of these new approaches is the automation of solutions to parameter-dependent problems in micromagnetism, such as fast response-curve estimation modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Data-driven models for the solution of time- and parameter-dependent partial differential equations require high-dimensional training data structures. ML in this case is by no means a straightforward task; it needs algorithmic and mathematical innovation. Our work introduces theoretical and computational concepts for certain kernel- and neural-network-based dimensionality reduction approaches for efficient prediction of solutions via the notion of low-dimensional feature-space integration. We introduce an efficient treatment of kernel ridge regression and kernel principal component analysis via low-rank approximation. A second line of work follows neural-network (NN) autoencoders as a nonlinear, data-dependent dimensionality reduction for the training data, with a focus on an accurate latent-space description suitable for a feature-space integration scheme. We verify and compare the methods numerically by means of a NIST standard problem. The low-rank kernel approach is fast and surprisingly accurate, while the NN scheme can exceed even this level of accuracy, at the expense of significantly higher cost.
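The feature-space integration idea mentioned here, i.e. stepping the dynamics forward entirely in the reduced coordinates, can be illustrated with a deliberately simple linear toy (PCA and linear ridge regression stand in for the kernel/autoencoder reductions and regressors; the "dynamics" are synthetic, not an LLG simulation):

```python
import numpy as np

rng = np.random.default_rng(3)
T, D, r = 400, 100, 3             # time steps, full state dim, latent dim

# toy trajectories: low-rank linear dynamics embedded in D dimensions
A = rng.standard_normal((D, r))
M = np.array([[0.99, -0.10, 0.0],
              [0.10,  0.99, 0.0],
              [0.0,   0.0,  0.95]])   # stable latent one-step map
z = np.zeros((T, r)); z[0] = rng.standard_normal(r)
for t in range(T - 1):
    z[t + 1] = M @ z[t]
states = z @ A.T                  # full-dimensional snapshots (T, D)

# 1) reduce dimension (PCA stand-in for kernel PCA / autoencoder)
_, _, Vt = np.linalg.svd(states, full_matrices=False)
lat = states @ Vt[:r].T           # latent coordinates (T, r)

# 2) learn the one-step map in latent space by ridge regression
X, Y = lat[:-1], lat[1:]
W = np.linalg.solve(X.T @ X + 1e-8 * np.eye(r), X.T @ Y)

# 3) roll the dynamics forward entirely in the r-dimensional space
cur, traj = lat[0], [lat[0]]
for _ in range(T - 1):
    cur = cur @ W
    traj.append(cur)
pred_states = np.array(traj) @ Vt[:r]   # lift back to full dimension
```

All time stepping happens in r = 3 coordinates instead of D = 100, which is exactly the "thousands to tens" complexity shift claimed for the real problem.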
Wenjia Wang, Yi-Hui Zhou, 2020
In multivariate regression, also referred to as multi-task learning in machine learning, the goal is to recover a vector-valued function from noisy observations. The vector-valued function is often assumed to be of low rank. Although multivariate linear regression has been studied extensively in the literature, a theoretical study of multivariate nonlinear regression is lacking. In this paper, we study reduced rank multivariate kernel ridge regression, proposed by Mukherjee and Zhu (2011). We prove the consistency of the function predictor and provide the convergence rate. An algorithm based on nuclear norm relaxation is proposed. A few numerical examples are presented to show that it achieves a smaller mean squared prediction error than elementwise univariate kernel ridge regression.
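A reduced rank multivariate kernel ridge regression can be sketched as follows (synthetic data; a hard SVD truncation of the fitted values is used here as a simple stand-in for the paper's nuclear-norm relaxation):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
n, d, q, r = 200, 3, 6, 2         # samples, inputs, outputs, target rank
X = rng.standard_normal((n, d))
F = np.column_stack([np.sin(X[:, 0]), np.cos(X[:, 1])])  # 2 latent tasks
B = rng.standard_normal((2, q))
Y = F @ B + 0.05 * rng.standard_normal((n, q))           # rank-2 targets

# ordinary multivariate KRR: one shared kernel, q output columns
K = rbf(X, X)
lam = 1e-2
A = np.linalg.solve(K + lam * np.eye(n), Y)
fit = K @ A

# reduced rank step: truncate the fitted values to rank r via SVD
U, s, Vt = np.linalg.svd(fit, full_matrices=False)
fit_r = U[:, :r] * s[:r] @ Vt[:r]
```

Because the true targets share only two latent functions, the rank-2 fit retains essentially all of the signal while discarding components that mostly fit noise.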