
Machine learning methods for the prediction of micromagnetic magnetization dynamics

Added by Lukas Exl
Publication date: 2021
Field: Physics
Language: English





Machine learning (ML) entered the field of computational micromagnetics only recently. The main objective of these new approaches is the automation of solutions to parameter-dependent problems in micromagnetism, such as fast response-curve estimation governed by the Landau-Lifshitz-Gilbert (LLG) equation. Data-driven models for the solution of time- and parameter-dependent partial differential equations require high-dimensional training data structures. ML in this setting is by no means a straightforward task; it requires algorithmic and mathematical innovation. Our work introduces theoretical and computational concepts for certain kernel- and neural-network-based dimensionality reduction approaches that efficiently predict solutions via low-dimensional feature-space integration. We introduce an efficient treatment of kernel ridge regression and kernel principal component analysis via low-rank approximation. A second line of work uses neural network (NN) autoencoders as a nonlinear, data-dependent dimensionality reduction of the training data, with a focus on an accurate latent-space description suitable for a feature-space integration scheme. We verify and compare both approaches numerically on a NIST standard problem. The low-rank kernel approach is fast and surprisingly accurate, while the NN scheme can exceed this level of accuracy at the expense of significantly higher computational cost.
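As an illustration of the low-rank kernel idea mentioned in the abstract, the sketch below fits a kernel ridge regression through a Nyström-type landmark approximation. It is a minimal, generic NumPy example; the toy data, kernel width, regularization, and rank are assumptions for demonstration and do not reflect the paper's actual micromagnetic setup.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def nystroem_krr_fit(X, y, m=50, lam=1e-6, gamma=1.0, seed=0):
    # Low-rank kernel ridge regression: approximate the full n x n kernel
    # matrix via m landmark columns, so fitting only needs an m x m solve.
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    Xm = X[idx]
    K_nm = rbf_kernel(X, Xm, gamma)
    K_mm = rbf_kernel(Xm, Xm, gamma)
    # Reduced normal equations of the regularized least-squares problem.
    A = K_nm.T @ K_nm + lam * K_mm + 1e-12 * np.eye(m)
    alpha = np.linalg.solve(A, K_nm.T @ y)
    return Xm, alpha

def nystroem_krr_predict(Xm, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, Xm, gamma) @ alpha

# Toy usage: learn a nonlinear scalar response curve from 1D inputs.
X = np.linspace(-3.0, 3.0, 500)[:, None]
y = np.tanh(2.0 * X[:, 0]) + 0.01 * np.random.default_rng(1).standard_normal(500)
Xm, alpha = nystroem_krr_fit(X, y, m=40, lam=1e-4, gamma=0.5)
print(nystroem_krr_predict(Xm, alpha, np.array([[0.5]]), gamma=0.5))

The reduced problem involves only an m x m linear system instead of the full n x n kernel matrix, which is what makes low-rank kernel methods fast on large training sets.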



Related research

Material scientists are increasingly adopting machine learning (ML) for potentially important decisions such as the discovery, development, optimization, synthesis, and characterization of materials. However, despite ML's impressive performance in commercial applications, several unique challenges arise when applying ML to materials science. In this context, the contributions of this work are twofold. First, we identify common pitfalls of existing ML techniques when learning from underrepresented/imbalanced material data. Specifically, we show that with imbalanced data, standard methods for assessing the quality of ML models break down and lead to misleading conclusions. Furthermore, we find that the model's own confidence score cannot be trusted and that model-introspection methods (using simpler models) do not help, as they result in a loss of predictive performance (a reliability-explainability trade-off). Second, to overcome these challenges, we propose a general-purpose explainable and reliable machine-learning framework. Specifically, we propose a novel pipeline that employs an ensemble of simpler models to reliably predict material properties. We also propose a transfer-learning technique and show that the performance loss due to model simplicity can be overcome by exploiting correlations among different material properties. A new evaluation metric and a trust score are also proposed to better quantify the confidence in the predictions. To improve interpretability, we add a rationale-generator component to the framework that provides both model-level and decision-level explanations. Finally, we demonstrate the versatility of our technique on two applications: 1) predicting properties of crystalline compounds, and 2) identifying novel, potentially stable solar-cell materials.
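The pipeline above is specific to that paper, but the basic idea of combining simple models into an ensemble and reporting a confidence proxy can be sketched generically. The example below uses synthetic data and the spread across ensemble members as a stand-in for the proposed trust score; it is an illustrative assumption, not the authors' framework.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_simple_ensemble(X, y, n_members=25, max_depth=3, seed=0):
    # Bag of shallow (hence easy to inspect) trees fitted on bootstrap resamples.
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), size=len(X))
        members.append(DecisionTreeRegressor(max_depth=max_depth).fit(X[idx], y[idx]))
    return members

def predict_with_spread(members, X_new):
    # Mean prediction plus a simple confidence proxy: small spread across
    # members means the ensemble agrees, large spread flags shaky predictions.
    preds = np.stack([m.predict(X_new) for m in members])
    return preds.mean(axis=0), preds.std(axis=0)

# Toy usage with a synthetic "material property" regression task.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(300, 4))
y = X[:, 0] ** 2 - 0.5 * X[:, 1] + 0.05 * rng.standard_normal(300)
members = fit_simple_ensemble(X, y)
mean, spread = predict_with_spread(members, X[:5])
print(mean, spread)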
This article presents a general framework for recovering missing dynamical systems using available data and machine learning techniques. The proposed framework reformulates the prediction problem as a supervised learning problem: it approximates a map that takes the memory of the resolved and identifiable unresolved variables to the missing components in the resolved dynamics. We demonstrate the effectiveness of the proposed framework with a theoretical guarantee of path-wise convergence of the resolved variables up to finite time and with numerical tests on prototypical models from various scientific domains. These include the 57-mode barotropic stress models with multiscale interactions that mimic the blocked and unblocked patterns observed in the atmosphere, the nonlinear Schrödinger equation, which has found many applications in physics such as optics and Bose-Einstein condensates, and the Kuramoto-Sivashinsky equation, whose spatiotemporal chaotic pattern formation models trapped-ion modes in plasma and phase dynamics in reaction-diffusion systems. While many machine learning techniques can be used to validate the proposed framework, we find that recurrent neural networks outperform kernel regression methods in recovering the trajectory of the resolved components and the equilibrium one-point and two-point statistics. This strong performance suggests that recurrent neural networks are an effective tool for recovering missing dynamics that involve the approximation of high-dimensional functions.
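The reformulation described above, learning a map from the memory of the resolved variables to the missing components, can be illustrated with a toy delay-embedding closure. The signal, memory length, and kernel ridge regressor below are assumptions chosen for demonstration; the paper itself reports that recurrent neural networks outperform kernel regression on this task.

import numpy as np
from sklearn.kernel_ridge import KernelRidge

def delay_embed(x, memory):
    # Supervised pairs: a window of past resolved states -> the next increment,
    # which stands in for the effect of the unresolved (missing) dynamics.
    windows = np.stack([x[i:i + memory] for i in range(len(x) - memory)])
    increments = x[memory:] - x[memory - 1:-1]
    return windows, increments

# Toy resolved trajectory: we only observe one coordinate of a damped
# oscillator and learn a closure for its evolution from its own memory.
t = np.linspace(0.0, 40.0, 4000)
x_resolved = np.exp(-0.05 * t) * np.sin(t)

X, y = delay_embed(x_resolved, memory=10)
model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=5.0).fit(X[:3000], y[:3000])

# Autonomous rollout of the learned map from the last training window.
window = list(X[3000])
forecast = []
for _ in range(200):
    dx = model.predict(np.array(window)[None, :])[0]
    window = window[1:] + [window[-1] + dx]
    forecast.append(window[-1])
print(forecast[:5])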
Qi Zhao, Zheng Zhao, Xiaoya Fan (2020)
Secondary structure plays an important role in determining the function of non-coding RNAs; hence, identifying RNA secondary structures is of great value to research. Computational prediction is a mainstream approach for predicting RNA secondary structure. Unfortunately, even though new methods have been proposed over the past 40 years, the performance of computational prediction methods has stagnated in the last decade. Recently, with the increasing availability of RNA structure data, new methods based on machine-learning technologies, especially deep learning, have alleviated this issue. In this review, we provide a comprehensive overview of RNA secondary structure prediction methods based on machine-learning technologies, together with a tabulated summary of the most important methods in the field. Current open issues in RNA secondary structure prediction and future trends are also discussed.
Atomistic or ab initio molecular dynamics simulations are widely used to predict thermodynamics and kinetics and relate them to molecular structure. A common approach to go beyond the time- and length-scales accessible with such computationally expensive simulations is the definition of coarse-grained molecular models. Existing coarse-graining approaches define an effective interaction potential to match defined properties of high-resolution models or experimental data. In this paper, we reformulate coarse-graining as a supervised machine learning problem. We use statistical learning theory to decompose the coarse-graining error and cross-validation to select and compare the performance of different models. We introduce CGnets, a deep learning approach that learns coarse-grained free energy functions and can be trained by a force-matching scheme. CGnets maintain all physically relevant invariances and allow one to incorporate prior physics knowledge to avoid sampling of unphysical structures. We show that CGnets can capture all-atom explicit-solvent free energy surfaces with models using only a few coarse-grained beads and no solvent, while classical coarse-graining methods fail to capture crucial features of the free energy surface. Thus, CGnets are able to capture multi-body terms that emerge from the dimensionality reduction.
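A minimal sketch of the force-matching idea behind a learned coarse-grained free energy is given below, assuming PyTorch and synthetic coarse-grained coordinates and reference forces. It omits the invariant featurization and prior energy terms of the actual CGnet architecture and only shows how a learned energy is differentiated to obtain forces that are matched to reference forces.

import torch

# A small network U(z) for the coarse-grained free energy; its negative
# gradient is matched to reference forces mapped from all-atom simulations.
# Coordinates and forces below are random stand-ins, not real MD data.
energy_net = torch.nn.Sequential(
    torch.nn.Linear(6, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
optimizer = torch.optim.Adam(energy_net.parameters(), lr=1e-3)

cg_coords = torch.randn(1024, 6)    # coarse-grained configurations (toy)
ref_forces = torch.randn(1024, 6)   # mapped all-atom forces (toy)

for step in range(200):
    z = cg_coords.clone().requires_grad_(True)
    energy = energy_net(z).sum()
    # Predicted coarse-grained forces: minus the gradient of the learned energy.
    pred_forces = -torch.autograd.grad(energy, z, create_graph=True)[0]
    loss = ((pred_forces - ref_forces) ** 2).mean()   # force-matching loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(loss))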
We establish a time-stepping learning algorithm and apply it to predict the solution of the partial differential equation of motion in micromagnetism as a dynamical system that depends on the external field as a parameter. The data-driven approach is based on nonlinear model order reduction using kernel methods for unsupervised learning, yielding a predictor for the magnetization dynamics that needs no further field evaluations once the precomputation of data generation and training is complete. Magnetization states from simulated micromagnetic dynamics associated with different external fields are used as training data to learn a low-dimensional representation in a so-called feature space, together with a map that predicts the time evolution in the reduced space. Remarkably, only two degrees of freedom in feature space were enough to describe the nonlinear dynamics of a thin-film element. The approach places no restrictions on the spatial discretization and might be useful for fast determination of the response to an external field.
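To make the feature-space time-stepping idea concrete, the sketch below projects synthetic snapshot data to a two-dimensional latent space with kernel PCA, learns a one-step map there, and rolls it forward before decoding. The spiral data, kernel parameters, and scikit-learn components are illustrative assumptions and not the paper's implementation.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.kernel_ridge import KernelRidge

# Stand-in "snapshots": a slowly drifting loop, playing the role of simulated
# magnetization states sampled along a trajectory of the dynamics.
t = np.linspace(0.0, 6.0 * np.pi, 600)
snapshots = np.stack([np.cos(t), np.sin(t), 0.05 * t], axis=1)

# Nonlinear dimensionality reduction to a two-dimensional feature space.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5,
                 fit_inverse_transform=True)
latent = kpca.fit_transform(snapshots)

# Learn the one-step map z_{k+1} = g(z_k) in the reduced space.
stepper = KernelRidge(kernel="rbf", alpha=1e-8, gamma=1.0)
stepper.fit(latent[:-1], latent[1:])

# Roll the learned map forward autonomously and decode back to snapshot space.
z = latent[0]
trajectory = [kpca.inverse_transform(z[None, :])[0]]
for _ in range(len(t) - 1):
    z = stepper.predict(z[None, :])[0]
    trajectory.append(kpca.inverse_transform(z[None, :])[0])
print(np.array(trajectory).shape)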