
On-the-fly Autonomous Control of Neutron Diffraction via Physics-Informed Bayesian Active Learning

Posted by: Austin McDannald, Ph.D.
Publication date: 2021
Research field: Physics
Paper language: English





Neutron scattering is a unique and versatile characterization technique for probing the magnetic structure and dynamics of materials. However, the number of neutron scattering instruments in the world is limited, and instruments at these facilities are perennially oversubscribed. We demonstrate a significant reduction in the experimental time required for neutron diffraction experiments by implementing autonomous navigation of the measurement parameter space through machine learning. Prior scientific knowledge and Bayesian active learning are used to dynamically steer the sequence of measurements. We developed the autonomous neutron diffraction explorer (ANDiE) and used it to determine the magnetic order of MnO and Fe1.09Te. ANDiE can determine the Néel temperature of these materials with a 5-fold enhancement in efficiency and correctly identify the transition dynamics via physics-informed Bayesian inference. ANDiE's active learning approach is broadly applicable to a variety of neutron-based experiments and can open the door to neutron scattering as a tool for accelerated materials discovery.
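The loop the abstract describes — a physics-informed model of the order parameter, Bayesian inference over the transition temperature, and active selection of the next measurement — can be sketched as follows. This is a minimal illustration, not the authors' ANDiE code: the power-law order-parameter form, the grid posterior, and every constant below are assumptions.

```python
import numpy as np

# Hedged sketch of physics-informed Bayesian active learning for locating
# a Néel transition. All names and numbers here are illustrative.
rng = np.random.default_rng(0)
TRUE_TN, BETA, NOISE = 120.0, 0.35, 0.02      # hidden "material" parameters

def intensity(T, tn):
    # Magnetic Bragg peak: I ~ (1 - T/T_N)^(2*beta) below T_N, zero above.
    return np.clip(1.0 - np.asarray(T) / tn, 0.0, None) ** (2.0 * BETA)

def measure(T):                                # simulated noisy diffraction scan
    return intensity(T, TRUE_TN) + rng.normal(0.0, NOISE)

tn_grid = np.linspace(60.0, 180.0, 241)        # flat prior over candidate T_N
temps, obs = [80.0], [float(measure(80.0))]
for _ in range(15):
    # Bayesian update: Gaussian likelihood of all data under each candidate T_N.
    pred = intensity(np.asarray(temps)[:, None], tn_grid[None, :])
    ll = -0.5 * np.sum((np.asarray(obs)[:, None] - pred) ** 2, axis=0) / NOISE**2
    post = np.exp(ll - ll.max()); post /= post.sum()

    # Active step: measure where candidate models disagree the most
    # (largest predictive variance under the current T_N posterior).
    cand = np.linspace(60.0, 180.0, 121)
    pc = intensity(cand[:, None], tn_grid[None, :])
    var = np.sum(post * pc**2, axis=1) - np.sum(post * pc, axis=1) ** 2
    t_next = cand[np.argmax(var)]
    temps.append(float(t_next)); obs.append(float(measure(t_next)))

tn_map = float(tn_grid[np.argmax(post)])
print(round(tn_map, 1))                        # posterior mode lands near 120
```

The variance-driven acquisition naturally clusters measurements around the transition, which is where the efficiency gain over a uniform temperature sweep comes from.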




Read also

Active learning, the field of machine learning (ML) dedicated to optimal experiment design, has played a part in science as far back as the 18th century, when Laplace used it to guide his discovery of celestial mechanics [1]. In this work we focus a closed-loop, active-learning-driven autonomous system on another major challenge: the discovery of advanced materials across the exceedingly complex synthesis-process-structure-property landscape. We demonstrate an autonomous research methodology (i.e., autonomous hypothesis definition and evaluation) that can place complex, advanced materials within reach, allowing scientists to fail smarter, learn faster, and spend fewer resources in their studies, while simultaneously improving trust in scientific results and machine learning tools. Additionally, this robot science enables science-over-the-network, reducing the economic impact of scientists being physically separated from their labs. We used the real-time closed-loop autonomous system for materials exploration and optimization (CAMEO) at a synchrotron beamline to accelerate the fundamentally interconnected tasks of rapid phase mapping and property optimization, with each cycle taking seconds to minutes, resulting in the discovery of a novel epitaxial nanocomposite phase-change memory material.
Investment in brighter sources and in larger, faster detectors has accelerated the speed of data acquisition at national user facilities. The accelerated data acquisition offers many opportunities for the discovery of new materials, but it also presents a daunting challenge. The rate of data acquisition far exceeds the current speed of data quality assessment, resulting in less-than-optimal data and data coverage, which in extreme cases forces recollection of data. Herein, we show how this challenge can be addressed by developing an approach that makes routine data assessment automatic and instantaneous. By extracting and visualizing customized attributes in real time, data quality and coverage, as well as other scientifically relevant information contained in large datasets, are highlighted. Deploying such an approach not only improves the quality of data but also helps optimize the usage of expensive characterization resources by prioritizing the measurements of highest scientific impact. We anticipate that our approach will become the starting point for a sophisticated decision tree that optimizes data quality and maximizes scientific content in real time through automation. With these efforts to integrate more automation into data collection and analysis, we can truly take advantage of the accelerating speed of data acquisition.
Both experimental and computational methods for the exploration of the structure, functionality, and properties of materials often necessitate searches across broad parameter spaces to discover optimal experimental conditions and regions of interest in the image space or the parameter space of computational models. A direct grid search of the parameter space tends to be extremely time-consuming, which has led to the development of strategies that balance exploration of unknown parameter spaces with exploitation toward required performance metrics. However, classical Bayesian optimization strategies based on the Gaussian process (GP) do not readily allow for the incorporation of known physical behaviors or past knowledge. Here we explore a hybrid optimization/exploration algorithm created by augmenting the standard GP with a structured probabilistic model of the expected system behavior. This approach balances the flexibility of the non-parametric GP approach with the rigid structure of physical knowledge encoded in the parametric model. The fully Bayesian treatment of the latter allows additional control over the optimization via the selection of priors for the model parameters. The method is demonstrated on a noisy version of a classical objective function used to evaluate optimization algorithms and is further extended to physical lattice models. This methodology is expected to be universally suitable for injecting prior knowledge, in the form of physical models and past data, into the Bayesian optimization framework.
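A toy version of such a structured-GP hybrid might let a parametric "physics" model supply the GP mean while the GP models only the residual. Everything below is an illustrative assumption, not the paper's implementation: the RBF kernel, the quadratic physics mean, the toy objective, and the lower-confidence-bound acquisition are all stand-ins.

```python
import numpy as np

# Sketch: Bayesian optimization where a parametric physics model is the
# GP prior mean and the GP only has to learn the departures from it.
rng = np.random.default_rng(1)

def objective(x):                       # noisy toy target to minimize
    return (x - 0.6) ** 2 + 0.05 * np.sin(25 * x) + rng.normal(0, 0.01, np.shape(x))

def physics_mean(x):                    # rigid prior model: known quadratic trend
    return (x - 0.6) ** 2

def rbf(a, b, ls=0.1):                  # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

X = [float(v) for v in rng.uniform(0, 1, 3)]
y = [float(objective(x)) for x in X]
grid = np.linspace(0, 1, 201)
for _ in range(10):
    Xa, ya = np.asarray(X), np.asarray(y)
    K = rbf(Xa, Xa) + 1e-4 * np.eye(len(Xa))
    Ks = rbf(grid, Xa)
    resid = ya - physics_mean(Xa)       # GP fits only the residual
    mu = physics_mean(grid) + Ks @ np.linalg.solve(K, resid)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    lcb = mu - 2.0 * np.sqrt(np.clip(var, 0.0, None))   # lower confidence bound
    x_next = float(grid[np.argmin(lcb)])
    X.append(x_next); y.append(float(objective(x_next)))

best = X[int(np.argmin(y))]
print(round(best, 2))                   # best sampled x for this toy problem
```

Because the physics mean already captures the coarse trend, the GP's uncertainty budget is spent on the fine structure, which is the mechanism by which priors on the parametric model steer the search.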
Bayesian estimation approaches, which can combine the information of experimental data from different likelihood functions to achieve high precision, have been widely used in phase estimation via the introduction of a controllable auxiliary phase. Here, we present a non-adaptive Bayesian phase estimation (BPE) algorithm with an ingenious update rule for the auxiliary phase designed via active learning. Unlike adaptive BPE algorithms, the auxiliary phase in our algorithm is determined by a pre-established update rule with simple statistical analysis of a small batch of data, instead of complex calculations in every update trial. As the number of measurements for the same number of Bayesian updates is significantly reduced via active learning, our algorithm works as efficiently as adaptive ones while sharing the advantages (such as a wide dynamic range and perfect noise robustness) of non-adaptive ones. Our algorithm holds promise for applications in various practical quantum sensors such as atomic clocks and quantum magnetometers.
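The Bayesian-update machinery with an auxiliary phase can be sketched in a few lines. This is an illustration only, not the paper's algorithm: the likelihood model and the fixed two-setting cycle of auxiliary phases below are assumptions standing in for the actively-learned update rule the abstract describes.

```python
import numpy as np

# Grid-based Bayesian phase estimation with a pre-set (non-adaptive)
# schedule of auxiliary phases theta; detector outcomes are simulated with
# P(click = 0 | phi, theta) = (1 + cos(phi + theta)) / 2.
rng = np.random.default_rng(3)
TRUE_PHI = 1.1                                   # hidden phase to estimate
phi_grid = np.linspace(-np.pi, np.pi, 721)
post = np.ones_like(phi_grid) / phi_grid.size    # flat prior over phi

# Fixed cycle of auxiliary phases; the 0 / pi/2 pair breaks the +/- phi
# ambiguity of a single interference fringe.
thetas = np.tile([0.0, np.pi / 2], 100)
for theta in thetas:
    p0 = 0.5 * (1 + np.cos(TRUE_PHI + theta))
    outcome = rng.random() < p0                  # simulated detector click
    like = 0.5 * (1 + np.cos(phi_grid + theta))
    post *= like if outcome else (1 - like)      # Bayesian update
    post /= post.sum()

est = float(phi_grid[np.argmax(post)])
print(round(est, 2))                             # concentrates near TRUE_PHI
```

The point of the paper's active-learning design is choosing that theta schedule well; the simple alternating rule here is merely enough to make the posterior identifiable.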
The learning rate (LR) schedule is one of the most important hyperparameters needing careful tuning when training DNNs. However, it is also one of the least automated parts of machine learning systems and usually costs significant manual effort and computing. Though there are pre-defined LR schedules and optimizers with adaptive LR, they introduce new hyperparameters that need to be tuned separately for different tasks/datasets. In this paper, we consider the question: can we automatically tune the LR over the course of training without human involvement? We propose an efficient method, AutoLRS, which automatically optimizes the LR for each training stage by modeling training dynamics. AutoLRS aims to find an LR, applied for every $\tau$ steps, that minimizes the resulting validation loss. We solve this black-box optimization on the fly by Bayesian optimization (BO). However, collecting training instances for BO requires a system to evaluate each LR queried by BO's acquisition function for $\tau$ steps, which is prohibitively expensive in practice. Instead, we apply each candidate LR for only $\tau' \ll \tau$ steps and train an exponential model to predict the validation loss after $\tau$ steps. This mutual-training process between BO and the loss-prediction model allows us to limit the training steps invested in the BO search. We demonstrate the advantages and the generality of AutoLRS through extensive experiments on training DNNs for tasks from diverse domains using different optimizers. The LR schedules auto-generated by AutoLRS lead to speedups of $1.22\times$, $1.43\times$, and $1.5\times$ when training ResNet-50, Transformer, and BERT, respectively, compared to the LR schedules in their original papers, and an average speedup of $1.31\times$ over state-of-the-art heavily-tuned LR schedules.
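The core cost-saving trick — probing each candidate LR for only a few steps and extrapolating an exponential loss model out to the full stage — can be shown on a toy quadratic loss. This is a hedged illustration with made-up names and constants; the actual AutoLRS additionally wraps this in a BO loop over candidate LRs, which is omitted here.

```python
import numpy as np

# Probe each candidate LR for TAU_SHORT steps, fit loss(t) ~ a*exp(-b*t)
# to the short trace, and rank LRs by the predicted loss after TAU steps.
def sgd_trace(lr, steps, w0=5.0):
    """Gradient descent on f(w) = 0.5*w^2; the loss decays as (1-lr)^(2t)."""
    w, losses = w0, []
    for _ in range(steps):
        w -= lr * w                      # gradient of 0.5*w^2 is w
        losses.append(0.5 * w * w)
    return np.array(losses)

def predict_final_loss(trace, tau):
    """Fit log(loss) ~ log(a) - b*t and extrapolate to step tau."""
    t = np.arange(1, len(trace) + 1)
    slope, intercept = np.polyfit(t, np.log(trace + 1e-12), 1)
    return float(np.exp(intercept + slope * tau))

TAU, TAU_SHORT = 100, 10                 # full stage vs. cheap probe
candidates = [0.01, 0.05, 0.2, 0.5]
preds = {lr: predict_final_loss(sgd_trace(lr, TAU_SHORT), TAU)
         for lr in candidates}
best_lr = min(preds, key=preds.get)
print(best_lr)                           # largest stable LR wins here: 0.5
```

On this noiseless quadratic the log-loss really is linear in t, so the extrapolation is exact; on a real DNN the exponential fit is only a local approximation, which is why AutoLRS retrains it each stage.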