The electronic density of states (DOS) quantifies the distribution of the energy levels that can be occupied by electrons in a quasiparticle picture, and is central to modern electronic structure theory. It also underpins the computation and interpretation of experimentally observable material properties such as optical absorption and electrical conductivity. We discuss the challenges inherent in the construction of a machine-learning (ML) framework aimed at predicting the DOS as a combination of local contributions that depend in turn on the geometric configuration of neighbours around each atom, using quasiparticle energy levels from density functional theory as training data. We present a challenging case study that includes configurations of silicon spanning a broad set of thermodynamic conditions, ranging from bulk structures to clusters, and from semiconducting to metallic behavior. We compare different approaches to represent the DOS, and the accuracy of predicting quantities such as the Fermi level, the DOS at the Fermi level, or the band energy, either directly or as a side-product of the evaluation of the DOS. The performance of the model depends crucially on the smoothening of the DOS, and there is a tradeoff to be made between the systematic error associated with the smoothening and the error in the ML model for a specific structure. We demonstrate the usefulness of this approach by computing the density of states of a large amorphous silicon sample, for which it would be prohibitively expensive to compute the DOS by direct electronic structure calculations, and show how the atom-centred decomposition of the DOS that is obtained through our model can be used to extract physical insights into the connections between structural and electronic features.
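The smoothening step discussed above can be illustrated in miniature: each discrete quasiparticle level is broadened with a normalized Gaussian and the contributions are summed on an energy grid. This is only a sketch of the general technique, not the paper's implementation; the width `sigma`, the grid, and the toy levels are illustrative assumptions.

```python
import numpy as np

def smooth_dos(levels, grid, sigma=0.1):
    """Broaden discrete energy levels into a smooth DOS by placing a
    normalized Gaussian of width sigma on each level and summing."""
    diffs = grid[:, None] - np.asarray(levels)[None, :]
    gauss = np.exp(-0.5 * (diffs / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    return gauss.sum(axis=1)

# Toy example: three levels evaluated on a dense grid
levels = [-1.0, 0.0, 0.5]
grid = np.linspace(-2.0, 2.0, 401)
dos = smooth_dos(levels, grid, sigma=0.2)

# The integral of the smoothed DOS recovers the number of states
print(dos.sum() * (grid[1] - grid[0]))
```

A larger `sigma` yields a smoother, easier-to-learn target at the cost of a larger systematic error, which is exactly the tradeoff described above.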
Nuclear Magnetic Resonance (NMR) spectroscopy is particularly well-suited to determine the structure of molecules and materials in powdered form. Structure determination usually proceeds by finding the best match between experimentally observed NMR chemical shifts and those of candidate structures. Chemical shifts for the candidate configurations have traditionally been computed by electronic-structure methods, and more recently predicted by machine learning. However, the reliability of the determination depends on the errors in the predicted shifts. Here we propose a Bayesian framework for determining the confidence in the identification of the experimental crystal structure, based on knowledge of the typical error in the electronic structure methods. We also extend the recently-developed ShiftML machine-learning model, including the evaluation of the uncertainty of its predictions. We demonstrate the approach on the determination of the structures of six organic molecular crystals. We critically assess the reliability of the structure determinations, facilitated by the introduction of a visualization of the similarity between candidate configurations in terms of their chemical shifts and their structures. We also show that the commonly used values for the errors in calculated $^{13}$C shifts are underestimated, and that more accurate, self-consistently determined uncertainties make it possible to use $^{13}$C shifts to improve the accuracy of structure determinations.
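The Bayesian comparison of candidates can be sketched schematically: assuming independent Gaussian errors of width `sigma` on the predicted shifts (a simplification of the framework described above, with entirely hypothetical shift values), the posterior probability of each candidate under a uniform prior is proportional to its likelihood.

```python
import numpy as np

def candidate_probabilities(exp_shifts, predicted, sigma):
    """Posterior probability of each candidate structure, assuming
    independent Gaussian errors of width sigma and a uniform prior."""
    exp_shifts = np.asarray(exp_shifts)
    log_like = np.array([
        -0.5 * np.sum(((np.asarray(shifts) - exp_shifts) / sigma) ** 2)
        for shifts in predicted
    ])
    weights = np.exp(log_like - log_like.max())  # stabilize before normalizing
    return weights / weights.sum()

# Two hypothetical candidates; the first matches the "experiment" closely
p = candidate_probabilities(
    exp_shifts=[10.0, 25.0],
    predicted=[[10.1, 24.9], [12.0, 27.0]],
    sigma=0.5,
)
print(p)
```

Note how the result depends directly on `sigma`: underestimating the error (as the abstract reports for $^{13}$C shifts) inflates the apparent confidence in the best-matching candidate.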
Machine learning of atomic-scale properties is revolutionizing molecular modelling, making it possible to evaluate inter-atomic potentials with first-principles accuracy, at a fraction of the cost. The accuracy, speed and reliability of machine-learning potentials, however, depend strongly on the way atomic configurations are represented, i.e. the choice of descriptors used as input for the machine learning method. The raw Cartesian coordinates are typically transformed into fingerprints, or symmetry functions, that are designed to encode, in addition to the structure, important properties of the potential-energy surface like its invariances with respect to rotation, translation and permutation of like atoms. Here we discuss automatic protocols to select a number of fingerprints out of a large pool of candidates, based on the correlations that are intrinsic to the training data. This procedure can greatly simplify the construction of neural network potentials that strike the best balance between accuracy and computational efficiency, and has the potential to accelerate by orders of magnitude the evaluation of Gaussian Approximation Potentials based on the Smooth Overlap of Atomic Positions kernel. We present applications to the construction of neural network potentials for water and for an Al-Mg-Si alloy, and to the prediction of the formation energies of small organic molecules using Gaussian process regression.
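A simple, non-iterative variant of such a correlation-based selection can be sketched as follows (this is an illustration in the spirit of a CUR-style decomposition, not the paper's actual protocol): each candidate fingerprint column is scored by its leverage on the dominant singular subspace of the training matrix, and the highest-scoring columns are kept.

```python
import numpy as np

def select_features(X, n_select, k=2):
    """Rank the feature columns of X by their leverage score on the top-k
    right singular vectors, and return the n_select best column indices."""
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    scores = np.sum(Vt[:k] ** 2, axis=0)  # leverage score per column
    return np.argsort(scores)[::-1][:n_select]

# Synthetic pool: 2 informative columns, 6 redundant linear combinations
# of them, and 4 near-zero noise columns (indices 8-11)
rng = np.random.default_rng(0)
informative = rng.normal(size=(100, 2))
redundant = informative @ rng.normal(size=(2, 6))
noise = 1e-3 * rng.normal(size=(100, 4))
X = np.hstack([informative, redundant, noise])

print(select_features(X, n_select=3))
```

The selection picks columns from the correlated, high-variance block and ignores the noise columns; a full protocol would additionally deflate the matrix after each pick to avoid selecting redundant columns.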
High-throughput computational materials searches generate large databases of locally-stable structures. Conventionally, the needle-in-a-haystack search for the few experimentally-synthesizable compounds is performed using a convex hull construction, which identifies structures stabilized by manipulation of a particular thermodynamic constraint (for example pressure or composition) chosen based on prior experimental evidence or intuition. To address the biased nature of this procedure we introduce a generalized convex hull framework. Convex hulls are constructed on data-driven principal coordinates, which represent the full structural diversity of the database. Their coupling to experimentally-realizable constraints hints at the conditions that are most likely to stabilize a given configuration. The probabilistic nature of our framework also addresses the uncertainty stemming from the use of approximate models during database construction, and eliminates redundant structures. The remaining small set of candidates that have a high probability of being synthesizable provide a much-needed starting point for the determination of viable synthetic pathways.
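The hull construction at the core of this search can be illustrated in miniature: given one coordinate (conventionally a thermodynamic constraint, here a data-driven principal coordinate) and an energy per structure, the structures on the lower convex hull are those stable against decomposition into neighbouring structures along that coordinate. A toy sketch, with entirely made-up coordinates and energies, using a monotone-chain scan:

```python
def lower_hull(coord, energy):
    """Indices of structures on the lower convex hull of (coord, energy)
    points, i.e. candidates stabilized at some value of the constraint."""
    order = sorted(range(len(coord)), key=lambda i: coord[i])
    hull = []
    for i in order:
        while len(hull) >= 2:
            a, b = hull[-2], hull[-1]
            # cross product of (b - a) x (i - a); pop b unless the path
            # a -> b -> i turns upward (counter-clockwise)
            cross = ((coord[b] - coord[a]) * (energy[i] - energy[a])
                     - (energy[b] - energy[a]) * (coord[i] - coord[a]))
            if cross <= 0:
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

# Five hypothetical structures along one principal coordinate
coord = [0.0, 0.25, 0.5, 0.75, 1.0]
energy = [0.0, 0.3, -0.2, 0.4, 0.1]
print(lower_hull(coord, energy))  # structures 1 and 3 lie above the hull
```

The generalized framework described above extends this picture to several data-driven coordinates at once and treats the hull membership probabilistically, so that model errors in the energies do not spuriously include or exclude candidates.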