
Embedding methods for large-scale surface calculations

Published by: John Trail
Publication date: 2009
Research field: Physics
Language: English





One of the goals in the development of large-scale electronic structure methods is to perform calculations explicitly for a localised region of a system, while still taking into account the rest of the system outside of this region. An example of this in surface physics would be to embed an adsorbate and a few surface atoms into an extended substrate, hence considerably reducing computational costs. Here we apply the constrained electron density method of embedding a Kohn-Sham system in a substrate system (first described by P. Cortona [1] and T. A. Wesolowski [2]), within a plane-wave basis and pseudopotential framework. This approach divides the charge density of the system into substrate and embedded charge densities, the sum of which is the charge density of the actual system of interest. Two test cases are considered. First we construct fcc bulk aluminium by embedding one cubic lattice of atoms within another. Second, we examine a model surface/adsorbate system of aluminium on aluminium and compare with full Kohn-Sham results.
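For orientation, the density partition at the heart of this approach can be written compactly. The following is the standard subsystem-DFT (frozen-density-embedding) form of the equations in our own notation; the paper's working equations may differ in detail:

\[
  \rho(\mathbf{r}) = \rho_A(\mathbf{r}) + \rho_B(\mathbf{r}),
\]

where A is the embedded region and B the substrate. The Kohn-Sham equations for subsystem A then acquire an extra local embedding potential,

\[
  v_{\mathrm{emb}}(\mathbf{r}) = v^{\mathrm{ext}}_B(\mathbf{r})
  + \int \frac{\rho_B(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,\mathrm{d}\mathbf{r}'
  + \left.\frac{\delta E_{\mathrm{xc}}}{\delta\rho}\right|_{\rho_A+\rho_B}
  - \left.\frac{\delta E_{\mathrm{xc}}}{\delta\rho}\right|_{\rho_A}
  + \frac{\delta T_s^{\mathrm{nad}}[\rho_A,\rho_B]}{\delta\rho_A},
\]

with the non-additive kinetic term $T_s^{\mathrm{nad}}[\rho_A,\rho_B] = T_s[\rho_A+\rho_B] - T_s[\rho_A] - T_s[\rho_B]$, which must be approximated by an explicit density functional.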




Read also

Given the widespread use of density functional theory (DFT), there is an increasing need for the ability to model large systems (beyond 1,000 atoms). We present a brief overview of the large-scale DFT code Conquest, which is capable of modelling such large systems, and discuss approaches to the generation of consistent, well-converged pseudo-atomic basis sets that enable such large-scale calculations. We present tests of these basis sets for a variety of materials, comparing to fully converged plane-wave results using the same pseudopotentials and grids.
The search for new materials, based on computational screening, relies on methods that accurately predict, in an automatic manner, total energy, atomic-scale geometries, and other fundamental characteristics of materials. Many technologically important material properties stem directly from the electronic structure of a material, but the usual workhorse for total energies, namely density functional theory, is plagued by fundamental shortcomings and errors from approximate exchange-correlation functionals in its prediction of the electronic structure. In contrast, the $GW$ method is currently the state-of-the-art ab initio approach for accurate electronic structure. It is mostly used to perturbatively correct density functional theory results, but it is computationally demanding and also requires expert knowledge to give accurate results. Accordingly, it is not presently used in high-throughput screening: fully automated algorithms for setting up the calculations and determining convergence are lacking. In this work we develop such a method and, as a first application, use it to validate the accuracy of $G_0W_0$ using the PBE starting point and the Godby-Needs plasmon-pole model ($G_0W_0^{\textrm{GN}}$@PBE) on a set of about 80 solids. The results of the automatic convergence study provide valuable insights. Indeed, we find correlations between computational parameters that can be used to further improve the automation of $GW$ calculations. Moreover, we find that the correlation between the PBE and the $G_0W_0^{\textrm{GN}}$@PBE gaps is much stronger than that between the $GW$ and experimental gaps. However, the $G_0W_0^{\textrm{GN}}$@PBE gaps still describe the experimental gaps more accurately than a linear model based on the PBE gaps.
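To make the linear-model baseline in the last sentence concrete, here is a minimal sketch (illustrative data and Python of our own; none of the numbers come from the paper) of regressing $GW$ gaps on PBE gaps:

import numpy as np

# Hypothetical band gaps in eV for a few solids -- illustrative values only,
# not taken from the paper's 80-solid data set.
gap_pbe = np.array([0.6, 1.1, 2.4, 5.5])
gap_gw = np.array([1.2, 1.9, 3.4, 7.1])

# Least-squares fit of the linear model gap_GW ~ a * gap_PBE + b.
a, b = np.polyfit(gap_pbe, gap_gw, deg=1)
predicted = a * gap_pbe + b
rmse = np.sqrt(np.mean((predicted - gap_gw) ** 2))
print(f"gap_GW ~ {a:.2f} * gap_PBE + {b:.2f}  (RMSE {rmse:.2f} eV)")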
Accurate and efficient predictions of the quasiparticle properties of complex materials remain a major challenge due to convergence issues and the unfavorable scaling of the computational cost with respect to system size. Quasiparticle $GW$ calculations for two-dimensional (2D) materials are especially difficult. The unusual analytical behaviors of the dielectric screening and the electron self-energy of 2D materials make the conventional Brillouin zone (BZ) integration approach rather inefficient and require an extremely dense $k$-grid to properly converge the calculated quasiparticle energies. In this work, we present a combined non-uniform sub-sampling and analytical integration method that can drastically improve the efficiency of the BZ integration in 2D $GW$ calculations. Our work is distinguished from previous work in that, instead of focusing on the intricate dielectric matrix or the screened Coulomb interaction matrix, we exploit the analytical behavior of various terms of the convolved self-energy $\Sigma(\mathbf{q})$ in the small-$\mathbf{q}$ limit. This method, when combined with another accelerated $GW$ method that we developed recently, can drastically speed up (by over three orders of magnitude) $GW$ calculations for 2D materials. Our method allows fully converged $GW$ calculations for complex 2D systems at a fraction of the computational cost, facilitating future high-throughput screening of the quasiparticle properties of 2D semiconductors for various applications. To demonstrate the capability and performance of our new method, we have carried out fully converged $GW$ calculations for monolayer C$_2$N, a recently discovered 2D material with a large unit cell, and investigate its quasiparticle band structure in detail.
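As a toy illustration of the sub-sampling idea (our own sketch, not the paper's implementation): an integrand that varies sharply near $\mathbf{q} = 0$, as 2D screening does, is integrated by treating a small disc around the origin with dense radial sampling and the rest of the zone with a coarse uniform grid:

import numpy as np

# Model integrand mimicking 2D screening: sharply varying but finite at q = 0.
def f(qx, qy, a=50.0):
    q = np.hypot(qx, qy)
    return 1.0 / (1.0 + a * q)

L = np.pi  # half-width of a square model "Brillouin zone" (illustrative)

# Coarse uniform midpoint grid over the zone, excluding cells near q = 0.
n = 64
qs = (np.arange(n) + 0.5) / n * 2 * L - L
QX, QY = np.meshgrid(qs, qs)
cell = (2 * L / n) ** 2
r0 = 2 * L / n  # radius of the specially treated small-q region
outside = np.hypot(QX, QY) >= r0
I_coarse = np.sum(f(QX[outside], QY[outside])) * cell

# Dense radial sub-sampling of the disc |q| < r0 (angle-averaged integrand;
# the cell/disc boundary mismatch is ignored in this toy sketch).
m = 2000
dr = r0 / m
r = (np.arange(m) + 0.5) * dr
I_disc = np.sum(2 * np.pi * r * f(r, 0.0)) * dr

print("combined estimate:", I_coarse + I_disc)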
D. R. Bowler, T. Miyazaki (2011)
Linear scaling methods, or O(N) methods, have computational and memory requirements which scale linearly with the number of atoms in the system, N, in contrast to standard approaches which scale with the cube of the number of atoms. These methods, which rely on the short-ranged nature of electronic structure, will allow accurate, ab initio simulations of systems of unprecedented size. The theory behind the locality of electronic structure is described and related to physical properties of the systems to be modelled, along with a survey of recent developments in real-space methods which are important for efficient use of high-performance computers. The linear scaling methods proposed to date can be divided into seven different areas, and the applicability, efficiency and advantages of the methods proposed in these areas are then discussed. The applications of linear scaling methods, as well as the implementations available as computer programs, are considered. Finally, the prospects for and the challenges facing linear scaling methods are discussed.
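A minimal numerical illustration of the locality these methods exploit (our own sketch, not tied to any particular O(N) code): in a gapped 1D tight-binding chain, density matrix elements decay rapidly with distance, so truncating beyond a cutoff radius leaves O(N) nonzero entries at negligible cost in energy:

import numpy as np

# Gapped 1D tight-binding chain: alternating hoppings open a gap at half filling.
N = 200
t = np.where(np.arange(N - 1) % 2 == 0, 1.5, 0.5)
H = np.zeros((N, N))
H[np.arange(N - 1), np.arange(1, N)] = -t
H += H.T

eps, psi = np.linalg.eigh(H)
occ = psi[:, : N // 2]        # occupied states at half filling
rho = occ @ occ.T             # density matrix rho_ij

# |rho_ij| decays exponentially with |i - j| in a gapped system, so we can
# truncate beyond a cutoff radius and keep only O(N * cutoff) entries.
cutoff = 20                   # truncation radius in sites
mask = np.abs(np.subtract.outer(np.arange(N), np.arange(N))) <= cutoff
rho_trunc = np.where(mask, rho, 0.0)

# Band energy Tr(rho H) is barely affected by the truncation.
E_exact = np.sum(rho * H)
E_trunc = np.sum(rho_trunc * H)
print(f"band energy error: {abs(E_trunc - E_exact):.2e}")
print(f"nonzeros kept: {mask.mean():.1%} of the full matrix")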
We propose and analyze algorithms for distributionally robust optimization of convex losses with conditional value at risk (CVaR) and $\chi^2$ divergence uncertainty sets. We prove that our algorithms require a number of gradient evaluations independent of training set size and number of parameters, making them suitable for large-scale applications. For $\chi^2$ uncertainty sets these are the first such guarantees in the literature, and for CVaR our guarantees scale linearly in the uncertainty level rather than quadratically as in previous work. We also provide lower bounds proving the worst-case optimality of our algorithms for CVaR and a penalized version of the $\chi^2$ problem. Our primary technical contributions are novel bounds on the bias of batch robust risk estimation and the variance of a multilevel Monte Carlo gradient estimator due to [Blanchet & Glynn, 2015]. Experiments on MNIST and ImageNet confirm the theoretical scaling of our algorithms, which are 9-36 times more efficient than full-batch methods.
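For concreteness, here is a generic subgradient sketch of the CVaR objective on a toy problem (our own; it uses plain minibatch subgradients, not the paper's multilevel Monte Carlo estimator). CVaR at level alpha averages the worst alpha-fraction of per-example losses, and a subgradient averages the gradients of exactly those examples:

import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression data (illustrative only).
n, d = 1000, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.5 * rng.normal(size=n)

alpha = 0.1   # uncertainty level: average over the worst 10% of losses
lr = 0.01
w = np.zeros(d)

for step in range(1000):
    batch = rng.choice(n, size=128, replace=False)  # batch size independent of n
    residual = X[batch] @ w - y[batch]
    losses = residual ** 2
    k = max(1, int(alpha * len(batch)))
    worst = np.argsort(losses)[-k:]                 # indices of the worst k losses
    # Subgradient of CVaR_alpha: mean gradient over the worst alpha-fraction.
    grad = 2 * X[batch][worst].T @ residual[worst] / k
    w -= lr * grad

print("CVaR-fitted weights:", np.round(w, 2))

Note that the batch size stays fixed as n grows; the bias that such minibatch CVaR estimates incur is exactly the quantity the paper's bias bounds control.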