
Combined first-principles calculation and neural-network correction approach as a powerful tool in computational physics and chemistry

Published by Xiujun Wang
Publication date: 2003
Research field: Physics
Paper language: English





Despite their success, the results of first-principles quantum mechanical calculations contain inherent numerical errors caused by various approximations. We propose here a neural-network algorithm to greatly reduce these inherent errors. As a demonstration, this combined quantum mechanical calculation and neural-network correction approach is applied to the evaluation of the standard heat of formation $\Delta H$ and standard Gibbs energy of formation $\Delta G$ for 180 organic molecules at 298 K. A dramatic reduction of numerical errors is clearly shown, with systematic deviations being eliminated. For example, the root-mean-square deviation of the calculated $\Delta H$ ($\Delta G$) for the 180 molecules is reduced from 21.4 (22.3) kcal$\cdot$mol$^{-1}$ to 3.1 (3.3) kcal$\cdot$mol$^{-1}$ for B3LYP/6-311+G({\it d,p}) and from 12.0 (12.9) kcal$\cdot$mol$^{-1}$ to 3.3 (3.4) kcal$\cdot$mol$^{-1}$ for B3LYP/6-311+G(3{\it df},2{\it p}) after the neural-network correction.
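To make the idea concrete, here is a minimal sketch of such a correction scheme: a small feed-forward network is trained to map the raw calculated value plus a few molecular descriptors onto experimental values, removing the systematic part of the error. The descriptors, synthetic data, and network size below are placeholders for illustration only, not the authors' actual inputs or architecture.

```python
# Minimal sketch of a neural-network correction to computed heats of formation:
# a small network learns the systematic part of the calculation error from a
# few descriptors. Descriptors, data, and network size are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_mol = 180

# Hypothetical dataset: one row per molecule.
#   column 0: raw DFT heat of formation (kcal/mol)
#   column 1: number of atoms
#   column 2: an extra structural descriptor (placeholder)
X = np.column_stack([
    rng.normal(0.0, 50.0, n_mol),
    rng.integers(2, 40, n_mol),
    rng.integers(1, 60, n_mol),
]).astype(float)
# "Experimental" targets: raw value minus a systematic, size-dependent error.
y_exp = X[:, 0] - (0.5 * X[:, 1] + rng.normal(0.0, 2.0, n_mol))

X_train, X_test, y_train, y_test = train_test_split(X, y_exp, random_state=0)

# Small multilayer perceptron mapping (raw value, descriptors) -> corrected value.
net = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0),
)
net.fit(X_train, y_train)

rmsd_before = np.sqrt(np.mean((X_test[:, 0] - y_test) ** 2))
rmsd_after = np.sqrt(np.mean((net.predict(X_test) - y_test) ** 2))
print(f"RMSD before correction: {rmsd_before:.1f} kcal/mol")
print(f"RMSD after  correction: {rmsd_after:.1f} kcal/mol")
```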




Read also

In this paper we recreate, and improve, the binary classification method for particles proposed in the Roe et al. (2005) paper "Boosted decision trees as an alternative to artificial neural networks for particle identification". Such particles are tau neutrinos, which we will refer to as background, and electron neutrinos: the signal we are interested in. In the original paper the preferred algorithm is a boosted decision tree, owing to its low tuning effort and good overall performance at the time. Our choice for implementation is a deep neural network, faster and more promising in performance. We will show how, using modern techniques, we are able to improve on the original result, both in accuracy and in training time.
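A minimal sketch of a deep feed-forward binary classifier of the kind described, separating a "signal" class from a "background" class, is given below. The feature count, layer sizes, and synthetic data are illustrative assumptions, not the paper's particle-identification variables or architecture.

```python
# Minimal sketch of a deep feed-forward binary classifier separating "signal"
# from "background" events. Feature count, layer sizes, and data are placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_events, n_features = 5000, 20

# Synthetic stand-in for reconstructed particle-identification variables.
X_background = rng.normal(0.0, 1.0, (n_events, n_features))
X_signal = rng.normal(0.3, 1.0, (n_events, n_features))  # slightly shifted signal
X = np.vstack([X_background, X_signal])
y = np.concatenate([np.zeros(n_events), np.ones(n_events)])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("ROC AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```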
New heterogeneous computing paradigms on dedicated hardware with increased parallelization, such as Field Programmable Gate Arrays (FPGAs), offer exciting solutions with large potential gains. The growing applications of machine learning algorithms in particle physics for simulation, reconstruction, and analysis are naturally deployed on such platforms. We demonstrate that the acceleration of machine learning inference as a web service represents a heterogeneous computing solution for particle physics experiments that potentially requires minimal modification to the current computing model. As examples, we retrain the ResNet-50 convolutional neural network to demonstrate state-of-the-art performance for top quark jet tagging at the LHC and apply a ResNet-50 model with transfer learning for neutrino event classification. Using Project Brainwave by Microsoft to accelerate the ResNet-50 image classification model, we achieve average inference times of 60 (10) milliseconds with our experimental physics software framework using Brainwave as a cloud (edge or on-premises) service, representing an improvement by a factor of approximately 30 (175) in model inference latency over traditional CPU inference in current experimental hardware. A single FPGA service accessed by many CPUs achieves a throughput of 600-700 inferences per second using an image batch of one, comparable to large batch-size GPU throughput and significantly better than small batch-size GPU throughput. Deployed as an edge or cloud service for the particle physics computing model, coprocessor accelerators can have a higher duty cycle and are potentially much more cost-effective.
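A minimal sketch of the client side of such an inference-as-a-service setup is shown below: CPU-side experiment code posts a preprocessed image to a remote coprocessor endpoint and receives per-class scores. The URL and JSON schema are hypothetical placeholders, not the actual Brainwave or experiment-framework API.

```python
# Minimal sketch of an inference-as-a-service client. The endpoint URL and the
# request/response JSON format are hypothetical placeholders.
import numpy as np
import requests

INFERENCE_URL = "http://accelerator.example.org/v1/resnet50:predict"  # hypothetical

def classify_image(image: np.ndarray) -> list:
    """Send one 224x224x3 image to the remote ResNet-50 service (batch size 1)."""
    payload = {"instances": [image.astype(np.float32).tolist()]}
    response = requests.post(INFERENCE_URL, json=payload, timeout=1.0)
    response.raise_for_status()
    return response.json()["predictions"][0]   # per-class scores

if __name__ == "__main__":
    fake_image = np.random.rand(224, 224, 3)   # stand-in for a preprocessed jet image
    print(classify_image(fake_image)[:5])      # first few class scores
```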
Image registration is the inference of transformations relating noisy and distorted images. It is fundamental in computer vision, experimental physics, and medical imaging. Many algorithms and analyses exist for inferring shift, rotation, and nonlinear transformations between image coordinates. Even in the simplest case of translation, however, all known algorithms are biased and none have achieved the precision limit of the Cramér-Rao bound (CRB). Following Bayesian inference, we prove that the standard method of shifting one image to match another cannot reach the CRB. We show that the bias can be cured and the CRB reached if, instead, we use Super Registration: learning an optimal model for the underlying image and shifting that to match the data. Our theory shows that coarse-graining oversampled images can improve registration precision of the standard method. For oversampled data, our method does not yield striking improvements as measured by eye. In these cases, however, we show our new registration method can lead to dramatic improvements in extractable information, for example, inferring $10\times$ more precise particle positions.
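For reference, here is a minimal sketch of the "standard" registration method the abstract refers to: shift one noisy image until it best matches the other in a least-squares sense (a 1-D example with sub-pixel Fourier shifts). The signal shape, noise level, and optimizer are illustrative assumptions; the paper's Super Registration instead fits a model of the underlying scene and shifts that model.

```python
# Minimal sketch of the "standard" registration method: shift one noisy image to
# best match the other by least squares. 1-D example with sub-pixel shifts
# applied via the Fourier shift theorem; all parameters are illustrative.
import numpy as np
from scipy.ndimage import fourier_shift
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = np.linspace(-5.0, 5.0, 256)
truth = np.exp(-x**2)                    # underlying 1-D "image": one feature
true_shift = 0.37                        # shift between the two exposures, pixels

def shifted(img, s):
    """Apply a sub-pixel shift using the Fourier shift theorem."""
    return np.fft.ifft(fourier_shift(np.fft.fft(img), s)).real

img_a = truth + rng.normal(0.0, 0.05, x.size)
img_b = shifted(truth, true_shift) + rng.normal(0.0, 0.05, x.size)

# Standard method: slide image A until it best matches image B.
cost = lambda s: np.sum((shifted(img_a, s) - img_b) ** 2)
fit = minimize_scalar(cost, bounds=(-2.0, 2.0), method="bounded")
print(f"true shift {true_shift:.3f} px, estimated {fit.x:.3f} px")
```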
We compute the thermal conductivity of water within linear response theory from equilibrium molecular dynamics simulations, by adopting two different approaches. In one, the potential energy surface (PES) is derived on the fly from the electronic ground state of density functional theory (DFT) and the corresponding analytical expression is used for the energy flux. In the other, the PES is represented by a deep neural network (DNN) trained on DFT data, whereby the PES has an explicit local decomposition and the energy flux takes a particularly simple expression. By virtue of a gauge invariance principle, established by Marcolongo, Umari, and Baroni, the two approaches should be equivalent if the PES were reproduced accurately by the DNN model. We test this hypothesis by calculating the thermal conductivity, at the GGA (PBE) level of theory, using the direct formulation and its DNN proxy, finding that both approaches yield the same conductivity, in excess of the experimental value by approximately 60%. Besides being numerically much more efficient than its direct DFT counterpart, the DNN scheme has the advantage of being easily applicable to more sophisticated DFT approximations, such as meta-GGA and hybrid functionals, for which it would be hard to derive analytically the expression of the energy flux. We find in this way that a DNN model, trained on meta-GGA (SCAN) data, reduces the deviation from experiment of the predicted thermal conductivity by about 50%, leaving the question open as to whether the residual error is due to deficiencies of the functional, to a neglect of nuclear quantum effects in the atomic dynamics, or, likely, to a combination of the two.
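Both approaches rest on the linear-response (Green-Kubo) estimator, in which, in one common convention, the thermal conductivity is the time integral of the heat-flux autocorrelation, $\kappa = \frac{V}{3 k_B T^2}\int_0^\infty \langle \mathbf{J}(0)\cdot\mathbf{J}(t)\rangle\,dt$. The sketch below evaluates this for a synthetic flux trajectory; in practice the flux comes from DFT or DNN-potential molecular dynamics, and the volume, temperature, and time step used here are placeholders.

```python
# Minimal sketch of a Green-Kubo estimate of thermal conductivity from the
# heat-flux autocorrelation. The flux trajectory is synthetic white noise;
# volume, temperature, and time step are placeholder values.
import numpy as np

kB = 1.380649e-23            # Boltzmann constant, J/K
T = 300.0                    # temperature, K (placeholder)
V = 1.0e-26                  # simulation-cell volume, m^3 (placeholder)
dt = 1.0e-15                 # sampling interval of the flux, s (placeholder)

rng = np.random.default_rng(3)
J = rng.normal(0.0, 1.0, (20000, 3))   # stand-in for the 3-component heat flux

def flux_autocorrelation(flux, max_lag):
    """<J(0).J(t)> averaged over time origins, for lags 0 .. max_lag-1."""
    n = len(flux)
    return np.array([np.mean(np.sum(flux[: n - lag] * flux[lag:], axis=1))
                     for lag in range(max_lag)])

acf = flux_autocorrelation(J, max_lag=500)
# kappa = V / (3 kB T^2) * integral of the autocorrelation (rectangle rule).
kappa = V / (3.0 * kB * T**2) * np.sum(acf) * dt
print("estimated thermal conductivity (not physical for synthetic flux):", kappa)
```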
O. Actis, M. Erdmann, R. Fischer (2008)
VISPA is a novel development environment for high energy physics analyses, based on a combination of graphical and textual steering. The primary aim of VISPA is to support physicists in prototyping, performing, and verifying a data analysis of any complexity. We present example screenshots, and describe the underlying software concepts.