
Factorized Machine Learning for Performance Modeling of Massively Parallel Heterogeneous Physical Simulations

Posted by: Ardavan Oskooi
Publication date: 2020
Research field: Physics
Paper language: English





We demonstrate neural-network runtime prediction for complex, many-parameter, massively parallel, heterogeneous-physics simulations running on cloud-based MPI clusters. Because individual simulations are so expensive, it is crucial to train the network on a limited dataset despite the potentially large input space of the physics at each point in the spatial domain. We achieve this using a two-part strategy. First, we perform data-driven static load balancing using regression coefficients extracted from small simulations, which both improves parallel performance and reduces the dependency of the runtime on the precise spatial layout of the heterogeneous physics. Second, we divide the execution time of these load-balanced simulations into computation and communication, factoring crude asymptotic scalings out of each term, and training neural nets for the remaining factor coefficients. This strategy is implemented for Meep, a popular and complex open-source electrodynamics simulation package, and is validated for heterogeneous simulations drawn from published engineering models.
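As a concrete illustration of that factorization, here is a minimal sketch rather than the authors' implementation: the asymptotic scalings (cells per process for computation, chunk surface area for communication), the feature vector, and the network sizes are all assumptions. The essential point is that the neural nets are trained only on the residual coefficients left after the crude scalings are divided out.

```python
# Hedged sketch of a factorized runtime model: T ~ c_comp(x) * S_comp + c_comm(x) * S_comm,
# where the S terms are crude asymptotic scalings and the c terms are learned coefficients.
import numpy as np
from sklearn.neural_network import MLPRegressor

def comp_scaling(n_cells, n_procs):
    # assumed asymptotic scaling for computation: work per MPI process
    return n_cells / n_procs

def comm_scaling(n_cells, n_procs):
    # assumed asymptotic scaling for communication: surface area of a cubic chunk
    return (n_cells / n_procs) ** (2.0 / 3.0)

def fit_factor_nets(features, n_cells, n_procs, t_comp, t_comm):
    """Train one small net per term on measured time divided by its scaling."""
    y_comp = t_comp / comp_scaling(n_cells, n_procs)
    y_comm = t_comm / comm_scaling(n_cells, n_procs)
    net_comp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(features, y_comp)
    net_comm = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000).fit(features, y_comm)
    return net_comp, net_comm

def predict_runtime(net_comp, net_comm, features, n_cells, n_procs):
    # recombine learned coefficients with the asymptotic factors
    return (net_comp.predict(features) * comp_scaling(n_cells, n_procs)
            + net_comm.predict(features) * comm_scaling(n_cells, n_procs))
```

Because the scalings absorb the gross dependence on problem size and process count, the networks only need to interpolate the slowly varying coefficients over the physics features, which is what makes a small training set feasible.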


Read also

Simulations of systems with quenched disorder are extremely demanding, suffering from the combined effect of slow relaxation and the need to perform the disorder average. As a consequence, new algorithms, improved implementations, and alternative or even purpose-built hardware are often instrumental for conducting meaningful studies of such systems. The ensuing demands regarding hardware availability and code complexity are substantial and sometimes prohibitive. We demonstrate how, with a moderate coding effort that leaves the overall structure of the simulation code unaltered compared to a CPU implementation, very significant speed-ups can be achieved from a parallel code on GPU, mainly by exploiting the trivial parallelism of the disorder samples and the near-trivial parallelism of the parallel tempering replicas. A combination of this massively parallel implementation with a careful choice of the temperature protocol for parallel tempering, as well as efficient cluster updates, allows us to equilibrate comparatively large systems with moderate computational resources.
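A minimal sketch of the batched layout behind those two levels of parallelism, using a one-dimensional ±J Ising ring as a stand-in for the actual disordered model; the array shapes, sizes, and NumPy vectorization (standing in for GPU kernels) are illustrative assumptions, and the parallel-tempering swap moves between adjacent temperatures are omitted for brevity.

```python
# Assumed illustration (not the authors' code): arrays are shaped
# (disorder samples, replicas, spins), so one vectorized sweep updates
# every independent Markov chain at once.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_replicas, n_spins = 64, 16, 256            # illustrative sizes

J = rng.choice([-1.0, 1.0], size=(n_samples, 1, n_spins))          # couplings per sample
spins = rng.choice([-1.0, 1.0], size=(n_samples, n_replicas, n_spins))
beta = np.linspace(0.2, 2.0, n_replicas)[None, :]                  # temperature ladder

def metropolis_sweep(spins):
    """One vectorized single-spin Metropolis sweep over every sample and replica."""
    for i in range(n_spins):
        left, right = (i - 1) % n_spins, (i + 1) % n_spins
        # local field on spin i from its two ring neighbours, with per-sample couplings
        h = J[:, :, left] * spins[:, :, left] + J[:, :, i] * spins[:, :, right]
        dE = 2.0 * spins[:, :, i] * h                              # energy change if flipped
        accept = rng.random((n_samples, n_replicas)) < np.exp(-beta * dE)
        spins[:, :, i] = np.where(accept, -spins[:, :, i], spins[:, :, i])
    return spins
    # parallel-tempering swaps between adjacent temperatures would follow each sweep
```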
We present a highly scalable Monte Carlo (MC) three-dimensional photon transport simulation platform designed for heterogeneous computing systems. Through the development of a massively parallel MC algorithm using the Open Computing Language (OpenCL) framework, this research extends our existing graphics processing unit (GPU)-accelerated MC technique to a highly scalable vendor-independent heterogeneous computing environment, achieving significantly improved performance and software portability. A number of parallel computing techniques are investigated to achieve portable performance over a wide range of computing hardware. Furthermore, multiple thread-level and device-level load-balancing strategies are developed to obtain efficient simulations using multiple central processing units (CPUs) and GPUs.
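The device-level load balancing mentioned above can be illustrated with a small sketch; this is an assumption for illustration, not the platform's actual API: the photon budget is split across CPUs and GPUs in proportion to each device's throughput measured in a short pilot run.

```python
# Hedged sketch of proportional device-level load balancing.
def split_photons(total_photons, pilot_throughput):
    """pilot_throughput: dict mapping device name -> photons/second from a pilot run."""
    total_rate = sum(pilot_throughput.values())
    shares = {dev: int(round(total_photons * rate / total_rate))
              for dev, rate in pilot_throughput.items()}
    # assign any rounding remainder to the fastest device
    fastest = max(pilot_throughput, key=pilot_throughput.get)
    shares[fastest] += total_photons - sum(shares.values())
    return shares

# example: one CPU and two GPUs with throughputs from a pilot run (made-up numbers)
print(split_photons(10_000_000, {"cpu0": 1.2e5, "gpu0": 2.4e6, "gpu1": 1.8e6}))
```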
L. Gargate (2006)
A massively parallel simulation code, called dHybrid, has been developed to perform global scale studies of space plasma interactions. This code is based on an explicit hybrid model; the numerical stability and parallel scalability of the code are studied. A stabilization method for the explicit algorithm, for regions of near zero density, is proposed. Three-dimensional hybrid simulations of the interaction of the solar wind with unmagnetized artificial objects are presented, with a focus on the expansion of a plasma cloud into the solar wind, which creates a diamagnetic cavity and drives the Interplanetary Magnetic Field out of the expansion region. The dynamics of this system can provide insights into other similar scenarios, such as the interaction of the solar wind with unmagnetized planets.
Deep learning is transforming many areas in science, and it has great potential in modeling molecular systems. However, unlike the mature deployment of deep learning in computer vision and natural language processing, its development in molecular modeling and simulations is still at an early stage, largely because the inductive biases of molecules are completely different from those of images or texts. Based on these differences, we first reviewed the limitations of traditional deep learning models from the perspective of molecular physics, and summarized relevant technical advances at the interface between molecular modeling and deep learning. We do not focus merely on ever more complex neural network models; instead, we emphasize the theories and ideas behind modern deep learning. We hope that translating these ideas into molecular modeling will create new opportunities. For this purpose, we summarized several representative applications, ranging from supervised to unsupervised and reinforcement learning, and discussed their connections with the emerging trends in deep learning. Finally, we outline promising directions that may help address the existing issues in the current framework of deep molecular modeling.
Computational study of molecules and materials from first principles is a cornerstone of physics, chemistry, and materials science, but limited by the cost of accurate and precise simulations. In settings involving many simulations, machine learning can reduce these costs, often by orders of magnitude, by interpolating between reference simulations. This requires representations that describe any molecule or material and support interpolation. We comprehensively review and discuss current representations and relations between them, using a unified mathematical framework based on many-body functions, group averaging, and tensor products. For selected state-of-the-art representations, we compare energy predictions for organic molecules, binary alloys, and Al-Ga-In sesquioxides in numerical experiments controlled for data distribution, regression method, and hyper-parameter optimization.
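As a toy illustration of the kind of symmetry-invariant representation and interpolation discussed above (not one of the representations reviewed in the paper), the sketch below builds a descriptor from sorted inverse pairwise distances, which is invariant to translations, rotations, and atom permutations, and feeds it to a standard kernel regressor; the names, sizes, and hyper-parameters are assumptions.

```python
# Hedged sketch: a simple invariant two-body descriptor plus kernel interpolation.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def descriptor(coords, size=64):
    """coords: (n_atoms, 3) array. Returns a fixed-length, symmetry-invariant vector.
    Chemical species are ignored for brevity."""
    diff = coords[:, None, :] - coords[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)
    inv = 1.0 / dists[np.triu_indices(len(coords), k=1)]   # two-body terms only
    vals = np.sort(inv)[::-1][:size]                        # sorting gives permutation invariance
    vec = np.zeros(size)
    vec[:len(vals)] = vals
    return vec

# interpolating reference energies with kernel ridge regression (assumed setup):
# X = np.stack([descriptor(c) for c in reference_geometries])
# model = KernelRidge(kernel="rbf", alpha=1e-6, gamma=0.1).fit(X, reference_energies)
```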