Factorized Machine Learning for Performance Modeling of Massively Parallel Heterogeneous Physical Simulations


Abstract

We demonstrate neural-network runtime prediction for complex, many-parameter, massively parallel, heterogeneous-physics simulations running on cloud-based MPI clusters. Because individual simulations are so expensive, it is crucial to train the network on a limited dataset despite the potentially large input space of the physics at each point in the spatial domain. We achieve this with a two-part strategy. First, we perform data-driven static load balancing using regression coefficients extracted from small simulations, which both improves parallel performance and reduces the dependence of the runtime on the precise spatial layout of the heterogeneous physics. Second, we divide the execution time of these load-balanced simulations into computation and communication, factor crude asymptotic scalings out of each term, and train neural nets to predict the remaining factor coefficients. This strategy is implemented for Meep, a popular and complex open-source electrodynamics simulation package, and is validated on heterogeneous simulations drawn from published engineering models.
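To make the second step concrete, the following is a minimal sketch of the factored runtime model, not the authors' implementation. The feature columns, the asymptotic forms (cells × timesteps / processes for computation, boundary surface × timesteps / processes for communication), and the use of scikit-learn's MLPRegressor are illustrative assumptions; only the idea of fitting neural nets to the residual coefficients after dividing out crude asymptotic scalings is taken from the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder calibration data: one row per small benchmark simulation.
# In practice the columns would be simulation parameters (cell count,
# timesteps, process count, boundary surface area, material mix, ...);
# here they are random stand-ins purely for illustration.
rng = np.random.default_rng(0)
features = rng.uniform(0.1, 1.0, size=(200, 6))
t_comp = rng.uniform(1.0, 100.0, size=200)   # measured computation time (s)
t_comm = rng.uniform(0.1, 10.0, size=200)    # measured communication time (s)

# Crude asymptotic scalings (assumed forms, not Meep's actual cost model):
# computation  ~ cells * timesteps / processes
# communication ~ boundary surface * timesteps / processes
cells, steps, procs, surface = (features[:, i] for i in range(4))
asym_comp = cells * steps / procs
asym_comm = surface * steps / procs

# Fit small neural nets to the residual (dimensionless) factor coefficients
# rather than to the raw runtimes.
comp_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
comm_net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
comp_net.fit(features, t_comp / asym_comp)
comm_net.fit(features, t_comm / asym_comm)

def predict_runtime(x):
    """Predicted runtime = sum over terms of (learned coefficient * asymptotic scaling)."""
    x = np.atleast_2d(x)
    c, s, p, a = x[0, 0], x[0, 1], x[0, 2], x[0, 3]
    return (comp_net.predict(x)[0] * c * s / p
            + comm_net.predict(x)[0] * a * s / p)

print(predict_runtime(features[0]))
```

Predicting only the coefficients keeps the learning problem small, which matters when each training simulation is expensive: the gross scaling with problem size and process count is handled analytically, and the network only has to capture the heterogeneity-dependent correction.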
