
A scalable weakly-synchronous algorithm for solving partial differential equations

Posted by Aditya Konduri
Publication date: 2019
Research field: Physics
Paper language: English





Synchronization overheads pose a major challenge as applications advance towards extreme scales. In current large-scale algorithms, synchronization and data communication delay the parallel computations at each time step of a time-dependent partial differential equation (PDE) solver, creating a new scaling wall on the path to exascale. We present a weakly-synchronous algorithm based on novel asynchrony-tolerant (AT) finite-difference schemes that relax synchronization at the mathematical level. To efficiently implement the communications required by AT schemes, we utilize remote memory access (RMA) programming, which has been shown to provide significant speedups on modern supercomputers, and compare it against state-of-practice two-sided communications. We present results from simulations of the Burgers equation as a model of multi-scale, strongly non-linear dynamical systems. Our algorithm demonstrates excellent scalability of the new AT schemes for large-scale computing, with speedups of up to $3.3\times$ in communication time and $2.19\times$ in total runtime. We expect that such schemes can form the basis for exascale PDE algorithms.
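To make the communication-relaxation idea concrete, the following is a minimal sketch, not the paper's implementation: it solves the 1D viscous Burgers equation with an explicit central-difference scheme, emulates a delay of k time steps on one halo value within a single process, and corrects the stale value with a linear-in-time extrapolation, one simple AT-style correction. The grid size, viscosity, time step, and delay are all illustrative assumptions.

import numpy as np

nx, nu = 128, 0.05                     # assumed grid size and viscosity
dx = 2 * np.pi / nx
dt = 0.2 * dx**2 / nu                  # conservative explicit time step
k = 3                                  # emulated halo delay, in time steps

x = np.arange(nx) * dx
u = np.sin(x)                          # initial condition
hist = [u.copy()]                      # past time levels, newest last

for n in range(2000):
    # The "remote" left neighbor of point 0 is only known k steps late.
    if len(hist) > k + 1:
        old   = hist[-1 - k][-1]       # u^{n-k} at the remote point
        older = hist[-2 - k][-1]       # u^{n-k-1} at the remote point
        # AT-style correction: extrapolate the stale halo value forward,
        # u^n ~ (k + 1) * u^{n-k} - k * u^{n-k-1}.
        left_halo = (k + 1) * old - k * older
    else:
        left_halo = hist[-1][-1]       # warm-up: latest available value

    um = np.roll(u, 1)
    um[0] = left_halo                  # delayed neighbor across the "rank" cut
    up = np.roll(u, -1)
    u = u + dt * (-u * (up - um) / (2 * dx)
                  + nu * (up - 2 * u + um) / dx**2)
    hist.append(u.copy())
    if len(hist) > k + 2:
        hist.pop(0)                    # keep only the levels the scheme needs

In an actual distributed solver, the delayed time levels would arrive through one-sided RMA windows (e.g. MPI_Put into a halo buffer) rather than a local history list.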


Read also

A hybrid algorithm based on machine learning and quantum ensemble learning is proposed that is capable of finding a solution to a partial differential equation with good precision and favorable scaling in the required number of qubits. The classical part consists of training several regressors (weak learners), each capable of solving a partial differential equation using machine learning. The quantum part consists of adapting the QBoost algorithm to solve regression problems. We have successfully applied our framework to solve the 1D Burgers equation with viscosity, showing that the quantum ensemble method indeed improves the solutions produced by the weak learners. We also implemented the algorithm on a D-Wave system, confirming that the quantum solution outperforms the simulated annealing and exact solver methods, given the memory limitations of the classical computer used in the comparison.
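The ensemble-selection step can be pictured classically. The sketch below is an illustration under assumptions, not the paper's code: shallow-tree weak learners are fit to samples of a steep, Burgers-like profile, and the binary inclusion weights are chosen by brute force over a QBoost-style QUBO objective, standing in for the quantum annealer. The target function, ensemble size, and penalty are all assumed.

import itertools
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (400, 1))
y = np.tanh(10 * X[:, 0])              # assumed steep, Burgers-like target

N, lam = 8, 1e-3                       # assumed ensemble size, sparsity penalty
learners, H = [], []
for _ in range(N):
    idx = rng.integers(0, len(X), len(X))        # bootstrap resample
    h = DecisionTreeRegressor(max_depth=2).fit(X[idx], y[idx])
    learners.append(h)
    H.append(h.predict(X))
H = np.array(H)                        # (N, n_samples) weak-learner predictions

# QBoost-style QUBO: pick binary inclusion weights w minimizing the squared
# error of the averaged ensemble plus a sparsity penalty. An annealer would
# minimize this objective; for N = 8, brute force over 2^N settings suffices.
best_w, best_obj = None, np.inf
for bits in itertools.product([0, 1], repeat=N):
    w = np.array(bits)
    s = max(w.sum(), 1)                # average over the selected learners
    obj = np.mean((w @ H / s - y) ** 2) + lam * w.sum()
    if obj < best_obj:
        best_w, best_obj = w, obj
print("selected learners:", best_w, "objective:", float(best_obj))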
We present and experimentally realize a quantum algorithm for efficiently solving the following problem: given an $N\times N$ matrix $\mathcal{M}$, an $N$-dimensional vector $\mathbf{b}$, and an initial vector $\mathbf{x}(0)$, obtain a target vector $\mathbf{x}(t)$ as a function of time $t$ according to the constraint $d\mathbf{x}(t)/dt=\mathcal{M}\mathbf{x}(t)+\mathbf{b}$. We show that our algorithm exhibits an exponential speedup over its classical counterpart in certain circumstances. In addition, we demonstrate our quantum algorithm for a $4\times 4$ linear differential equation using a 4-qubit nuclear magnetic resonance quantum information processor. Our algorithm provides a key technique for solving many important problems which rely on the solutions to linear differential equations.
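For reference, when $\mathcal{M}$ is invertible the target vector has the closed form $\mathbf{x}(t)=e^{\mathcal{M}t}\mathbf{x}(0)+\mathcal{M}^{-1}(e^{\mathcal{M}t}-I)\mathbf{b}$, which gives a classical check for any solver's output. A small sketch with an illustrative $4\times 4$ system (not the matrices of the paper's NMR demonstration):

import numpy as np
from scipy.linalg import expm, solve
from scipy.integrate import solve_ivp

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) - 2 * np.eye(4)   # illustrative 4x4 matrix
b = rng.normal(size=4)
x0 = rng.normal(size=4)
t = 1.5

# Closed form for invertible M: x(t) = e^{Mt} x0 + M^{-1} (e^{Mt} - I) b.
E = expm(M * t)
x_closed = E @ x0 + solve(M, (E - np.eye(4)) @ b)

# Cross-check by direct numerical integration of dx/dt = M x + b.
sol = solve_ivp(lambda _, x: M @ x + b, (0, t), x0, rtol=1e-10, atol=1e-12)
print(np.max(np.abs(x_closed - sol.y[:, -1])))   # agreement to ~1e-8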
We describe a neural-based method for generating exact or approximate solutions to differential equations in the form of mathematical expressions. Unlike other neural methods, our system returns symbolic expressions that can be interpreted directly. Our method uses a neural architecture for learning mathematical expressions to optimize a customizable objective, and is scalable, compact, and easily adaptable for a variety of tasks and configurations. The system has been shown to effectively find exact or approximate symbolic solutions to various differential equations with applications in natural sciences. In this work, we highlight how our method applies to partial differential equations over multiple variables and more complex boundary and initial value conditions.
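A virtue of symbolic output is that it can be verified directly. The following minimal sketch shows such a check with sympy; the heat equation and candidate solution are illustrative, not the paper's examples:

import sympy as sp

x, t = sp.symbols('x t')
u = sp.exp(-t) * sp.sin(x)                    # candidate symbolic solution

residual = sp.diff(u, t) - sp.diff(u, x, 2)   # u_t - u_xx (heat equation)
print(sp.simplify(residual))                  # 0: the PDE holds exactly
print(sp.simplify(u.subs(t, 0) - sp.sin(x)))  # 0: initial condition u(x,0)=sin(x)
print(u.subs(x, 0), u.subs(x, sp.pi))         # 0 0: boundary values at x = 0, pi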
Quanhui Zhu, Jiang Yang (2021)
At present, deep learning based methods are being employed to resolve the computational challenges of high-dimensional partial differential equations (PDEs). But the computation of high order derivatives of neural networks is costly, and high order derivatives lack robustness for training purposes. We propose a novel approach to solving PDEs with high order derivatives by simultaneously approximating the function value and its derivatives. We introduce intermediate variables to rewrite the PDE into a system of low order differential equations, as is done in the local discontinuous Galerkin method. The intermediate variables and the solution to the PDE are simultaneously approximated by a multi-output deep neural network. By taking the residual of the system as a loss function, we can optimize the network parameters to approximate the solution. The whole process relies only on low order derivatives. Numerous numerical examples demonstrate that our local deep learning method is efficient, robust, flexible, and particularly well-suited for high-dimensional PDEs with high order derivatives.
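The rewriting step is straightforward to illustrate. The sketch below is an illustration under assumed details, not the authors' code: for the toy problem $u''=f$ on $[0,2\pi]$, introducing the intermediate variable $p=u'$ yields the first-order system $p-u_x=0$, $p_x-f=0$, and a two-output network approximates $(u,p)$ so that the loss involves only first derivatives:

import torch

torch.manual_seed(0)
f = lambda xs: -torch.sin(xs)          # forcing chosen so that u = sin(x)

net = torch.nn.Sequential(             # two outputs: (u, p) with p = u'
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    xs = (2 * torch.pi * torch.rand(256, 1)).requires_grad_(True)
    u, p = net(xs).split(1, dim=1)
    u_x = torch.autograd.grad(u.sum(), xs, create_graph=True)[0]
    p_x = torch.autograd.grad(p.sum(), xs, create_graph=True)[0]
    ub = net(torch.tensor([[0.0], [2 * torch.pi]]))[:, :1]
    loss = ((p - u_x) ** 2).mean() \
         + ((p_x - f(xs)) ** 2).mean() \
         + (ub ** 2).mean()            # Dirichlet: u(0) = u(2*pi) = 0
    opt.zero_grad(); loss.backward(); opt.step()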
In this work we apply the Deep Galerkin Method (DGM) described in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations that arise in quantitative finance applications, including option pricing, optimal execution, and mean field games. The main idea behind DGM is to represent the unknown function of interest using a deep neural network. A key feature of this approach is that, unlike other commonly used numerical approaches such as finite difference methods, it is mesh-free. As such, it does not suffer (as much as other numerical methods do) from the curse of dimensionality associated with high-dimensional PDEs and PDE systems. The main goals of this paper are to elucidate the features, capabilities and limitations of DGM by analyzing aspects of its implementation for a number of different PDEs and PDE systems. Additionally, we present: (1) a brief overview of PDEs in quantitative finance along with numerical methods for solving them; (2) a brief overview of deep learning and, in particular, the notion of neural networks; (3) a discussion of the theoretical foundations of DGM, with a focus on the justification of why this method is expected to perform well.
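The core DGM recipe is compact in code. This sketch is an illustration, not the paper's implementation: a network serves as a mesh-free trial solution of the 1D heat equation $u_t=u_{xx}$, and fresh random space-time points are sampled at every optimization step in place of a grid:

import torch

torch.manual_seed(0)
net = torch.nn.Sequential(             # trial solution u(x, t)
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(5000):
    # Interior: fresh random (x, t) samples every step -- no mesh anywhere.
    xt = (torch.rand(512, 2) * torch.tensor([torch.pi, 1.0])).requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, :1], g[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    pde = ((u_t - u_xx) ** 2).mean()   # residual of u_t = u_xx

    x0 = torch.rand(128, 1) * torch.pi # initial condition u(x, 0) = sin(x)
    ic = ((net(torch.cat([x0, torch.zeros_like(x0)], 1))
           - torch.sin(x0)) ** 2).mean()

    tb = torch.rand(128, 1)            # boundary: u(0, t) = u(pi, t) = 0
    xb = torch.cat([torch.zeros_like(tb), torch.full_like(tb, torch.pi)])
    bc = (net(torch.cat([xb, tb.repeat(2, 1)], 1)) ** 2).mean()

    loss = pde + ic + bc
    opt.zero_grad(); loss.backward(); opt.step()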