Synchronization overheads pose a major challenge as applications advance towards extreme scales. In current large-scale algorithms, synchronization as well as data communication delay the parallel computations at each time step of a time-dependent partial differential equation (PDE) solver. This creates a new scaling wall when moving towards exascale. We present a weakly-synchronous algorithm based on novel asynchrony-tolerant (AT) finite-difference schemes that relax synchronization at the mathematical level. We utilize remote memory access (RMA) programming, which has been shown to provide significant speedups on modern supercomputers, to efficiently implement the communication pattern suited to AT schemes, and compare it to the state-of-practice two-sided communication. We present results from simulations of the Burgers equation as a model of multi-scale, strongly non-linear dynamical systems. Our algorithm demonstrates excellent scalability of the new AT schemes for large-scale computing, with a speedup of up to $3.3\times$ in communication time and $2.19\times$ in total runtime. We expect that such schemes can form the basis for exascale PDE algorithms.
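The abstract does not spell out the discretization, so the following is only a rough sketch (in JAX, with hypothetical grid size, time step, and delay) of where stale halo values would enter an explicit finite-difference update of the 1-D viscous Burgers equation; the paper's actual AT coefficient corrections and RMA communication are not reproduced here.

```python
# Hypothetical sketch (not the paper's implementation): one explicit step of
# u_t + u u_x = nu u_xx on a subdomain whose halo (neighbor) values may lag
# by a few time steps, the staleness that AT schemes are built to tolerate.
import jax.numpy as jnp

def burgers_step(u, left_halo, right_halo, dx, dt, nu):
    """One FTCS step; left_halo/right_halo may be stale copies of neighbor data."""
    u_ext = jnp.concatenate([jnp.atleast_1d(left_halo), u, jnp.atleast_1d(right_halo)])
    ux  = (u_ext[2:] - u_ext[:-2]) / (2.0 * dx)                  # central first derivative
    uxx = (u_ext[2:] - 2.0 * u_ext[1:-1] + u_ext[:-2]) / dx**2   # second derivative
    return u + dt * (-u * ux + nu * uxx)

# Toy driver: the "remote" halos are refreshed only every `delay` steps,
# mimicking relaxed synchronization; an AT scheme would adjust the stencil
# weights to retain accuracy under this staleness (not shown here).
nx, dx, dt, nu, delay = 64, 1.0 / 64, 1e-4, 0.01, 3
x = jnp.linspace(0.0, 1.0, nx, endpoint=False)
u = jnp.sin(2.0 * jnp.pi * x)
left, right = u[-1], u[0]            # periodic "neighbors" stand in for other ranks
for n in range(100):
    if n % delay == 0:               # synchronize halos only occasionally
        left, right = u[-1], u[0]
    u = burgers_step(u, left, right, dx, dt, nu)
```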
A hybrid algorithm based on machine learning and quantum ensemble learning is proposed, capable of finding a solution to a partial differential equation with good precision and favorable scaling in the required number of qubits. The classical
We present and experimentally realize a quantum algorithm for efficiently solving the following problem: given an $N\times N$ matrix $\mathcal{M}$, an $N$-dimensional vector $\textbf{\emph{b}}$, and an initial vector $\textbf{\emph{x}}(0)$, obtain a target
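The abstract is cut off before stating the target; assuming, purely for illustration, that the target is the solution $\textbf{\emph{x}}(t)$ of the linear differential equation $d\textbf{\emph{x}}/dt = \mathcal{M}\textbf{\emph{x}} + \textbf{\emph{b}}$, a classical reference computation for small $N$ could look like the sketch below; the quantum algorithm itself is not reproduced.

```python
# Hypothetical classical baseline (not the quantum algorithm): assuming the
# target is x(t) solving dx/dt = M x + b with x(0) given and M invertible.
import jax.numpy as jnp
from jax.scipy.linalg import expm

def solve_linear_ode(M, b, x0, t):
    """Closed form x(t) = e^{Mt} x0 + M^{-1} (e^{Mt} - I) b."""
    eMt = expm(M * t)
    return eMt @ x0 + jnp.linalg.solve(M, (eMt - jnp.eye(M.shape[0])) @ b)

M  = jnp.array([[0.0, 1.0], [-1.0, 0.0]])   # toy 2x2 example
b  = jnp.array([0.5, 0.0])
x0 = jnp.array([1.0, 0.0])
print(solve_linear_ode(M, b, x0, t=1.0))
```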
We describe a neural-based method for generating exact or approximate solutions to differential equations in the form of mathematical expressions. Unlike other neural methods, our system returns symbolic expressions that can be interpreted directly.
At present, deep-learning-based methods are being employed to address the computational challenges of high-dimensional partial differential equations (PDEs). However, the computation of high-order derivatives of neural networks is costly, and high ord
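As an illustration only (the network, weights, and derivative order here are hypothetical, not the paper's method), the sketch below shows the kind of nested automatic differentiation that makes high-order derivatives of a neural network expensive: each additional derivative order requires another full differentiation pass through the network.

```python
# Illustrative only: nested automatic differentiation of a tiny network,
# the kind of high-order derivative computation the abstract calls costly.
import jax
import jax.numpy as jnp

def net(params, x):
    """Tiny scalar-in, scalar-out MLP u(x); weights are hypothetical."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(w1 * x + b1)
    return jnp.sum(w2 * h) + b2

params = (jnp.ones(8), jnp.zeros(8), jnp.ones(8) * 0.1, 0.0)

du_dx   = jax.grad(net, argnums=1)      # first derivative w.r.t. x
d2u_dx2 = jax.grad(du_dx, argnums=1)    # second derivative: another full AD pass

print(net(params, 0.3), du_dx(params, 0.3), d2u_dx2(params, 0.3))
```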
In this work we apply the Deep Galerkin Method (DGM) described in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations that arise in quantitative finance applications including option pricing, optimal execution, mean
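As described in Sirignano and Spiliopoulos (2018), the Deep Galerkin Method trains a network to minimize the PDE residual on randomly sampled points rather than on a mesh. The sketch below shows that residual loss for a placeholder PDE $u_t = u_{xx}$ only; the finance PDEs treated in the paper, and the network architecture and sampling scheme, are not reproduced, and all names and sizes here are hypothetical.

```python
# Rough sketch of a DGM-style residual loss for the placeholder PDE u_t = u_xx;
# a full DGM loss would add initial/terminal and boundary condition terms.
import jax
import jax.numpy as jnp

def u(params, t, x):
    """Tiny MLP surrogate u(t, x)."""
    w1, b1, w2 = params
    h = jnp.tanh(w1 @ jnp.array([t, x]) + b1)
    return jnp.dot(w2, h)

def residual(params, t, x):
    u_t  = jax.grad(u, argnums=1)(params, t, x)
    u_xx = jax.grad(jax.grad(u, argnums=2), argnums=2)(params, t, x)
    return u_t - u_xx                         # interior PDE residual

def dgm_loss(params, ts, xs):
    # Mean squared residual over randomly sampled interior points.
    r = jax.vmap(residual, in_axes=(None, 0, 0))(params, ts, xs)
    return jnp.mean(r ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3, k4 = jax.random.split(key, 4)
params = (jax.random.normal(k1, (16, 2)), jnp.zeros(16), jax.random.normal(k2, (16,)))
ts = jax.random.uniform(k3, (128,))
xs = jax.random.uniform(k4, (128,))
print(dgm_loss(params, ts, xs))
grads = jax.grad(dgm_loss)(params, ts, xs)    # ready for any gradient-based optimizer
```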