
QBoost for regression problems: solving partial differential equations

Posted by Caio Goes
Publication date: 2021
Research field: Physics
Paper language: English





A hybrid algorithm based on machine learning and quantum ensemble learning is proposed that is capable of finding a solution to a partial differential equation with good precision and favorable scaling in the required number of qubits. The classical part consists of training several regressors (weak learners), each capable of solving a partial differential equation using machine learning. The quantum part consists of adapting the QBoost algorithm to solve regression problems. We have successfully applied our framework to solve the 1D Burgers equation with viscosity, showing that the quantum ensemble method improves the solutions produced by the weak learners. We also implemented the algorithm on D-Wave Systems hardware, confirming that the quantum solution outperforms the simulated annealing and exact solver methods, given the memory limitations of the classical computer used in the comparison.
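
To make the ensemble step concrete, the selection of weak learners can be cast as a quadratic unconstrained binary optimization (QUBO) over binary inclusion weights, which is the kind of problem a D-Wave annealer samples. The sketch below is not the authors' implementation: it uses scikit-learn decision trees as stand-in weak learners on an illustrative 1D target (not the Burgers equation), and brute-forces the QUBO that a quantum or simulated annealer would otherwise minimize; the regularization strength lam and all model sizes are arbitrary choices.

# Minimal sketch (not the paper's code): QBoost-style ensemble selection for
# regression, with the QUBO minimized by brute force instead of an annealer.
import itertools
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * x).ravel()              # stand-in target (not Burgers)

# Train T shallow regressors on bootstrap samples: the "weak learners".
T = 8
learners = []
for _ in range(T):
    idx = rng.integers(0, len(x), len(x))
    learners.append(DecisionTreeRegressor(max_depth=2).fit(x[idx], y[idx]))
H = np.column_stack([h.predict(x) for h in learners])     # shape (N, T)

# QUBO coefficients for  ||H w / T - y||^2 + lam * sum_i w_i ,  w_i in {0, 1}.
lam = 0.01
Q = (H.T @ H) / T**2
linear = -2.0 * (H.T @ y) / T + lam

# Brute-force the 2^T binary weight vectors (the annealer's job on hardware).
best_w, best_E = None, np.inf
for bits in itertools.product([0, 1], repeat=T):
    w = np.array(bits, dtype=float)
    E = w @ Q @ w + linear @ w
    if E < best_E:
        best_w, best_E = w, E

ensemble = H @ best_w / T
print("selected learners:", best_w.astype(int))
print("ensemble MSE:", np.mean((ensemble - y) ** 2))
print("mean single-learner MSE:", np.mean((H - y[:, None]) ** 2))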




Read also

Synchronization overheads pose a major challenge as applications advance towards extreme scales. In current large-scale algorithms, synchronization as well as data communication delay the parallel computations at each time step in a time-dependent partial differential equation (PDE) solver. This creates a new scaling wall when moving towards exascale. We present a weakly-synchronous algorithm based on novel asynchrony-tolerant (AT) finite-difference schemes that relax synchronization at a mathematical level. We utilize remote memory access programming schemes that have been shown to provide significant speedup on modern supercomputers, to efficiently implement communications suitable for AT schemes, and compare to two-sided communications that are state-of-practice. We present results from simulations of Burgers equation as a model of multi-scale strongly non-linear dynamical systems. Our algorithm demonstrates excellent scalability of the new AT schemes for large-scale computing, with a speedup of up to $3.3$x in communication time and $2.19$x in total runtime. We expect that such schemes can form the basis for exascale PDE algorithms.
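
As a rough illustration of relaxed synchronization (not the paper's AT schemes or its RMA implementation), the toy script below advances 1D viscous Burgers with explicit finite differences twice: once with fully up-to-date neighbor values, and once where the values at two block interfaces lag by one time step, mimicking halo data that arrives late. The actual AT schemes modify the stencil weights so that such delays do not degrade the order of accuracy; the grid size, viscosity and step count here are arbitrary.

# Toy serial illustration of computing with delayed neighbor values.
import numpy as np

N, nu = 256, 0.05
dx = 2 * np.pi / N
dt = 0.2 * dx**2 / nu                     # conservative explicit step
x = np.arange(N) * dx
steps = 400

def rhs(left, center, right):
    """du/dt for viscous Burgers from (possibly stale) neighbor values."""
    adv = -center * (right - left) / (2 * dx)
    diff = nu * (right - 2 * center + left) / dx**2
    return adv + diff

# Synchronous reference run (periodic domain).
u = np.sin(x).copy()
for _ in range(steps):
    u = u + dt * rhs(np.roll(u, 1), u, np.roll(u, -1))

# "Asynchronous" run: two blocks; each block's edge points see the other
# block's values delayed by one time step.
v_new = np.sin(x).copy()
v_old = v_new.copy()
half = N // 2
for _ in range(steps):
    left = np.roll(v_new, 1)
    right = np.roll(v_new, -1)
    # stale halo values at the two block interfaces
    left[0], left[half] = v_old[-1], v_old[half - 1]
    right[-1], right[half - 1] = v_old[0], v_old[half]
    v_old, v_new = v_new, v_new + dt * rhs(left, v_new, right)

print("max |sync - async| :", np.abs(u - v_new).max())
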
We describe a neural-based method for generating exact or approximate solutions to differential equations in the form of mathematical expressions. Unlike other neural methods, our system returns symbolic expressions that can be interpreted directly. Our method uses a neural architecture for learning mathematical expressions to optimize a customizable objective, and is scalable, compact, and easily adaptable for a variety of tasks and configurations. The system has been shown to effectively find exact or approximate symbolic solutions to various differential equations with applications in natural sciences. In this work, we highlight how our method applies to partial differential equations over multiple variables and more complex boundary and initial value conditions.
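
One way to picture the "customizable objective" such a system optimizes is the residual of a candidate closed-form expression, evaluated symbolically. The snippet below only illustrates that scoring idea, not the paper's neural architecture or search procedure; the heat-equation example, the candidate expression and the sample grid are arbitrary choices.

# Residual scoring of a candidate symbolic solution for u_t = u_xx with
# u(x, 0) = sin(x). The candidate exp(-t)*sin(x) is exact, so both terms vanish.
import sympy as sp

x, t = sp.symbols("x t")
candidate = sp.exp(-t) * sp.sin(x)

pde_residual = sp.simplify(sp.diff(candidate, t) - sp.diff(candidate, x, 2))
ic_residual = sp.simplify(candidate.subs(t, 0) - sp.sin(x))

# A scalar score a search procedure could minimize: sample both residuals
# on a small grid and sum their squares.
pts = [(xi, ti) for xi in (0.1, 0.5, 1.0) for ti in (0.0, 0.2, 0.5)]
score = sum(float(pde_residual.subs({x: xi, t: ti}))**2 +
            float(ic_residual.subs({x: xi, t: ti}))**2 for xi, ti in pts)

print("PDE residual:", pde_residual)   # 0
print("IC residual :", ic_residual)    # 0
print("score       :", score)          # 0.0
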
Quantum computers can produce a quantum encoding of the solution of a system of differential equations exponentially faster than a classical algorithm can produce an explicit description. However, while high-precision quantum algorithms for linear ordinary differential equations are well established, the best previous quantum algorithms for linear partial differential equations (PDEs) have complexity $\mathrm{poly}(1/\epsilon)$, where $\epsilon$ is the error tolerance. By developing quantum algorithms based on adaptive-order finite difference methods and spectral methods, we improve the complexity of quantum algorithms for linear PDEs to be $\mathrm{poly}(d, \log(1/\epsilon))$, where $d$ is the spatial dimension. Our algorithms apply high-precision quantum linear system algorithms to systems whose condition numbers and approximation errors we bound. We develop a finite difference algorithm for the Poisson equation and a spectral algorithm for more general second-order elliptic equations.
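
For intuition about the linear systems involved, the sketch below builds the classical second-order finite-difference discretization of the 2D Poisson equation and reports how its condition number grows under grid refinement; it is this condition number (together with the target precision) that governs the cost of the quantum linear system solver. This is only the classical side of the construction, with illustrative grid sizes, not the quantum algorithm itself.

# Classical side only: 5-point Laplacian for -Δu = f on the unit square
# with zero Dirichlet boundary, and its condition number as the grid refines.
import numpy as np

def poisson_matrix_2d(n):
    """(n*n) x (n*n) matrix for the 5-point Laplacian on an n x n interior grid."""
    h = 1.0 / (n + 1)
    T = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Laplacian
    I = np.eye(n)
    return (np.kron(I, T) + np.kron(T, I)) / h**2

for n in (4, 8, 16):
    A = poisson_matrix_2d(n)
    print(f"n = {n:2d}  unknowns = {n*n:4d}  cond(A) = {np.linalg.cond(A):.1f}")
# The condition number scales like O(1/h^2); a quantum linear system
# algorithm's runtime depends on it together with the target precision.
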
Quanhui Zhu, Jiang Yang (2021)
At present, deep learning based methods are being employed to resolve the computational challenges of high-dimensional partial differential equations (PDEs). But the computation of the high order derivatives of neural networks is costly, and high order derivatives lack robustness for training purposes. We propose a novel approach to solving PDEs with high order derivatives by simultaneously approximating the function value and its derivatives. We introduce intermediate variables to rewrite the PDEs into a system of low order differential equations, as is done in the local discontinuous Galerkin method. The intermediate variables and the solutions to the PDEs are simultaneously approximated by a multi-output deep neural network. By taking the residual of the system as a loss function, we can optimize the network parameters to approximate the solution. The whole process relies on low order derivatives. Numerous numerical examples demonstrate that our local deep learning method is efficient, robust, flexible, and particularly well-suited for high-dimensional PDEs with high order derivatives.
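
A minimal sketch of the low-order-system idea (not the authors' code) is shown below for the toy problem u''(x) = f(x) on (0,1) with homogeneous Dirichlet boundary conditions: an auxiliary variable p = u' turns the equation into a first-order system, a single two-output network approximates (u, p), and the loss only ever needs first derivatives. The network width, optimizer settings and sample counts are arbitrary.

# Multi-output network for the first-order system u' = p, p' = f,
# with f chosen so that the exact solution is u = sin(pi x).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),              # outputs: (u, p) with p ≈ u'
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
f = lambda x: -torch.pi**2 * torch.sin(torch.pi * x)

for step in range(3000):
    x = torch.rand(256, 1, requires_grad=True)
    u, p = net(x).split(1, dim=1)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    dp = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    xb = torch.tensor([[0.0], [1.0]])
    ub = net(xb)[:, :1]                  # boundary values u(0), u(1)
    loss = ((du - p) ** 2).mean() + ((dp - f(x)) ** 2).mean() + (ub ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

xt = torch.linspace(0, 1, 11).reshape(-1, 1)
print("max |u_net - sin(pi x)| :",
      (net(xt)[:, :1] - torch.sin(torch.pi * xt)).abs().max().item())
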
We propose a quantum algorithm to solve systems of nonlinear differential equations. Using a quantum feature map encoding, we define functions as expectation values of parametrized quantum circuits. We use automatic differentiation to represent function derivatives in an analytical form as differentiable quantum circuits (DQCs), thus avoiding inaccurate finite difference procedures for calculating gradients. We describe a hybrid quantum-classical workflow where DQCs are trained to satisfy differential equations and specified boundary conditions. As a particular example setting, we show how this approach can implement a spectral method for solving differential equations in a high-dimensional feature space. From a technical perspective, we design a Chebyshev quantum feature map that offers a powerful basis set of fitting polynomials and possesses rich expressivity. We simulate the algorithm to solve an instance of the Navier-Stokes equations, and compute density, temperature and velocity profiles for the fluid flow in a convergent-divergent nozzle.
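
Two ingredients from the technical part of this abstract can be simulated classically in a few lines: a Chebyshev feature map, where the Z expectation of RY(2n arccos x) acting on |0> equals the Chebyshev polynomial T_{2n}(x), and the parameter-shift rule that yields analytic circuit derivatives without finite differences. The single-qubit numpy toy below illustrates only these two pieces, not the full DQC training workflow or the Navier-Stokes example.

# Single-qubit toy: Chebyshev feature map expectation and parameter-shift derivative.
import numpy as np

def ry(phi):
    return np.array([[np.cos(phi / 2), -np.sin(phi / 2)],
                     [np.sin(phi / 2),  np.cos(phi / 2)]])

Z = np.diag([1.0, -1.0])
ket0 = np.array([1.0, 0.0])

def expval_z(x, theta, n=1):
    """<Z> after the feature map RY(2*n*arccos x) and a variational RY(theta)."""
    state = ry(theta) @ ry(2 * n * np.arccos(x)) @ ket0
    return state @ Z @ state

# (1) With theta = 0 the model outputs Chebyshev polynomials of x.
x = 0.3
print(expval_z(x, 0.0, n=2), "vs T_4(x) =", 8 * x**4 - 8 * x**2 + 1)

# (2) Parameter-shift rule: the derivative in theta from two shifted circuits.
theta = 0.7
shift = (expval_z(x, theta + np.pi / 2) - expval_z(x, theta - np.pi / 2)) / 2
numeric = (expval_z(x, theta + 1e-6) - expval_z(x, theta - 1e-6)) / 2e-6
print("parameter-shift:", shift, " numeric:", numeric)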
