We study the asymptotic behavior of solutions of semi-linear PDEs. Neither periodicity nor ergodicity is assumed; instead, we assume that the coefficients admit limits in the Ces\`{a}ro sense, in which case the averaged coefficients may be discontinuous. We use a probabilistic approach based on the weak convergence, in the S-topology, of the associated backward stochastic differential equations (BSDEs) to derive the averaged PDE. However, since the averaged coefficients are discontinuous, the classical notion of viscosity solution is not defined for the averaged PDE. We therefore use the notion of $L^p$-viscosity solution introduced in \cite{CCKS} and employ BSDE techniques to establish the existence of an $L^p$-viscosity solution for the averaged PDE. We establish weak continuity of the flow of the limiting diffusion process and relate the limiting PDE to the backward stochastic differential equation via the representation of the $L^p$-viscosity solution.
Recent machine learning algorithms dedicated to solving semi-linear PDEs are improved by using different neural network architectures and different parameterizations. These algorithms are compared to a new one that solves a fixed point problem by using deep learning techniques. This new algorithm appears to be competitive in terms of accuracy with the best existing algorithms.
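The fixed-point formulation underlying such algorithms can be illustrated without neural networks. The sketch below, a hypothetical toy example not taken from the article, solves a discretized semilinear boundary-value problem $-u'' = f(u)$ by Picard iteration, i.e. by repeatedly solving the linear problem with the nonlinearity frozen at the previous iterate:

```python
import numpy as np

# Picard (fixed-point) iteration for the semilinear BVP -u'' = sin(u) + 1
# on (0, 1) with u(0) = u(1) = 0, discretized by central differences.
n = 99                      # number of interior grid points
h = 1.0 / (n + 1)
# Tridiagonal matrix representing -u'' via second-order central differences
A = (np.diag(np.full(n, 2.0))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

u = np.zeros(n)
for _ in range(100):
    # Fixed-point map: u_{k+1} = A^{-1} f(u_k)
    u_new = np.linalg.solve(A, np.sin(u) + 1.0)
    if np.max(np.abs(u_new - u)) < 1e-10:
        u = u_new
        break
    u = u_new
```

Deep-learning solvers replace the grid function by a neural network and the linear solve by a training step, but the contraction structure of the fixed-point map is the same.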
A new class of explicit Milstein schemes, which approximate stochastic differential equations (SDEs) with superlinearly growing drift and diffusion coefficients, is proposed in this article. It is shown, under very mild conditions, that these explicit schemes converge in $\mathcal{L}^p$ to the solution of the corresponding SDEs with optimal rate.
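A minimal sketch of the taming idea, assuming one common variant (a tamed drift increment plus the usual Milstein correction, not necessarily the exact scheme of the article), for the scalar SDE $dX_t = -X_t^3\,dt + X_t\,dW_t$:

```python
import numpy as np

# One path of a tamed Milstein scheme for dX = -X^3 dt + X dW_t.
# Taming the superlinear drift prevents a single large Gaussian
# increment from causing the explicit scheme to blow up.
rng = np.random.default_rng(0)
T, N = 1.0, 1000
dt = T / N
x = 1.0
for _ in range(N):
    dW = rng.normal(0.0, np.sqrt(dt))
    drift = -x**3
    x = (x + drift * dt / (1.0 + dt * abs(drift))  # tamed drift increment
         + x * dW                                  # diffusion term, sigma(x) = x
         + 0.5 * x * (dW**2 - dt))                 # Milstein term, sigma*sigma' = x
```

Without the denominator $1 + \Delta t\,|b(X_n)|$, the classical explicit Milstein scheme is known to diverge for such superlinear coefficients.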
A conjecture appears in \cite{milsteinscheme}, in the form of a remark, where it is stated that it is possible to construct, in a specified way, explicit numerical schemes of arbitrarily high order to approximate the solutions of SDEs with superlinear coefficients. We answer this conjecture affirmatively in the case of order-1.5 approximations and show that the suggested methodology works. Moreover, we explore the case where the derivatives of the diffusion coefficients are only H\"{o}lder continuous.
Consider an infinite system \[\partial_t u_t(x)=(\mathscr{L}u_t)(x)+\sigma\bigl(u_t(x)\bigr)\partial_t B_t(x)\] of interacting It\^{o} diffusions, started at a nonnegative deterministic bounded initial profile. We study local and global features of the solution under standard regularity assumptions on the nonlinearity $\sigma$. We show that, locally in time, the solution behaves as a collection of independent diffusions. We also prove that the $k$th moment Lyapunov exponent is frequently of sharp order $k^2$, in contrast to the continuous-space stochastic heat equation, whose $k$th moment Lyapunov exponent can be of sharp order $k^3$. When the underlying walk is transient and the noise level is sufficiently low, we also prove that the solution is a.s. uniformly dissipative, provided that the initial profile is in $\ell^1(\mathbf{Z}^d)$.
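Such a system can be simulated by truncating the lattice and applying Euler-Maruyama with one independent Brownian motion per site. The sketch below is an illustrative toy (one dimension, periodic boundary, discrete Laplacian for $\mathscr{L}$, and $\sigma(u)=\lambda u$ as an assumed low-noise Lipschitz nonlinearity), not a construction from the paper:

```python
import numpy as np

# Euler-Maruyama simulation of the lattice SPDE
#   du_t(x) = (L u_t)(x) dt + sigma(u_t(x)) dB_t(x)
# on {0,...,n-1} with periodic boundary; L is the discrete Laplacian
# and the B_t(x) are independent Brownian motions, one per site.
rng = np.random.default_rng(1)
n, T, steps = 64, 0.5, 2000
dt = T / steps
lam = 0.1                              # noise level (assumed small)
sigma = lambda u: lam * u              # Lipschitz nonlinearity, sigma(0) = 0
u = np.ones(n)                         # bounded nonnegative initial profile
for _ in range(steps):
    lap = np.roll(u, 1) + np.roll(u, -1) - 2.0 * u   # (L u)(x)
    u = u + lap * dt + sigma(u) * rng.normal(0.0, np.sqrt(dt), n)
```

Over short times, each coordinate indeed evolves approximately as an independent diffusion, since the coupling enters only through the Laplacian term of order $dt$.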
Linear systems with large differences between coefficients (discontinuous coefficients) arise in many cases in which partial differential equations (PDEs) model physical phenomena involving heterogeneous media. The standard approach to solving such problems is to use domain decomposition techniques, with domain boundaries conforming to the boundaries between the different media. This approach can be difficult to implement when the geometry of the domain boundaries is complicated or the grid is unstructured. This work examines the simple preconditioning technique of scaling the equations by dividing each equation by the $L^p$-norm of its coefficients, called geometric scaling (GS). It has long been known that diagonal scaling can be useful in improving convergence, but the general usefulness of this approach for discontinuous coefficients has not been studied. GS was tested on several nonsymmetric linear systems with discontinuous coefficients derived from convection-diffusion elliptic PDEs with small to moderate convection terms. It is shown that GS improves the convergence of restarted GMRES and Bi-CGSTAB, with and without the ILUT preconditioner. GS is also shown to improve the distribution of the eigenvalues by significantly reducing their concentration around the origin.
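Row scaling leaves the solution of the system unchanged while equilibrating rows whose magnitudes differ by orders of magnitude. A minimal sketch with $p = 2$, on a hypothetical toy system rather than one of the PDE-derived systems of the paper:

```python
import numpy as np

def geometric_scaling(A, b, p=2):
    """Divide each equation of A x = b by the Lp-norm of its coefficients."""
    norms = np.linalg.norm(A, ord=p, axis=1)   # row-wise Lp-norms
    d = 1.0 / norms
    return A * d[:, None], b * d

# Toy system whose first equation comes from a "medium" with
# coefficients six orders of magnitude larger than the others:
A = np.array([[1e6, 2e6, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3e6, 5.0, 3.0])

As, bs = geometric_scaling(A, b)
x = np.linalg.solve(As, bs)   # same solution as the unscaled system
```

Because each equation is multiplied by a nonzero constant, the solution set is preserved exactly; only the conditioning and the eigenvalue distribution of the matrix change.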