
Multiscale Elliptic PDEs Upscaling and Function Approximation via Subsampled Data

Added by Yifan Chen
Publication date: 2020
Language: English





There is an intimate connection between numerical upscaling of multiscale PDEs and scattered data approximation of heterogeneous functions: the coarse variables selected for deriving an upscaled equation (in the former) correspond to the sampled information used for approximation (in the latter). As such, both problems can be thought of as recovering a target function from coarse data that are either artificially chosen by an upscaling algorithm or determined by some physical measurement process. The purpose of this paper is to study how, under such a setup and for a specific elliptic problem, the lengthscale of the coarse data, which we refer to as the subsampled lengthscale, influences the accuracy of recovery, given a limited computational budget. Our analysis and experiments identify that reducing the subsampled lengthscale may improve the accuracy, implying a guiding criterion for coarse-graining or data acquisition in this computationally constrained scenario, and in particular yielding direct insights for the implementation of the Gamblets method in the numerical homogenization literature. Moreover, reducing the lengthscale to zero may lead to a blow-up of the approximation error if the target function does not have enough regularity, suggesting the need for a stronger prior assumption on the target function to be approximated. We introduce a singular weight function to address this issue, both theoretically and numerically. This work sheds light on the interplay between the lengthscale of coarse data, the computational cost, the regularity of the target function, and the accuracy of approximations and numerical simulations.
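As a toy illustration of this setup (not the paper's actual algorithm), the sketch below recovers a hypothetical multiscale function on [0, 1] from local averages taken over windows of width `ell` around coarse points spaced `H` apart; `ell` plays the role of the subsampled lengthscale, and the target function is an invented example.

```python
import numpy as np

def subsampled_data(f, centers, ell, n_quad=101):
    """Local averages of f over windows of width ell around each coarse point.
    ell is the (toy) subsampled lengthscale of the coarse data."""
    xs = centers[:, None] + np.linspace(-ell / 2, ell / 2, n_quad)[None, :]
    return f(xs).mean(axis=1)

def recover(f, H, ell, grid):
    """Piecewise-linear recovery of f from subsampled averages at spacing H."""
    centers = np.linspace(0.0, 1.0, round(1.0 / H) + 1)
    return np.interp(grid, centers, subsampled_data(f, centers, ell))

# hypothetical multiscale target: a smooth part plus a fine oscillation
f = lambda x: np.sin(2 * np.pi * x) + 0.05 * np.sin(34 * np.pi * x)
grid = np.linspace(0.0, 1.0, 2001)
H = 0.1
errors = {ell: np.max(np.abs(recover(f, H, ell, grid) - f(grid)))
          for ell in (H, H / 4, H / 16)}
```

Here the error is dominated by the coarse spacing `H`, but the recorded data change with `ell`, which is exactly the knob whose effect the paper analyzes under a fixed budget.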




Read More

130 - Yifan Chen, Thomas Y. Hou 2019
Function approximation and recovery from sampled data have long been studied in a wide array of applied mathematics and statistics fields. Analytic tools, such as the Poincaré inequality, have been handy for estimating the approximation errors in different scales. The purpose of this paper is to study a generalized Poincaré inequality, where the measurement function is of subsampled type, with a small but non-zero lengthscale that will be made precise. Our analysis identifies this inequality as a basic tool for function recovery problems. We discuss and demonstrate the optimality of the inequality concerning the subsampled lengthscale, connecting it to existing results in the literature. In application to function approximation problems, the approximation accuracy using different basis functions and under different regularity assumptions is established by using the subsampled Poincaré inequality. We observe that the error bound blows up as the subsampled lengthscale approaches zero, due to the fact that the underlying function is not regular enough to have well-defined pointwise values. A weighted version of the Poincaré inequality is proposed to address this problem; its optimality is also discussed.
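A subsampled Poincaré-type quantity can be probed empirically with a toy 1D experiment (invented here for illustration, not the paper's analysis): for functions u on [0, 1], measure the ratio of ||u minus its average over [0, ell]|| to ||u'|| and watch the worst observed value as the subsampled lengthscale ell shrinks. Note that in 1D, H^1 functions have well-defined point values, so the ratio stays bounded as ell goes to 0; the blow-up discussed above concerns functions without enough regularity for pointwise evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def ratio(u, ell):
    """||u - (mean of u over [0, ell])||_L2(0,1) divided by ||u'||_L2(0,1)."""
    ubar = u[x <= ell].mean()            # subsampled measurement of u
    num = np.sqrt(np.sum((u - ubar) ** 2) * dx)
    den = np.sqrt(np.sum(np.gradient(u, dx) ** 2) * dx)
    return num / den

# worst ratio over random trigonometric probe functions, per lengthscale
worst = {}
for ell in (1.0, 0.25, 0.0625):
    best = 0.0
    for _ in range(200):
        c = rng.standard_normal(8)
        u = sum(ck * np.cos((k + 1) * np.pi * x) for k, ck in enumerate(c))
        best = max(best, ratio(u, ell))
    worst[ell] = best
```

The observed constants remain below the classical 1D Poincaré-type bounds, consistent with the inequality holding with an ell-dependent constant.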
In this paper, we introduce a multiscale framework based on adaptive edge basis functions to solve second-order linear elliptic PDEs with rough coefficients. One of the main results is that we prove the proposed multiscale method achieves nearly exponential convergence in the approximation error with respect to the computational degrees of freedom. Our strategy is to perform an energy orthogonal decomposition of the solution space into a coarse scale component comprising $a$-harmonic functions in each element of the mesh, and a fine scale component named the bubble part that can be computed locally and efficiently. The coarse scale component depends entirely on function values on edges. Our approximation on each edge is made in the Lions-Magenes space $H_{00}^{1/2}(e)$, which we will demonstrate to be a natural and powerful choice. We construct edge basis functions using local oversampling and singular value decomposition. When local information of the right-hand side is adaptively incorporated into the edge basis functions, we prove a nearly exponential convergence rate of the approximation error. Numerical experiments validate and extend our theoretical analysis; in particular, we observe no obvious degradation in accuracy for high-contrast media problems.
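A minimal sketch of one ingredient, under strong simplifying assumptions (Laplace's equation instead of a rough coefficient, and a flat edge): traces of the harmonic functions Re((x+iy)^n) on the segment y = 0 are the monomials x^n, and normalizing on an oversampled edge before restricting to a shorter target edge yields a matrix whose singular values decay geometrically, which is the mechanism that lets a few SVD-derived edge basis functions capture the coarse component.

```python
import numpy as np

n_funcs = 20
x_over = np.linspace(-0.5, 0.5, 400)    # oversampled edge
x_edge = np.linspace(-0.25, 0.25, 200)  # target edge e

# columns: harmonic traces x^n, normalized on the oversampled edge,
# then restricted to the target edge
cols = []
for n in range(n_funcs):
    scale = np.sqrt(np.mean((x_over ** n) ** 2))
    cols.append(x_edge ** n / scale)
A = np.column_stack(cols)

s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
decay = s / s[0]
```

The ratio of edge lengths (here 1/2) governs the geometric decay rate, mirroring how oversampling drives the nearly exponential convergence claimed in the abstract.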
This paper studies numerical methods for the approximation of elliptic PDEs with lognormal coefficients of the form $-\mathrm{div}(a\nabla u)=f$, where $a=\exp(b)$ and $b$ is a Gaussian random field. The approximant of the solution $u$ is an $n$-term polynomial expansion in the scalar Gaussian random variables that parametrize $b$. We present a general convergence analysis of weighted least-squares approximants for smooth and arbitrarily rough random fields, using a suitable random design, for which we prove optimality in the following sense: their convergence rate matches exactly or closely the rate that has been established in \cite{BCDM} for best $n$-term approximation by Hermite polynomials, under the same minimal assumptions on the Gaussian random field. This is in contrast with the current state-of-the-art results for the stochastic Galerkin method, which suffers from a lack of coercivity due to the lognormal nature of the diffusion field. Numerical tests with $b$ as the Brownian bridge confirm our theoretical findings.
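The random-design least-squares idea can be sketched on a toy scalar problem (a hypothetical target exp(y/2) standing in for a solution functional of a single Gaussian parameter; this is not the paper's PDE setup, and for simplicity the fit below is unweighted): draw Gaussian samples, fit an n-term probabilists' Hermite expansion by least squares, and validate on fresh samples.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)
target = lambda y: np.exp(0.5 * y)       # hypothetical smooth functional
deg, m = 8, 400                          # Hermite degree, number of samples

y = rng.standard_normal(m)               # random design from the Gaussian measure
norms = np.sqrt([math.factorial(k) for k in range(deg + 1)])
V = hermevander(y, deg) / norms          # He_k / sqrt(k!): orthonormal under N(0,1)
coef, *_ = np.linalg.lstsq(V, target(y), rcond=None)

y_test = rng.standard_normal(2000)       # fresh samples for validation
V_test = hermevander(y_test, deg) / norms
rms_err = np.sqrt(np.mean((V_test @ coef - target(y_test)) ** 2))
```

For this analytic target the Hermite coefficients decay factorially, so a degree-8 fit from a modest random sample already reaches small validation error, illustrating why matching the best $n$-term rate is the right benchmark.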
119 - Min Liu, Zhiqiang Cai 2021
In this paper, we study the adaptive neuron enhancement (ANE) method for solving self-adjoint second-order elliptic partial differential equations (PDEs). The ANE method is a self-adaptive method that generates a two-layer spline neural network (NN) and a numerical integration mesh such that the approximation accuracy is within the prescribed tolerance. Moreover, the ANE method provides a natural process for obtaining a good initialization, which is crucial for training the underlying nonlinear optimization problem. The underlying PDE is discretized by the Ritz method using a two-layer spline neural network based on either the primal or dual formulation, minimizing the respective energy or complementary functional. Essential boundary conditions are imposed weakly through the functionals with proper norms. It is proved that the Ritz approximation is the best approximation in the energy norm; moreover, the effect of numerical integration on the Ritz approximation is analyzed as well. Two estimators for the adaptive neuron enhancement method are introduced: one is the so-called recovery estimator, and the other is the least-squares estimator. Finally, numerical results for diffusion problems with either corner or intersecting-interface singularities are presented.
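A reduced sketch of the Ritz step alone, under simplifying assumptions (breakpoints held fixed, so the two-layer ReLU spline is linear in its outer coefficients; the adaptive neuron enhancement itself is not reproduced): minimize the energy functional for -u'' = f on (0, 1) over the span of ReLU neurons, with the boundary condition at x = 1 imposed weakly by a penalty, as a stand-in for the weak enforcement described above.

```python
import numpy as np

n = 32
b = np.arange(n) / n                      # fixed ReLU breakpoints in [0, 1)
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)
exact = lambda x: np.sin(np.pi * x)       # exact solution for this f

# Stiffness: integral of relu'(x-b_i) * relu'(x-b_j) over (0,1) = 1 - max(b_i, b_j)
A = 1.0 - np.maximum.outer(b, b)

# Load: integral of f(x) * relu(x-b_i) by quadrature on a fine grid
xq = np.linspace(0.0, 1.0, 5001)
relu = np.maximum(xq[None, :] - b[:, None], 0.0)
F = (relu * f(xq)[None, :]).mean(axis=1)

# Weak boundary condition u(1) = 0 via a quadratic penalty beta * u(1)^2
v = 1.0 - b                               # u(1) = v . c for coefficients c
beta = 1e6
c = np.linalg.solve(A + beta * np.outer(v, v), F)

u = relu.T @ c                            # network values on the fine grid
max_err = np.max(np.abs(u - exact(xq)))
```

With the breakpoints fixed, the energy minimization reduces to a linear solve; the ANE method's contribution is precisely to adapt the neurons and the integration mesh on top of this Ritz core.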
In this paper, we consider several possible ways to set up Heterogeneous Multiscale Methods for the Landau-Lifshitz equation with a highly oscillatory diffusion coefficient, which can be seen as a means of modeling rapidly varying ferromagnetic materials. We then prove estimates for the errors introduced when approximating the relevant quantity in each of the models, given a periodic problem, using averaging in time and space of the solution to a corresponding micro problem. In our setup, the Landau-Lifshitz equation with a highly oscillatory coefficient is chosen as the micro problem for all models. We then show that the averaging errors only depend on $\varepsilon$, the size of the microscopic oscillations, as well as the size of the averaging domain in time and space and the choice of averaging kernels.
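The role of the averaging kernel can be sketched in isolation (a toy check, not the Landau-Lifshitz micro problem): average the oscillation cos(t/eps), whose true mean is 0, over a window of size eta using a flat kernel versus a smooth compactly supported kernel. The smooth kernel damps the eps-scale oscillation far more effectively, which is the mechanism behind kernel-dependent averaging error estimates.

```python
import numpy as np

eps, eta = 1e-2, 0.5                     # oscillation scale, averaging window
t = np.linspace(-eta, eta, 20001)
g = np.cos(t / eps)                      # oscillatory quantity with mean 0

def kernel_avg_error(weights):
    """|weighted average of g| = averaging error, since the true mean is 0."""
    w = weights / weights.sum()
    return float(np.abs((w * g).sum()))

err_flat = kernel_avg_error(np.ones_like(t))              # flat (box) kernel
err_smooth = kernel_avg_error((1.0 - (t / eta) ** 2) ** 5)  # smooth kernel
```

The flat kernel leaves a residual of order eps/eta, while the smooth kernel (whose derivatives vanish at the window ends) suppresses the oscillation by several more powers of eps/eta.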
