
Parametric PDEs: Sparse or Low-Rank Approximations?

Posted by: Markus Bachmayr
Publication date: 2016
Paper language: English





We consider adaptive approximations of the parameter-to-solution map for elliptic operator equations depending on a large or infinite number of parameters, comparing approximation strategies of different degrees of nonlinearity: sparse polynomial expansions, general low-rank approximations separating spatial and parametric variables, and hierarchical tensor decompositions separating all variables. We describe corresponding adaptive algorithms based on a common generic template and show their near-optimality with respect to natural approximability assumptions for each type of approximation. A central ingredient in the resulting bounds for the total computational complexity is new operator compression results for the case of infinitely many parameters. We conclude with a comparison of the complexity estimates based on the actual approximability properties of classes of parametric model problems, which shows that the computational costs of optimized low-rank expansions can be significantly lower or higher than those of sparse polynomial expansions, depending on the particular type of parametric problem.
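The trade-off between sparse and low-rank approximability can be glimpsed in a toy computation: for an affinely parametrized diffusion problem, the singular values of a matrix of solution snapshots bound the best achievable error of any low-rank separation of spatial and parametric variables. The following Python sketch (all problem data hypothetical, not taken from the paper) builds such a snapshot matrix for a 1D model problem; fast singular value decay signals that low-rank expansions are competitive.

```python
import numpy as np

# Toy parametric diffusion problem on [0,1]: -(a(x,y) u')' = 1, u(0)=u(1)=0,
# with affine coefficient a(x,y) = 1 + sum_j y_j * amp_j * sin((j+1) pi x).
# The amplitudes keep a(x,y) uniformly positive for y in [-1,1]^8.
def solve(y, n=200, terms=8):
    x = np.linspace(0.0, 1.0, n + 1)
    xm = 0.5 * (x[:-1] + x[1:])                      # element midpoints
    a = 1.0 + sum(y[j] * 0.5 / (j + 1) ** 2 * np.sin((j + 1) * np.pi * xm)
                  for j in range(terms))
    h = 1.0 / n
    main = (a[:-1] + a[1:]) / h                      # tridiagonal P1 stiffness matrix
    off = -a[1:-1] / h
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, h * np.ones(n - 1))    # load f == 1

rng = np.random.default_rng(0)
snapshots = np.column_stack([solve(rng.uniform(-1, 1, 8)) for _ in range(300)])
s = np.linalg.svd(snapshots, compute_uv=False)
print("normalized singular values:", s[:10] / s[0])
```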


Read also

Relying on the classical connection between Backward Stochastic Differential Equations (BSDEs) and non-linear parabolic partial differential equations (PDEs), we propose a new probabilistic learning scheme for solving high-dimensional semi-linear parabolic PDEs. This scheme is inspired by the machine-learning approach developed using deep neural networks in Han et al. [32]. Our algorithm is based on a Picard iteration scheme in which a sequence of linear-quadratic optimisation problems is solved by means of a stochastic gradient descent (SGD) algorithm. In the framework of a linear specification of the approximation space, we manage to prove a convergence result for our scheme under a smallness condition. In practice, in order to be able to treat high-dimensional examples, we employ sparse grid approximation spaces. In the case of periodic coefficients and using pre-wavelet basis functions, we obtain an upper bound on the global complexity of our method. It shows in particular that the curse of dimensionality is tamed, in the sense that in order to achieve a root mean squared error of order $\epsilon$, for a prescribed precision $\epsilon$, the complexity of the Picard algorithm grows polynomially in $\epsilon^{-1}$ up to some logarithmic factor $|\log(\epsilon)|$ which grows linearly with respect to the PDE dimension. Various numerical results are presented to validate the performance of our method and to compare it with some recent machine learning schemes proposed in Han et al. [20] and Hure et al. [37].
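As a rough illustration of the Picard-plus-regression idea (deliberately not the authors' scheme: the driver here depends only on $u$, the basis is a global polynomial rather than a sparse grid, and the linear subproblems are solved by closed-form least squares instead of SGD), consider this hypothetical Python sketch for a 1D semilinear heat equation:

```python
import numpy as np

# Toy Picard/regression scheme for u_t + 0.5 u_xx + f(u) = 0, u(T,x) = g(x),
# via the Feynman-Kac fixed point
#   u(t,x) = E[ g(X_T) + int_t^T f(u(s, X_s)) ds | X_t = x ],  X Brownian.
# Each Picard sweep freezes the nonlinearity at the previous iterate and
# solves one linear regression problem per time step.
T, N, M, x0 = 1.0, 20, 20000, 0.0
dt = T / N
f = lambda u: np.cos(u)            # hypothetical nonlinearity, chosen for the demo
g = lambda x: np.sin(x)            # hypothetical terminal condition

def basis(x):                      # polynomial regression basis, degree 4
    return np.vander(x, 5, increasing=True)

rng = np.random.default_rng(1)
dW = rng.normal(scale=np.sqrt(dt), size=(N, M))
X = np.vstack([np.full(M, x0), x0 + np.cumsum(dW, axis=0)])   # (N+1, M) paths

beta = np.zeros((N, 5))            # u(t_n, x) ~ basis(x) @ beta[n]
for _ in range(8):                 # Picard sweeps
    u_prev = np.array([basis(X[n]) @ beta[n] for n in range(N)])
    run = np.zeros(M)              # running integral of f(u_prev) from t_n to T
    new_beta = np.zeros_like(beta)
    for n in reversed(range(N)):
        run += f(u_prev[n]) * dt
        new_beta[n] = np.linalg.lstsq(basis(X[n]), g(X[N]) + run, rcond=None)[0]
    beta = new_beta

print("u(0, x0) ~", float(basis(np.array([x0]))[0] @ beta[0]))
```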
The existence and uniqueness of weak solutions to dynamical low-rank evolution problems for parabolic partial differential equations in two spatial dimensions are shown, also covering non-diagonal diffusion in the elliptic part. The proof is based on a variational time-stepping scheme on the low-rank manifold. Moreover, this scheme is shown to be closely related to practical methods for computing such low-rank evolutions.
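A minimal Python sketch of the simplest practical relative of such schemes ("step and truncate"): take an explicit Euler step of a matrix heat equation with separated x/y Laplacians, then retract back to rank $r$ by truncated SVD. The variational time-stepping analysed in the paper works intrinsically on the rank-$r$ manifold; this projected variant only conveys the flavor, with all problem data hypothetical.

```python
import numpy as np

def laplacian(n):
    # 1D Dirichlet finite-difference Laplacian on n interior points
    h = 1.0 / (n + 1)
    return (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1)) / h ** 2

def truncate(A, r):
    # retraction to the rank-r manifold via truncated SVD
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

n, r, dt = 64, 5, 1e-5             # dt respects the explicit-Euler stability limit
L = laplacian(n)
x = np.linspace(0.0, 1.0, n + 2)[1:-1]
X, Y = np.meshgrid(x, x, indexing="ij")
U = np.exp(-30.0 * (X - Y) ** 2)   # full-rank datum with fast singular value decay
for _ in range(200):               # dU/dt = L U + U L^T, stepped and truncated
    U = truncate(U + dt * (L @ U + U @ L.T), r)
sv = np.linalg.svd(U, compute_uv=False)
print("leading singular values:", np.round(sv[: r + 2], 5))
```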
Jared Tanner, Simon Vary (2020)
Expressing a matrix as the sum of a low-rank matrix plus a sparse matrix is a flexible model capturing global and local features in data. This model is the foundation of robust principal component analysis (Candes et al., 2011; Chandrasekaran et al., 2009), and was popularized by dynamic-foreground/static-background separation (Bouwmans et al., 2016) amongst other applications. Compressed sensing, matrix completion, and their variants (Eldar and Kutyniok, 2012; Foucart and Rauhut, 2013) have established that data satisfying low-complexity models can be efficiently measured and recovered from a number of measurements proportional to the model complexity rather than the ambient dimension. This manuscript develops similar guarantees showing that $m \times n$ matrices that can be expressed as the sum of a rank-$r$ matrix and an $s$-sparse matrix can be recovered by computationally tractable methods from $\mathcal{O}\big((r(m+n-r)+s)\log(mn/s)\big)$ linear measurements. More specifically, we establish that the restricted isometry constants (RICs) for the aforementioned matrices remain bounded independent of problem size provided $p/mn$, $s/p$, and $r(m+n-r)/p$ remain fixed. Additionally, we show that semidefinite programming and two hard thresholding gradient descent algorithms, NIHT and NAHT, converge to the measured matrix provided the measurement operator's RICs are sufficiently small. Numerical experiments illustrating these results are shown for synthetic problems, dynamic-foreground/static-background separation, and multispectral imaging.
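The recovery problem can be illustrated with a hypothetical Python sketch of alternating hard thresholding: gradient updates of the low-rank part are projected onto rank-$r$ matrices, and updates of the sparse part onto $s$-sparse matrices. This mirrors the spirit of NIHT/NAHT without their normalized step sizes or convergence guarantees; the toy dimensions are chosen only for the demo.

```python
import numpy as np

rng = np.random.default_rng(2)
m, n, r, s, p = 20, 20, 2, 30, 350      # matrix size, rank, sparsity, measurements

L_true = rng.normal(size=(m, r)) @ rng.normal(size=(r, n))
S_true = np.zeros(m * n)
S_true[rng.choice(m * n, s, replace=False)] = rng.normal(scale=3.0, size=s)
S_true = S_true.reshape(m, n)
A = rng.normal(size=(p, m * n)) / np.sqrt(p)     # Gaussian measurement operator
b = A @ (L_true + S_true).ravel()                # measurements of X = L + S

def rank_proj(Z, r):                             # nearest rank-r matrix
    U, sv, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U[:, :r] * sv[:r]) @ Vt[:r]

def sparse_proj(Z, s):                           # keep the s largest entries
    out = np.zeros(Z.size)
    keep = np.argsort(np.abs(Z.ravel()))[-s:]
    out[keep] = Z.ravel()[keep]
    return out.reshape(Z.shape)

L = np.zeros((m, n)); S = np.zeros((m, n))
for _ in range(500):
    G = (A.T @ (A @ (L + S).ravel() - b)).reshape(m, n)   # gradient of the misfit
    L = rank_proj(L - G, r)
    S = sparse_proj(S - G, s)
err = np.linalg.norm(L + S - L_true - S_true) / np.linalg.norm(L_true + S_true)
print("relative recovery error:", err)
```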
We use the ideas of goal-oriented error estimation and adaptivity to design and implement an efficient adaptive algorithm for approximating linear quantities of interest derived from solutions to elliptic partial differential equations (PDEs) with parametric or uncertain inputs. In the algorithm, the stochastic Galerkin finite element method (sGFEM) is used to approximate the solutions to primal and dual problems that depend on a countably infinite number of uncertain parameters. Adaptive refinement is guided by an innovative strategy that combines the error reduction indicators computed for spatial and parametric components of the primal and dual solutions. The key theoretical ingredient is a novel two-level a posteriori estimate of the energy error in sGFEM approximations. We prove that this error estimate is reliable and efficient. The effectiveness of the goal-oriented error estimation strategy and the performance of the goal-oriented adaptive algorithm are tested numerically for three representative model problems with parametric coefficients and for three quantities of interest (including the approximation of pointwise values).
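The structure of such a goal-oriented loop can be conveyed by a deliberately simplified 1D stand-in (plain FEM, no parametric dimension, all data hypothetical): elements are marked by the product of a primal residual indicator and a dual weight, echoing the combined primal/dual strategy. Here the dual solution $z$ of the point-value quantity of interest is known in closed form (the Green's function); in practice it would be approximated like the primal solution.

```python
import numpy as np

f = lambda x: 100.0 * np.exp(-300.0 * (x - 0.3) ** 2)         # hypothetical load
x0 = 0.7                                                      # QoI: Q(u) = u(x0)
z = lambda x: np.where(x <= x0, x * (1 - x0), x0 * (1 - x))   # exact dual solution

def solve_p1(nodes):
    """P1 FEM for -u'' = f, u(0) = u(1) = 0, with midpoint quadrature."""
    h = np.diff(nodes)
    mid = 0.5 * (nodes[:-1] + nodes[1:])
    n = len(nodes) - 2
    A = np.zeros((n, n)); rhs = np.zeros(n)
    for e in range(len(h)):                     # assemble element by element
        for a in range(2):
            i = e - 1 + a                       # interior index of local node a
            if not 0 <= i < n:
                continue
            rhs[i] += f(mid[e]) * h[e] / 2
            for b in range(2):
                j = e - 1 + b
                if 0 <= j < n:
                    A[i, j] += (1.0 if a == b else -1.0) / h[e]
    u = np.zeros(len(nodes)); u[1:-1] = np.linalg.solve(A, rhs)
    return u

nodes = np.linspace(0.0, 1.0, 9)
for _ in range(12):                             # SOLVE -> ESTIMATE -> MARK -> REFINE
    u = solve_p1(nodes)
    h = np.diff(nodes); mid = 0.5 * (nodes[:-1] + nodes[1:])
    eta = h ** 2 * np.abs(f(mid)) * np.abs(z(mid))   # crude residual x dual weight
    marked = eta >= 0.5 * eta.max()                  # maximum marking strategy
    nodes = np.sort(np.concatenate([nodes, mid[marked]]))  # bisect marked cells
u = solve_p1(nodes)
print(f"{len(nodes)} nodes, Q(u_h) ~ {np.interp(x0, nodes, u):.6f}")
```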
The challenge of mastering computational tasks of enormous size tends to override questions about the quality of the numerical outcome in terms of accuracy. By this we do not mean the accuracy within the discrete setting, which itself may be far from evident for ill-conditioned problems or when iterative solvers are involved. By accuracy-controlled computation we mean the deviation of the numerical approximation from the exact solution of an underlying continuous problem in a relevant metric, which has been the initiating interest in the first place. Can the accuracy of a numerical result be rigorously certified? This question is particularly important in the context of uncertainty quantification, when many possible sources of uncertainty interact, and it guides this article, which reviews recent developments in low-rank approximation methods for problems in high spatial dimensions. In particular, we highlight the role of adaptivity when dealing with such strongly nonlinear methods, which integrate in a natural way issues of discrete and continuous accuracy.
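The elementary building block of such accuracy-controlled low-rank methods is truncation to a prescribed tolerance with a certified error bound. A minimal Python sketch, using the tail sum of the discarded singular values as the certificate (the test matrix is an arbitrary choice for the demo):

```python
import numpy as np

def truncate_to_tolerance(A, eps):
    """Smallest-rank truncated SVD certified to satisfy ||A - A_r||_F <= eps."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # errs[r] = sqrt(sum of squared discarded singular values) = ||A - A_r||_F
    errs = np.append(np.sqrt(np.cumsum(s[::-1] ** 2))[::-1], 0.0)
    r = int(np.argmax(errs <= eps))             # first rank meeting the tolerance
    return (U[:, :r] * s[:r]) @ Vt[:r], r

i, j = np.indices((100, 100))
A = 1.0 / (i + j + 1.0)                         # Hilbert matrix: rapid spectral decay
for eps in (1e-2, 1e-6, 1e-10):
    Ar, r = truncate_to_tolerance(A, eps)
    print(f"eps = {eps:.0e}: rank {r}, actual error {np.linalg.norm(A - Ar):.2e}")
```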
