
Adaptive Low-Rank Methods: Problems on Sobolev Spaces

Posted by Markus Bachmayr
Publication date: 2014
Paper language: English





This paper is concerned with the development and analysis of an iterative solver for high-dimensional second-order elliptic problems based on subspace-based low-rank tensor formats. Both the subspaces giving rise to low-rank approximations and corresponding sparse approximations of lower-dimensional tensor components are determined adaptively. A principal obstruction to a simultaneous control of rank growth and accuracy turns out to be the fact that the underlying elliptic operator is an isomorphism only between spaces that are not endowed with cross norms. Therefore, as central part of this scheme, we devise a method for preconditioning low-rank tensor representations of operators. Under standard assumptions on the data, we establish convergence to the solution of the continuous problem with a guaranteed error reduction. Moreover, for the case that the solution exhibits a certain low-rank structure and representation sparsity, we derive bounds on the computational complexity, including in particular bounds on the tensor ranks that can arise during the iteration. We emphasize that such assumptions on the solution do not enter in the formulation of the scheme, which in fact is shown to detect them automatically. Our findings are illustrated by numerical experiments that demonstrate the practical efficiency of the method in high spatial dimensions.
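As a concrete illustration of the kind of iteration the abstract describes, the following is a minimal sketch in the two-dimensional (matrix) case, under assumptions that are ours rather than the paper's: a damped Richardson iteration for the Lyapunov-type system A X + X A = B arising from a separable discrete Laplacian, with every iterate recompressed by a truncated SVD. The paper's actual scheme works adaptively with sparse lower-dimensional components and, crucially, preconditions the low-rank operator representation; the toy iteration below is unpreconditioned, which is why its iteration count grows with the condition number.

```python
import numpy as np

def truncate(X, eps):
    """Recompress X: drop singular values below eps relative to the largest one."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    r = max(1, int(np.sum(s > eps * s[0])))
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

n = 31                                         # illustrative 1D grid size
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2   # 1D Dirichlet Laplacian

x = np.linspace(h, 1.0 - h, n)
f = x * (1.0 - x)
B = np.outer(f, f)                             # rank-one right-hand side b(x, y) = f(x) f(y)

lam_min = 2.0 * np.pi**2                       # approx. extreme eigenvalues of the 2D operator
lam_max = 8.0 / h**2
omega = 2.0 / (lam_min + lam_max)              # optimal Richardson damping

X = np.zeros((n, n))
for k in range(50000):
    R = B - (A @ X + X @ A)                    # residual of A X + X A = B, full format at toy size
    if np.linalg.norm(R) <= 1e-6 * np.linalg.norm(B):
        break
    X = truncate(X + omega * R, eps=1e-8)      # rank control after every update

s = np.linalg.svd(X, compute_uv=False)
print("iterations:", k, "rank of iterate:", int(np.sum(s > 1e-8 * s[0])))
```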


Read also

This paper introduces new solvers for the computation of low-rank approximate solutions to large-scale linear problems, with a particular focus on the regularization of linear inverse problems. Although Krylov methods incorporating explicit projections onto low-rank subspaces are already used for well-posed systems that arise from discretizing stochastic or time-dependent PDEs, we are mainly concerned with algorithms that solve the so-called nuclear norm regularized problem, where a suitable nuclear norm penalization on the solution is imposed alongside a fit-to-data term expressed in the 2-norm: this has the effect of implicitly enforcing low-rank solutions. By adopting an iteratively reweighted norm approach, the nuclear norm regularized problem is reformulated as a sequence of quadratic problems, which can then be efficiently solved using Krylov methods, giving rise to an inner-outer iteration scheme. Our approach differs from the other solvers available in the literature in that: (a) Kronecker product properties are exploited to define the reweighted 2-norm penalization terms; (b) efficient preconditioned Krylov methods replace gradient (projection) methods; (c) the regularization parameter can be efficiently and adaptively set along the iterations. Furthermore, we reformulate within the framework of flexible Krylov methods both the new inner-outer methods for nuclear norm regularization and some of the existing Krylov methods incorporating low-rank projections. This results in an even more computationally efficient (but heuristic) strategy that does not rely on an inner-outer iteration scheme. Numerical experiments show that our new solvers are competitive with other state-of-the-art solvers for low-rank problems, and deliver reconstructions of increased quality with respect to other classical Krylov methods.
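A hedged sketch of the inner-outer idea on a small matrix-completion instance (an illustration, not the authors' solver: it omits the Kronecker-structured reweighting, the preconditioning, the adaptive choice of the regularization parameter, and the flexible Krylov variants). Each outer step builds the weight W = (X Xᵀ + ε² I)^(-1/2), so that tr(Xᵀ W X) acts as a smooth surrogate for the nuclear norm, and the resulting quadratic inner problem is solved with a Krylov method (CG). All parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(0)
m, n, r_true = 60, 40, 3
M = rng.standard_normal((m, r_true)) @ rng.standard_normal((r_true, n))
Omega = rng.random((m, n)) < 0.5               # mask of observed entries
lam, eps = 0.1, 1e-2                           # illustrative regularization / smoothing parameters

X = np.zeros((m, n))
for outer in range(15):
    # Reweighting: with W = (X X^T + eps^2 I)^(-1/2), tr(X^T W X) surrogates ||X||_*.
    w, Q = np.linalg.eigh(X @ X.T + eps**2 * np.eye(m))
    W = (Q / np.sqrt(w)) @ Q.T

    # Inner quadratic problem: minimize 0.5*||P_Omega(X - M)||_F^2 + 0.5*lam*tr(X^T W X),
    # i.e. solve the SPD system P_Omega(X) + lam * W X = P_Omega(M) with CG.
    def matvec(v, W=W):
        Xv = v.reshape(m, n)
        return (Omega * Xv + lam * (W @ Xv)).ravel()

    Aop = LinearOperator((m * n, m * n), matvec=matvec)
    v, _ = cg(Aop, (Omega * M).ravel(), x0=X.ravel(), maxiter=300)
    X = v.reshape(m, n)

s = np.linalg.svd(X, compute_uv=False)
print("numerical rank:", int(np.sum(s > 1e-3 * s[0])),
      " relative error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```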
The existence and uniqueness of weak solutions to dynamical low-rank evolution problems for parabolic partial differential equations in two spatial dimensions is shown, covering also non-diagonal diffusion in the elliptic part. The proof is based on a variational time-stepping scheme on the low-rank manifold. Moreover, this scheme is shown to be closely related to practical methods for computing such low-rank evolutions.
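One practical low-rank time-stepper of the kind alluded to is the first-order projector-splitting (KSL) integrator; the sketch below applies it to a matrix heat-type equation dA/dt = L A + A L and compares the result with a full reference solution. This is an illustrative stand-in, not the paper's variational scheme: the unscaled difference stencil and the explicit Euler substeps are assumptions made only to keep the example short and stable.

```python
import numpy as np

n, r, tau, steps = 40, 8, 0.01, 100            # illustrative sizes and step size
Lap = np.eye(n, k=1) - 2.0 * np.eye(n) + np.eye(n, k=-1)   # unscaled stencil keeps Euler stable
x = np.linspace(0.0, 1.0, n)
A0 = np.exp(-50.0 * (x[:, None] - x[None, :]) ** 2)        # smooth, numerically low-rank start

def F(Y):                                      # right-hand side of dA/dt = Lap A + A Lap
    return Lap @ Y + Y @ Lap

U, s, Vt = np.linalg.svd(A0)                   # rank-r initial factors A0 ~ U S V^T
U, S, V = U[:, :r], np.diag(s[:r]), Vt[:r, :].T

for _ in range(steps):
    # K-step: evolve K = U S with V frozen, then re-orthogonalize
    K = U @ S + tau * F(U @ S @ V.T) @ V
    U, S = np.linalg.qr(K)
    # S-step: evolve the core backwards in time (the characteristic KSL minus sign)
    S = S - tau * U.T @ F(U @ S @ V.T) @ V
    # L-step: evolve L = V S^T with U frozen, then re-orthogonalize
    Lmat = V @ S.T + tau * F(U @ S @ V.T).T @ U
    V, St = np.linalg.qr(Lmat)
    S = St.T

w, Q = np.linalg.eigh(Lap)                     # reference: A(T) = exp(T Lap) A0 exp(T Lap)
E = (Q * np.exp(steps * tau * w)) @ Q.T
A_ref = E @ A0 @ E
print("rank-%d dynamical low-rank relative error: %.2e"
      % (r, np.linalg.norm(U @ S @ V.T - A_ref) / np.linalg.norm(A_ref)))
```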
We present an analysis of the additive average Schwarz preconditioner with two newly proposed adaptively enriched coarse spaces, which were presented at the 23rd International Conference on Domain Decomposition Methods in Korea, for solving second order elliptic problems with highly varying and discontinuous coefficients. It is shown that the condition number of the preconditioned system is bounded independently of the variations and the jumps in the coefficient, and depends linearly on the mesh parameter ratio H/h, that is, the ratio between the subdomain size and the mesh size, thereby retaining the same optimality and scalability as the original additive average Schwarz preconditioner.
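To make the H/h dependence tangible, here is a hedged one-dimensional illustration (our simplification, not the paper's setting, and without the adaptively enriched coarse spaces): an additive average Schwarz preconditioner for -(α u')' = f with a coefficient that jumps across subdomain interfaces, reporting the condition number of the preconditioned operator as H/h grows. All sizes and coefficient values are illustrative assumptions.

```python
import numpy as np

def stiffness(alpha, h):
    """P1 stiffness matrix (interior nodes only) for -(alpha u')' on (0,1), Dirichlet BCs."""
    nel = len(alpha)
    n = nel - 1
    A = np.zeros((n, n))
    for e in range(nel):
        for a in (e - 1, e):                   # interior-node indices touched by element e
            for b in (e - 1, e):
                if 0 <= a < n and 0 <= b < n:
                    A[a, b] += (alpha[e] / h) * (1.0 if a == b else -1.0)
    return A

def avg_schwarz_condition(n_sub, m, alpha_sub):
    """cond(M^{-1} A) for an additive average Schwarz preconditioner, m = H/h."""
    nel, h = n_sub * m, 1.0 / (n_sub * m)
    A = stiffness(np.repeat(alpha_sub, m), h)  # coefficient constant on each subdomain
    n = nel - 1

    Minv = np.zeros((n, n))
    for i in range(n_sub):                     # exact solves on non-overlapping subdomain interiors
        idx = np.arange(i * m, (i + 1) * m - 1)
        Minv[np.ix_(idx, idx)] += np.linalg.inv(A[np.ix_(idx, idx)])

    # Coarse space: one function per interface node; value 1 there, and the average (1/2)
    # of the two subdomain-boundary values at the interior nodes of the adjacent subdomains.
    Phi = np.zeros((n, n_sub - 1))
    for k in range(1, n_sub):
        Phi[k * m - 1, k - 1] = 1.0
        Phi[np.arange((k - 1) * m, k * m - 1), k - 1] += 0.5
        Phi[np.arange(k * m, (k + 1) * m - 1), k - 1] += 0.5
    Minv += Phi @ np.linalg.solve(Phi.T @ A @ Phi, Phi.T)

    ev = np.sort(np.linalg.eigvals(Minv @ A).real)
    return ev[-1] / ev[0]

rng = np.random.default_rng(1)
alpha_sub = 10.0 ** rng.integers(-3, 4, size=8)        # jumps over six orders of magnitude
for m in (4, 8, 16, 32):
    print("H/h = %3d   cond = %8.2f" % (m, avg_schwarz_condition(8, m, alpha_sub)))
```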
We present a novel approach to the simulation of miscible displacement by employing adaptive enriched Galerkin finite element methods (EG) coupled with entropy residual stabilization for transport. In particular, numerical simulations of viscous fingering instabilities in heterogeneous porous media and Hele-Shaw cells are illustrated. EG is formulated by enriching the conforming continuous Galerkin finite element method (CG) with piecewise constant functions. The method provides locally and globally conservative fluxes, which is crucial for coupled flow and transport problems. Moreover, EG has fewer degrees of freedom in comparison with discontinuous Galerkin (DG) and an efficient flow solver has been derived which allows for higher order schemes. Dynamic adaptive mesh refinement is applied in order to save computational cost for large-scale three dimensional applications. In addition, entropy residual based stabilization for high order EG transport systems prevents any spurious oscillations. Numerical tests are presented to show the capabilities of EG applied to flow and transport.
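The remark that EG has fewer degrees of freedom than DG can be made quantitative with a back-of-the-envelope count; the structured triangular mesh assumed below is purely illustrative.

```python
# Structured triangulation of the unit square: an N x N grid of squares, each split
# into two triangles. P1 dof counts for CG, EG = CG + piecewise constants, and DG.
for N in (16, 64, 256):
    vertices = (N + 1) ** 2
    triangles = 2 * N ** 2
    cg = vertices                   # one dof per mesh vertex
    eg = vertices + triangles       # CG dofs plus one constant per element
    dg = 3 * triangles              # three uncoupled nodal dofs per triangle
    print(f"N={N:4d}  CG={cg:8d}  EG={eg:8d}  DG={dg:8d}  EG/DG={eg/dg:.2f}")
```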
Sean Hon, Haizhao Yang (2021)
We establish in this work approximation results of deep neural networks for smooth functions measured in Sobolev norms, motivated by recent development of numerical solvers for partial differential equations using deep neural networks. The error bounds are explicitly characterized in terms of both the width and depth of the networks simultaneously. Namely, for $f \in C^s([0,1]^d)$, we show that deep ReLU networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ can achieve a non-asymptotic approximation rate of $\mathcal{O}(N^{-2(s-1)/d}L^{-2(s-1)/d})$ with respect to the $\mathcal{W}^{1,p}([0,1]^d)$ norm for $p \in [1,\infty)$. If either the ReLU function or its square is applied as activation functions to construct deep neural networks of width $\mathcal{O}(N\log N)$ and of depth $\mathcal{O}(L\log L)$ to approximate $f \in C^s([0,1]^d)$, the non-asymptotic approximation rate is $\mathcal{O}(N^{-2(s-n)/d}L^{-2(s-n)/d})$ with respect to the $\mathcal{W}^{n,p}([0,1]^d)$ norm for $p \in [1,\infty)$.
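To illustrate what measuring approximation error in a Sobolev norm means, the sketch below computes the $\mathcal{W}^{1,2}$ error of a piecewise-linear interpolant of a smooth one-dimensional function; such an interpolant is exactly representable by a shallow ReLU network with roughly one neuron per knot, so this is only a width-only baseline, not the paper's deep construction. The target function and the quadrature rule are illustrative assumptions.

```python
import numpy as np

def w1p_error(f, df, knots, p=2, q=200):
    """||f - I_h f||_{W^{1,p}(0,1)} for the piecewise-linear interpolant on the given knots,
    approximated with q-point midpoint quadrature per subinterval."""
    fk = f(knots)
    err_p = 0.0
    for a, b, fa, fb in zip(knots[:-1], knots[1:], fk[:-1], fk[1:]):
        t = a + (b - a) * (np.arange(q) + 0.5) / q
        interp = fa + (fb - fa) * (t - a) / (b - a)
        slope = (fb - fa) / (b - a)
        err_p += (np.abs(f(t) - interp) ** p + np.abs(df(t) - slope) ** p).sum() * (b - a) / q
    return err_p ** (1.0 / p)

f = lambda x: np.sin(2.0 * np.pi * x)          # illustrative smooth target
df = lambda x: 2.0 * np.pi * np.cos(2.0 * np.pi * x)
for N in (8, 16, 32, 64, 128):
    knots = np.linspace(0.0, 1.0, N + 1)
    print(f"width ~ {N:4d} neurons:  W^(1,2) error = {w1p_error(f, df, knots):.3e}")
```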