
Robust discretization in quantized tensor train format for elliptic problems in two dimensions

Added by Andrei Chertkov
Publication date: 2016
Language: English





In this work we propose an efficient black-box solver for two-dimensional stationary diffusion equations, based on a new robust discretization scheme. The idea is to reformulate the equation in a derivative-free form with a non-local stencil, which leads to a linear system of equations with a dense matrix. This matrix and the right-hand side are represented in a low-rank parametric representation -- the quantized tensor train (QTT) format -- and all operations are then performed with logarithmic complexity and memory consumption. Hence very fine grids can be used, and very accurate solutions with extremely high spatial resolution can be obtained. Numerical experiments show that this formulation gives accurate results and can be used with up to $2^{60}$ grid points without conditioning problems, while the total computational time is only a few seconds.
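To make the QTT representation concrete, the following minimal sketch (an illustration of the general format, not the authors' solver) folds a vector sampled on $2^d$ grid points into a chain of $d$ tensor-train cores of mode size 2 by repeated reshaping and truncated SVDs; the function name quantize_tt and the tolerance eps are assumed for illustration only.

import numpy as np

def quantize_tt(v, eps=1e-10):
    # Fold a vector of length 2**d into d TT cores of mode size 2,
    # truncating singular values below a relative tolerance eps (TT-SVD).
    d = int(np.log2(v.size))
    assert v.size == 2 ** d, "vector length must be a power of two"
    delta = eps * np.linalg.norm(v) / max(np.sqrt(d - 1), 1.0)
    cores = []
    r_prev = 1
    rest = v.reshape(2, -1)
    for _ in range(d - 1):
        U, s, Vt = np.linalg.svd(rest, full_matrices=False)
        r = max(1, int(np.sum(s > delta)))            # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, 2, r))
        rest = (s[:r, None] * Vt[:r]).reshape(r * 2, -1)
        r_prev = r
    cores.append(rest.reshape(r_prev, 2, 1))
    return cores

# A smooth function sampled on 2**20 points compresses to very small ranks.
x = np.linspace(0.0, 1.0, 2 ** 20)
print([c.shape for c in quantize_tt(np.sin(2 * np.pi * x))])

The printed core shapes show that storage grows with the number of cores $d$ and the ranks, i.e. logarithmically in the grid size, which is what allows grids as fine as $2^{60}$ points in the paper's setting.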



Related research

Homogenization in terms of multiscale limits transforms a multiscale problem with $n+1$ asymptotically separated microscales posed on a physical domain $D \subset \mathbb{R}^d$ into a one-scale problem posed on a product domain of dimension $(n+1)d$ by introducing $n$ so-called fast variables. This procedure converts $n+1$ scales in $d$ physical dimensions into a single-scale structure in $(n+1)d$ dimensions. We prove here that both the original, physical multiscale problem and the corresponding high-dimensional, one-scale limiting problem can be efficiently treated numerically with the recently developed quantized tensor-train finite-element method (QTT-FEM). The method is based on restricting computation to sequences of nested subspaces of low dimensions (which are called tensor ranks) within a vast but generic virtual (background) discretization space. In the course of computation, these subspaces are computed iteratively and data-adaptively at runtime, bypassing any offline precomputation. For the purpose of theoretical analysis, such low-dimensional subspaces are constructed analytically to bound the tensor ranks in terms of the error $\tau>0$. We consider a model linear elliptic multiscale problem in several physical dimensions and show, theoretically and experimentally, that both (i) the solution of the associated high-dimensional one-scale problem and (ii) the corresponding approximation to the solution of the multiscale problem admit efficient approximation by the QTT-FEM. These problems can therefore be numerically solved in a scale-robust fashion by standard (low-order) PDE discretizations combined with state-of-the-art general-purpose solvers for tensor-structured linear systems. We prove scale-robust exponential convergence, i.e., that QTT-FEM achieves accuracy $\tau$ with the number of effective degrees of freedom scaling polynomially in $\log \tau$.
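As a hedged illustration of the two-scale reformulation described above (notation ours, for the simplest case of a single fast scale, $n=1$): the physical problem $-\nabla\cdot\bigl(A(x,x/\varepsilon)\,\nabla u_\varepsilon(x)\bigr)=f(x)$ on $D\subset\mathbb{R}^d$ has the two-scale expansion $u_\varepsilon(x)\approx u_0(x)+\varepsilon\,u_1(x,x/\varepsilon)$, where the pair $(u_0,u_1)$ solves a single-scale problem posed on the product domain $D\times Y$ with unit cell $Y=(0,1)^d$, i.e. in $2d$ variables; it is this $2d$-dimensional one-scale structure that the QTT-FEM then discretizes.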
The combination of low-rank tensor techniques and Fast Fourier transform (FFT) based methods has turned out to be prominent in accelerating various statistical operations such as Kriging, computing conditional covariances, and geostatistical optimal design, among others. However, the approximation of a full tensor by its low-rank format can be computationally formidable. In this work, we incorporate the robust Tensor Train (TT) approximation of covariance matrices and the efficient TT-Cross algorithm into FFT-based Kriging. It is shown that the computational complexity of Kriging is thereby reduced to $\mathcal{O}(d r^3 n)$, where $n$ is the mode size of the estimation grid, $d$ is the number of variables (the dimension), and $r$ is the rank of the TT approximation of the covariance matrix. For many popular covariance functions the TT rank $r$ remains stable as $n$ and $d$ increase. The advantages of this approach over those using plain FFT are demonstrated on synthetic and real data examples.
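For context, the simple-Kriging estimator that such FFT- and tensor-based methods accelerate can be written (a standard formula, not specific to this work) as $\hat{s} = \mu_s + C_{sy}\,C_{yy}^{-1}\,(y-\mu_y)$, where $y$ are the observations with mean $\mu_y$ and covariance matrix $C_{yy}$, and $C_{sy}$ is the cross-covariance between prediction sites and observation sites; it is the application of these large, structured covariance matrices that the FFT and the TT format make cheap.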
A linear PDE problem for randomly perturbed domains is considered in an adaptive Galerkin framework. The perturbation of the domain's boundary is described by a vector-valued random field depending on a countable number of random variables in an affine way. The corresponding Karhunen-Loève expansion is approximated by the pivoted Cholesky decomposition based on a prescribed covariance function. The examined high-dimensional Galerkin system follows from the domain mapping approach, which transfers the randomness from the domain to the diffusion coefficient and the forcing. To make this computationally feasible, the representation makes use of the modern tensor train format for the implicit compression of the problem. Moreover, an a posteriori error estimator is presented, which allows for the problem-dependent iterative refinement of all discretization parameters and the assessment of the achieved error reduction. The proposed approach is demonstrated on numerical benchmark problems.
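The pivoted Cholesky decomposition mentioned above is a standard low-rank factorization of a covariance matrix; the following numpy sketch (ours, with an assumed squared-exponential covariance and an illustrative tolerance tol) indicates how it produces a truncated factorization of the form L times its transpose, which can then be used to approximate the Karhunen-Loève expansion, while touching only the diagonal and a few columns of the covariance matrix.

import numpy as np

def pivoted_cholesky(get_column, diag, tol=1e-8, max_rank=None):
    # Low-rank pivoted Cholesky: builds L with C approximately L @ L.T,
    # column by column, touching only diag(C) and one column of C per pivot.
    n = diag.size
    max_rank = n if max_rank is None else max_rank
    d = diag.astype(float).copy()     # residual diagonal (trace of the error)
    L = np.zeros((n, 0))
    trace0 = d.sum()
    while L.shape[1] < max_rank and d.sum() > tol * trace0:
        i = int(np.argmax(d))         # next pivot: largest residual variance
        if d[i] <= 0.0:
            break
        col = get_column(i) - L @ L[i, :]
        L = np.column_stack([L, col / np.sqrt(d[i])])
        d = np.maximum(d - L[:, -1] ** 2, 0.0)
    return L

# Illustrative use: a squared-exponential covariance on a 1D grid (assumed example).
x = np.linspace(0.0, 1.0, 400)
column = lambda i: np.exp(-(x - x[i]) ** 2 / (2 * 0.2 ** 2))
L = pivoted_cholesky(column, diag=np.ones_like(x))
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.2 ** 2))
print(L.shape[1], np.linalg.norm(C - L @ L.T))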
Low-rank tensors are an established framework for high-dimensional least-squares problems. We propose to extend this framework by including the concept of block-sparsity. In the context of polynomial regression each sparsity pattern corresponds to some subspace of homogeneous multivariate polynomials. This allows us to adapt the ansatz space to align better with known sample complexity results. The resulting method is tested in numerical experiments and demonstrates improved computational resource utilization and sample efficiency.
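As a hedged illustration of the block-sparsity idea (notation ours): if the regression ansatz is a tensor-product polynomial $u(x_1,\dots,x_M)=\sum_{\alpha} c_{\alpha_1\cdots\alpha_M}\, P_{\alpha_1}(x_1)\cdots P_{\alpha_M}(x_M)$, then the coefficients with a fixed total degree $\alpha_1+\cdots+\alpha_M=k$ form one block corresponding to the homogeneous polynomials of degree $k$; keeping only blocks with $k\le g$ restricts the ansatz to total degree at most $g$, and this grouping induces the block-sparsity pattern imposed on the cores of the low-rank coefficient representation.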
A symmetric and a nonsymmetric variant of the additive Schwarz preconditioner are proposed for the solution of a nonsymmetric system of algebraic equations arising from a general finite volume element discretization of symmetric elliptic problems with large jumps in the entries of the coefficient matrices across subdomains. The analysis shows that the convergence of the preconditioned GMRES iteration with the proposed preconditioners depends only polylogarithmically on the mesh parameters; in other words, the convergence is only weakly dependent on the mesh parameters, and it is robust with respect to the jumps in the coefficients.
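To make the preconditioning idea concrete, here is a minimal sketch (ours; a plain one-level additive Schwarz without a coarse component, so not the exact variants analyzed above) applied within GMRES to a 1D finite-difference Laplacian in SciPy; the subdomain count and overlap width are arbitrary illustrative choices.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nsub, overlap = 256, 4, 16
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Overlapping subdomain index sets covering 0..n-1.
bounds = np.linspace(0, n, nsub + 1, dtype=int)
subdomains = [np.arange(max(bounds[k] - overlap, 0), min(bounds[k + 1] + overlap, n))
              for k in range(nsub)]
# Factorize each local block A_k = R_k A R_k^T once.
local_lu = [spla.splu(A[idx][:, idx].tocsc()) for idx in subdomains]

def apply_as(r):
    # One-level additive Schwarz: M^{-1} r = sum_k R_k^T A_k^{-1} R_k r.
    z = np.zeros_like(r)
    for idx, lu in zip(subdomains, local_lu):
        z[idx] += lu.solve(r[idx])
    return z

M = spla.LinearOperator((n, n), matvec=apply_as)
x, info = spla.gmres(A, b, M=M, restart=n, maxiter=n)
print(info, np.linalg.norm(A @ x - b))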
