
Accelerating Alternating Least Squares for Tensor Decomposition by Pairwise Perturbation

Posted by: Linjian Ma
Publication date: 2018
Research field: Informatics engineering
Paper language: English





The alternating least squares (ALS) algorithm for CP and Tucker decomposition is dominated in cost by the tensor contractions necessary to set up the quadratic optimization subproblems. We introduce a novel family of algorithms that uses perturbative corrections to the subproblems rather than recomputing the tensor contractions. This approximation is accurate when the factor matrices are changing little across iterations, which occurs when ALS approaches convergence. We provide a theoretical analysis to bound the approximation error. Our numerical experiments demonstrate that the proposed pairwise perturbation algorithms are easy to control and converge to minima that are as good as those found by ALS. The experimental results show speedups of up to 3.1x over state-of-the-art ALS approaches for various model tensor problems and real datasets.
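To make the cost structure concrete: below is a minimal numpy sketch of CP-ALS for a 3-way tensor (an illustration, not the authors' implementation). The `einsum` contraction in each factor update is the MTTKRP that dominates the cost and that the pairwise perturbation scheme approximates rather than recomputes; all function names are ours.

```python
import numpy as np

def cp_als(T, R, n_iter=100, seed=0):
    """Rank-R CP decomposition of a 3-way tensor by alternating least squares.

    Each sweep solves three quadratic subproblems; the dominant cost is the
    matricized-tensor-times-Khatri-Rao-product (MTTKRP) contraction, which
    pairwise perturbation approximates instead of recomputing.
    """
    rng = np.random.default_rng(seed)
    A = [rng.standard_normal((n, R)) for n in T.shape]

    for _ in range(n_iter):
        # Update each factor: MTTKRP contraction, then the normal equations
        # A_n = M G^{-1}, where G is the Hadamard product of the Gram matrices.
        G = (A[1].T @ A[1]) * (A[2].T @ A[2])
        M = np.einsum('ijk,jr,kr->ir', T, A[1], A[2])  # MTTKRP (dominant cost)
        A[0] = np.linalg.solve(G.T, M.T).T

        G = (A[0].T @ A[0]) * (A[2].T @ A[2])
        M = np.einsum('ijk,ir,kr->jr', T, A[0], A[2])
        A[1] = np.linalg.solve(G.T, M.T).T

        G = (A[0].T @ A[0]) * (A[1].T @ A[1])
        M = np.einsum('ijk,ir,jr->kr', T, A[0], A[1])
        A[2] = np.linalg.solve(G.T, M.T).T
    return A

# Example: recover a random rank-3 tensor and print the relative residual.
rng = np.random.default_rng(1)
U, V, W = (rng.standard_normal((d, 3)) for d in (8, 9, 10))
T = np.einsum('ir,jr,kr->ijk', U, V, W)
A = cp_als(T, R=3)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', *A)) / np.linalg.norm(T))
```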




Read also

In this work, we study the tensor ring decomposition and its associated numerical algorithms. We establish a sharp transition in the algorithmic difficulty of the optimization problem as the bond dimension increases: on one hand, we show the existence of spurious local minima in the optimization landscape even when the tensor ring format is much over-parameterized, i.e., with bond dimension much larger than that of the true target tensor. On the other hand, when the bond dimension is further increased, we establish one-loop convergence of the alternating least squares algorithm for tensor ring decomposition. The theoretical results are complemented by numerical experiments on both the local minima and the one-loop convergence of the alternating least squares algorithm.
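For concreteness, here is a hedged numpy sketch of what one ALS sweep for a 3-way tensor ring decomposition can look like; the function name, the 3-way restriction, and the plain `lstsq` solves are our assumptions, not the paper's experimental setup.

```python
import numpy as np

def tr_als_sweep(T, cores):
    """One ALS sweep over the cores of a 3-way tensor ring decomposition.

    cores[m] has shape (r_m, n_m, r_{m+1}), with bond indices wrapping around
    (r_3 = r_0). Each update fixes two cores, contracts them into an
    'environment' matrix, and solves a linear least-squares problem for the
    remaining core.
    """
    G = list(cores)
    for m in range(3):
        # Cyclically rotate modes so mode m comes first; the trace in the
        # ring format is invariant under this joint rotation of modes/cores.
        Tm = np.transpose(T, np.roll(np.arange(3), -m))
        g1, g2 = G[(m + 1) % 3], G[(m + 2) % 3]
        # Environment E[b, j, k, a] pairs with the free core's entries F[a, i, b].
        E = np.einsum('bjc,cka->bjka', g1, g2)
        rR, nA, nB, rL = E.shape
        M = E.transpose(1, 2, 3, 0).reshape(nA * nB, rL * rR)
        rhs = Tm.reshape(Tm.shape[0], -1).T
        sol, *_ = np.linalg.lstsq(M, rhs, rcond=None)
        G[m] = sol.reshape(rL, rR, Tm.shape[0]).transpose(0, 2, 1)
    return G

# One sweep on a random target; repeated sweeps give the full ALS loop.
rng = np.random.default_rng(0)
cores = [rng.standard_normal((2, n, 2)) for n in (6, 7, 8)]
T = np.einsum('aib,bjc,cka->ijk', *cores)
fit = tr_als_sweep(T, [rng.standard_normal(c.shape) for c in cores])
```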
Alternating least squares is the most widely used algorithm for CP tensor decomposition. However, alternating least squares may exhibit slow or no convergence, especially when high accuracy is required. An alternative approach is to regard CP decomposition as a nonlinear least squares problem and employ Newton-like methods. Direct solution of linear systems involving an approximated Hessian is generally expensive. However, recent advancements have shown that use of an implicit representation of the linear system makes these methods competitive with alternating least squares. We provide the first parallel implementation of a Gauss-Newton method for CP decomposition, which iteratively solves linear least squares problems at each Gauss-Newton step. In particular, we leverage a formulation that employs tensor contractions for implicit matrix-vector products within the conjugate gradient method. The use of tensor contractions enables us to employ the Cyclops library for distributed-memory tensor computations to parallelize the Gauss-Newton approach with a high-level Python implementation. In addition, we propose a regularization scheme for the Gauss-Newton method to improve convergence properties without any additional cost. We study the convergence of variants of the Gauss-Newton method relative to ALS for finding exact CP decompositions as well as approximate decompositions of real-world tensors. We evaluate the performance of sequential and parallel…
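The key idea, implicit matrix-vector products via tensor contractions inside conjugate gradient, fits in a short sketch. The numpy code below is our illustration for a 3-way CP model, not the paper's Cyclops-based implementation; the Tikhonov term `lam` merely stands in for the paper's regularization scheme.

```python
import numpy as np

def gn_matvec(A, V, lam=1e-6):
    """Apply the Gauss-Newton operator (J^T J + lam*I) to perturbations V of
    the CP factors A using only R x R Gram matrices; the Jacobian J is never
    formed. `lam` is a simple Tikhonov stand-in for a regularization scheme."""
    G = [X.T @ X for X in A]                      # Gram matrices A_n^T A_n
    S = [X.T @ Y for X, Y in zip(A, V)]           # cross terms A_n^T V_n
    out = []
    for n in range(3):
        l, m = (n + 1) % 3, (n + 2) % 3
        out.append(V[n] @ (G[l] * G[m])                    # diagonal block
                   + A[n] @ (S[l] * G[m] + G[l] * S[m])    # coupling blocks
                   + lam * V[n])                           # regularization
    return out

def cg_cp(A, B, n_iter=50, tol=1e-12):
    """Matrix-free conjugate gradient for the Gauss-Newton step:
    solves (J^T J + lam*I) X = B with B shaped like the factor list A."""
    dot = lambda P, Q: sum(np.vdot(p, q) for p, q in zip(P, Q))
    X = [np.zeros_like(b) for b in B]
    R = [b.copy() for b in B]                     # residual for the zero start
    P, rs = [r.copy() for r in R], dot(R, R)
    for _ in range(n_iter):
        Q = gn_matvec(A, P)
        alpha = rs / dot(P, Q)
        X = [x + alpha * p for x, p in zip(X, P)]
        R = [r - alpha * q for r, q in zip(R, Q)]
        rs_new = dot(R, R)
        if rs_new < tol:
            break
        P, rs = [r + (rs_new / rs) * p for r, p in zip(R, P)], rs_new
    return X
```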
Yuning Yang, 2019
The epsilon alternating least squares ($\epsilon$-ALS) is developed and analyzed for canonical polyadic decomposition (approximation) of a higher-order tensor where one or more of the factor matrices are assumed to be columnwisely orthonormal. It is shown that the algorithm globally converges to a KKT point for all tensors without any assumption. For the original ALS, by further studying the properties of the polar decomposition, we also establish its global convergence under a reality assumption not stronger than those in the literature. These results completely address a question concerning the global convergence raised in [L. Wang, M. T. Chu and B. Yu, \emph{SIAM J. Matrix Anal. Appl.}, 36 (2015), pp. 1--19]. In addition, an initialization procedure is proposed, which possesses a provable lower bound when the number of columnwisely orthonormal factors is one. Armed with this initialization procedure, numerical experiments show that the $\epsilon$-ALS exhibits a promising performance in terms of efficiency and effectiveness.
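The polar-decomposition update that underlies orthogonality-constrained ALS is compact enough to show directly. This numpy sketch covers only the orthogonal Procrustes solve; it does not reproduce the paper's $\epsilon$ modification or its initialization procedure.

```python
import numpy as np

def orthonormal_factor_update(M):
    """ALS subproblem under a columnwise-orthonormality constraint: the
    maximizer of <A, M> over A with A^T A = I is the orthogonal polar
    factor of M (an orthogonal Procrustes problem), via a thin SVD."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

# Sanity check: the update is columnwise orthonormal.
rng = np.random.default_rng(0)
A = orthonormal_factor_update(rng.standard_normal((10, 3)))
print(np.allclose(A.T @ A, np.eye(3)))  # True
```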
We consider best approximation problems in a nonlinear subset $\mathcal{M}$ of a Banach space of functions $(\mathcal{V}, \|\bullet\|)$. The norm is assumed to be a generalization of the $L^2$-norm for which only a weighted Monte Carlo estimate $\|\bullet\|_n$ can be computed. The objective is to obtain an approximation $v \in \mathcal{M}$ of an unknown function $u \in \mathcal{V}$ by minimizing the empirical norm $\|u-v\|_n$. We consider this problem for general nonlinear subsets and establish error bounds for the empirical best approximation error. Our results are based on a restricted isometry property (RIP) which holds in probability and is independent of the nonlinear least squares setting. Several model classes are examined where analytical statements can be made about the RIP and the results are compared to existing sample complexity bounds from the literature. We find that for well-studied model classes our general bound is weaker but exhibits many of the same properties as these specialized bounds. Notably, we demonstrate the advantage of an optimal sampling density (as known for linear spaces) for sets of functions with sparse representations.
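As a concrete anchor for the empirical norm: in the special case where the model class is a linear space, minimizing $\|u-v\|_n$ reduces to weighted least squares. The sketch below uses a uniform sampling density and a polynomial space purely for illustration; all names are ours.

```python
import numpy as np

# Hypothetical setup: approximate u on [0, 1] in a polynomial space using only
# a weighted Monte Carlo norm estimate ||g||_n^2 = (1/n) sum_i w_i g(x_i)^2.
rng = np.random.default_rng(0)
u = lambda x: np.exp(np.sin(2 * np.pi * x))

n, degree = 200, 8
x = rng.uniform(0.0, 1.0, n)          # uniform sampling: density rho = 1
w = np.ones(n)                        # weights w_i = 1 / rho(x_i)

# Basis matrix for the (linear) model class span{x^0, ..., x^degree}.
B = np.vander(x, degree + 1, increasing=True)

# Minimizing the empirical norm over the span is weighted least squares.
sw = np.sqrt(w / n)
coef, *_ = np.linalg.lstsq(sw[:, None] * B, sw * u(x), rcond=None)

# Validate on a dense grid: empirical best approximation vs. the true function.
xt = np.linspace(0, 1, 1000)
v = np.vander(xt, degree + 1, increasing=True) @ coef
print(np.max(np.abs(v - u(xt))))
```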
There are plenty of applications and analyses for time-independent elliptic partial differential equations in the literature hinting at the benefits of overtesting by using more collocation conditions than the number of basis functions. Overtesting not only reduces the problem size, but is also known to be necessary for stability and convergence of widely used unsymmetric Kansa-type strong-form collocation methods. We consider kernel-based meshfree methods, which are methods of lines with collocation and overtesting spatially, for solving parabolic partial differential equations on surfaces without parametrization. In this paper, we extend the time-independent convergence theories for overtesting techniques to parabolic equations on smooth and closed surfaces.
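To illustrate overtesting in the simplest possible setting: the numpy sketch below solves a 1D Poisson problem (a stationary stand-in for the paper's parabolic surface setting) by Kansa-type collocation with more collocation points than RBF centers, so the overdetermined system is solved in the least-squares sense. All parameters are illustrative.

```python
import numpy as np

# Solve u'' = f on (0, 1) with u(0) = u(1) = 0 using a Gaussian RBF basis
# at m centers but n > m collocation points (the overtesting).
f = lambda x: -np.pi**2 * np.sin(np.pi * x)     # exact solution: sin(pi x)

m, n, sig = 15, 60, 0.15                        # n > m: overtesting
centers = np.linspace(0, 1, m)
xc = np.linspace(0, 1, n)                       # collocation points

def phi(x, c):                                  # Gaussian kernel matrix
    return np.exp(-(x[:, None] - c[None, :])**2 / (2 * sig**2))

def phi_xx(x, c):                               # its second x-derivative
    d = x[:, None] - c[None, :]
    return (d**2 / sig**4 - 1 / sig**2) * phi(x, c)

# Stack the overtested PDE rows and two boundary rows; solve least squares.
A = np.vstack([phi_xx(xc, centers), phi(np.array([0.0, 1.0]), centers)])
b = np.concatenate([f(xc), [0.0, 0.0]])
coef, *_ = np.linalg.lstsq(A, b, rcond=None)

xt = np.linspace(0, 1, 400)
print(np.max(np.abs(phi(xt, centers) @ coef - np.sin(np.pi * xt))))
```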