
Estimation of low-rank tensors via convex optimization

Added by Ryota Tomioka
Publication date: 2010
Language: English





In this paper, we propose three approaches for estimating the Tucker decomposition of multi-way arrays (tensors) from partial observations. All approaches are formulated as convex minimization problems, so any local minimum is guaranteed to be a global minimum. The proposed approaches can automatically estimate the number of factors (the rank) through the optimization, so there is no need to specify the rank beforehand. The key technique we employ is trace norm regularization, a popular approach for the estimation of low-rank matrices. In addition, we propose a simple heuristic to improve the interpretability of the obtained factorization. The advantages and disadvantages of the three proposed approaches are demonstrated through numerical experiments on both synthetic and real-world datasets. We show that the proposed convex-optimization-based approaches are more accurate in predictive performance, faster, and more reliable in recovering a known multilinear structure than conventional approaches.
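To make the trace-norm idea concrete, here is a minimal sketch (not the paper's exact algorithms) of tensor completion with an overlapped trace-norm penalty: each mode-k unfolding of the tensor is pushed toward low rank by singular value thresholding (the proximal operator of the trace norm), and the observed entries are re-imposed after every pass. The simple averaging scheme, the threshold tau, and all function names are illustrative assumptions, not the paper's three formulations.

    import numpy as np

    def unfold(T, k):
        # mode-k unfolding: bring axis k to the front, flatten the remaining axes
        return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

    def fold(M, k, shape):
        # inverse of unfold for a tensor of the given shape
        rest = [s for i, s in enumerate(shape) if i != k]
        return np.moveaxis(M.reshape([shape[k]] + rest), 0, k)

    def svt(M, tau):
        # proximal operator of the trace (nuclear) norm: soft-threshold singular values
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def complete(T_obs, mask, tau=1.0, n_iter=200):
        # T_obs: tensor with observed entries; mask: boolean tensor of observations
        X = np.where(mask, T_obs, 0.0)
        for _ in range(n_iter):
            # average the thresholded reconstructions over all modes
            X = sum(fold(svt(unfold(X, k), tau), k, X.shape)
                    for k in range(X.ndim)) / X.ndim
            X[mask] = T_obs[mask]  # keep observed entries fixed
        return X

Because soft-thresholding shrinks small singular values to zero in every unfolding, the multilinear ranks are selected automatically, mirroring the rank-free property described in the abstract.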



Related research

M. Journée, F. Bach, P.-A. Absil (2008)
We propose an algorithm for solving nonlinear convex programs defined in terms of a symmetric positive semi-definite matrix variable $X$. The algorithm rests on the factorization $X = YY^T$, where the number of columns of $Y$ fixes the rank of $X$; it is thus very effective for solving programs that have a low-rank solution. The factorization $X = YY^T$ leads to a reformulation of the original problem as an optimization on a particular quotient manifold. The paper discusses the geometry of that manifold, derives a second-order optimization method, and provides conditions on the rank of the factorization that ensure equivalence with the original problem. The efficiency of the proposed algorithm is illustrated on two applications: the maximal cut of a graph and the sparse principal component analysis problem.
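As a toy illustration of the $X = YY^T$ idea (only a first-order stand-in for the paper's second-order manifold method), consider the max-cut SDP relaxation: maximize $langle L, X rangle$ subject to $diag(X) = 1$ and $X$ positive semi-definite. With $X = YY^T$, the diagonal constraint says the rows of $Y$ have unit norm, so projected gradient ascent reduces to a row renormalization after each step. The step size, iteration count, and rank $r$ below are illustrative assumptions.

    import numpy as np

    def maxcut_low_rank(L, r, step=0.01, n_iter=500, seed=0):
        # L: symmetric cost matrix (e.g. graph Laplacian); r: number of columns of Y
        rng = np.random.default_rng(seed)
        n = L.shape[0]
        Y = rng.standard_normal((n, r))
        Y /= np.linalg.norm(Y, axis=1, keepdims=True)      # feasible start: unit-norm rows
        for _ in range(n_iter):
            Y += step * (L @ Y)                            # ascent direction: grad of tr(L Y Y^T) is 2 L Y
            Y /= np.linalg.norm(Y, axis=1, keepdims=True)  # project back onto the constraint set
        return Y @ Y.T                                     # low-rank psd candidate for X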
Density matrices are positive semi-definite Hermitian matrices of unit trace that describe the state of a quantum system. The goal of the paper is to develop minimax lower bounds on the error rates of estimating low-rank density matrices in trace regression models used in quantum state tomography (in particular, with Pauli measurements), with explicit dependence of the bounds on the rank and other complexity parameters. Such bounds are established for several statistically relevant distances.
This work considers two popular minimization problems: (i) the minimization of a general convex function $f(\mathbf{X})$ over positive semi-definite matrices; (ii) the minimization of a general convex function $f(\mathbf{X})$ regularized by the matrix nuclear norm $\|\mathbf{X}\|_*$ over general matrices. Despite their optimal statistical performance in the literature, these two optimization problems have high computational complexity even when solved using tailored fast convex solvers. To develop faster and more scalable algorithms, we follow the proposal of Burer and Monteiro to factor the low-rank variable as $\mathbf{X} = \mathbf{U}\mathbf{U}^\top$ (for semi-definite matrices) or $\mathbf{X} = \mathbf{U}\mathbf{V}^\top$ (for general matrices) and to replace the nuclear norm $\|\mathbf{X}\|_*$ with $(\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2)/2$. Despite the non-convexity of the resulting factored formulations, we prove that each critical point either corresponds to the global optimum of the original convex problem or is a strict saddle where the Hessian matrix has a strictly negative eigenvalue. This geometric structure allows many local search algorithms to find a global optimizer even with random initialization.
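A minimal sketch of this factored formulation, assuming for concreteness that $f$ is the squared error on observed entries of a matrix $\mathbf{M}$ (the result above covers general convex $f$): plain gradient descent on $(\mathbf{U}, \mathbf{V})$ replaces the nuclear-norm proximal step a convex solver would need. All names and hyperparameters are illustrative.

    import numpy as np

    def factored_completion(M, mask, r, lam=0.1, step=0.01, n_iter=1000, seed=0):
        # minimize 0.5*||mask*(U V^T - M)||_F^2 + (lam/2)*(||U||_F^2 + ||V||_F^2)
        rng = np.random.default_rng(seed)
        m, n = M.shape
        U = 0.1 * rng.standard_normal((m, r))
        V = 0.1 * rng.standard_normal((n, r))
        for _ in range(n_iter):
            R = mask * (U @ V.T - M)   # residual on observed entries only
            gU = R @ V + lam * U       # gradient with respect to U
            gV = R.T @ U + lam * V     # gradient with respect to V
            U -= step * gU
            V -= step * gV
        return U @ V.T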
Yuetian Luo, Anru R. Zhang (2021)
In this paper, we consider the estimation of a low Tucker rank tensor from a number of noisy linear measurements. This general problem covers many specific examples arising in applications, including tensor regression, tensor completion, and tensor PCA/SVD. We propose a Riemannian Gauss-Newton (RGN) method with fast implementations for low Tucker rank tensor estimation. Unlike the generic (super)linear convergence guarantees for RGN in the literature, we prove the first quadratic convergence guarantee for RGN in low-rank tensor estimation under mild conditions. A deterministic estimation error lower bound, which matches the upper bound, demonstrates the statistical optimality of RGN. The merit of RGN is illustrated through two machine learning applications: tensor regression and tensor SVD. Finally, we provide simulation results to corroborate our theoretical findings.
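RGN itself is beyond a short sketch, but the low Tucker rank structure it exploits is easy to show. Below is a truncated higher-order SVD (HOSVD), a standard way to produce a low Tucker rank approximation; such truncations are commonly used to initialize methods like RGN, though this is an assumption about usage, not a claim about the paper's implementation.

    import numpy as np

    def unfold(T, k):
        # mode-k unfolding: bring axis k to the front, flatten the rest
        return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

    def hosvd(T, ranks):
        # truncated HOSVD: ranks[k] is the target Tucker rank along mode k
        Us = []
        for k in range(T.ndim):
            U, _, _ = np.linalg.svd(unfold(T, k), full_matrices=False)
            Us.append(U[:, :ranks[k]])   # leading left singular vectors of mode k
        core = T
        for k, U in enumerate(Us):
            # mode-k product with U^T: contract axis k of the core against U's columns
            core = np.moveaxis(np.tensordot(U.T, core, axes=(1, k)), 0, k)
        return core, Us  # T is approximated by the core multiplied by each U along its mode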
Various problems in data analysis and statistical genetics call for recovery of a column-sparse, low-rank matrix from noisy observations. We propose ReFACTor, a simple variation of the classical Truncated Singular Value Decomposition (TSVD) algorithm. In contrast to previous sparse principal component analysis (PCA) algorithms, our algorithm can provably reveal a low-rank signal matrix better, and often significantly better, than the widely used TSVD, making it the algorithm of choice whenever column-sparsity is suspected. Empirically, we observe that ReFACTor consistently outperforms TSVD even when the underlying signal is not sparse, suggesting that it is generally safe to use ReFACTor instead of TSVD and PCA. The algorithm is extremely simple to implement and its running time is dominated by the runtime of PCA, making it as practical as standard principal component analysis.
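The abstract does not spell out ReFACTor's variation, so no attempt is made to reproduce it here; for reference, this is the classical TSVD baseline it is compared against: keep the top r singular triplets of the noisy matrix as the low-rank signal estimate.

    import numpy as np

    def tsvd(Y, r):
        # rank-r truncated SVD: the baseline low-rank signal estimate
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]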
