
Simple and practical algorithms for $\ell_p$-norm low-rank approximation

Publication date: 2018
Language: English





We propose practical algorithms for entrywise $\ell_p$-norm low-rank approximation, for $p = 1$ or $p = \infty$. The proposed framework, which is non-convex and gradient-based, is easy to implement and typically attains better approximations, faster, than the state of the art. From a theoretical standpoint, we show that the proposed scheme can attain $(1 + \varepsilon)$-OPT approximations. Our algorithms are not hyperparameter-free: they achieve the desiderata only under the assumption that the algorithms' hyperparameters are known a priori, or are at least approximable. That is, our theory indicates which problem quantities need to be known in order to obtain a good solution in polynomial time, and it does not contradict recent inapproximability results, such as those in [46].
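To make the gradient-based idea concrete, here is a minimal NumPy sketch of subgradient descent on the factored objective $\|A - UV^\top\|_1$. The factorization, step size, and iteration budget are illustrative assumptions, not the paper's exact scheme.

```python
# Minimal sketch: entrywise l1 low-rank approximation by subgradient
# descent on min_{U,V} ||A - U V^T||_1. Hyperparameters are arbitrary.
import numpy as np

def l1_lowrank(A, k, steps=500, lr=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n, d = A.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((d, k))
    for _ in range(steps):
        R = A - U @ V.T          # residual
        S = np.sign(R)           # entrywise subgradient of the l1 loss
        gU = S @ V               # -d/dU ||A - UV^T||_1
        gV = S.T @ U             # -d/dV ||A - UV^T||_1
        U += lr * gU
        V += lr * gV
    return U, V

A = np.random.default_rng(1).standard_normal((50, 30))
U, V = l1_lowrank(A, k=5)
print("l1 error:", np.abs(A - U @ V.T).sum())
```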




Related research

A number of recent works have studied algorithms for entrywise $\ell_p$-low rank approximation, namely, algorithms which, given an $n \times d$ matrix $A$ (with $n \geq d$), output a rank-$k$ matrix $B$ minimizing $\|A-B\|_p^p=\sum_{i,j}|A_{i,j}-B_{i,j}|^p$ when $p > 0$, and $\|A-B\|_0=\sum_{i,j}[A_{i,j} \neq B_{i,j}]$ for $p=0$. On the algorithmic side, for $p \in (0,2)$, we give the first $(1+\epsilon)$-approximation algorithm running in time $n^{\text{poly}(k/\epsilon)}$. Further, for $p = 0$, we give the first almost-linear time approximation scheme for what we call the Generalized Binary $\ell_0$-Rank-$k$ problem. Our algorithm computes a $(1+\epsilon)$-approximation in time $(1/\epsilon)^{2^{O(k)}/\epsilon^{2}} \cdot nd^{1+o(1)}$. On the hardness of approximation side, for $p \in (1,2)$, assuming the Small Set Expansion Hypothesis and the Exponential Time Hypothesis (ETH), we show that there exists $\delta := \delta(\alpha) > 0$ such that the entrywise $\ell_p$-Rank-$k$ problem has no $\alpha$-approximation algorithm running in time $2^{k^{\delta}}$.
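For reference, the two objectives defined above are straightforward to evaluate; the helper below is a hypothetical illustration, not code from the paper.

```python
# Evaluate the entrywise l_p error (p > 0) or the l_0 mismatch count (p = 0).
import numpy as np

def entrywise_error(A, B, p):
    D = A - B
    if p == 0:
        return int(np.count_nonzero(D))   # |{(i, j) : A_ij != B_ij}|
    return float(np.sum(np.abs(D) ** p))  # sum_{i,j} |A_ij - B_ij|^p
```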
Yifei Shen, Ye Xue, Jun Zhang (2020)
Dictionary learning is a classic representation learning method that has been widely applied in signal processing and data analytics. In this paper, we investigate a family of $\ell_p$-norm ($p>2$, $p \in \mathbb{N}$) maximization approaches for the complete dictionary learning problem from theoretical and algorithmic aspects. Specifically, we prove that the global maximizers of these formulations are very close to the true dictionary with high probability, even when Gaussian noise is present. Based on the generalized power method (GPM), an efficient algorithm is then developed for the $\ell_p$-based formulations. We further show the efficacy of the developed algorithm: the population GPM algorithm over the sphere constraint first quickly enters the neighborhood of a global maximizer, and then converges linearly in this region. Extensive experiments demonstrate that the $\ell_p$-based approaches enjoy higher computational efficiency and better robustness than conventional approaches, and that $p=3$ performs best.
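The GPM update is simple to state: take the gradient of the $\ell_p$ objective and project back onto the unit sphere. Below is a minimal NumPy sketch for recovering a single dictionary direction with $p = 3$; the random initialization and fixed iteration count are assumptions for illustration, not the paper's exact prescription, and deflation across multiple directions is omitted.

```python
# One-direction GPM sketch for maximizing sum_i |y_i^T w|^p over the sphere.
import numpy as np

def gpm_lp(Y, p=3, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Y.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        c = Y.T @ w                                   # correlations y_i^T w
        g = Y @ (np.abs(c) ** (p - 1) * np.sign(c))   # gradient (up to the factor p)
        w = g / np.linalg.norm(g)                     # project back to the sphere
    return w
```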
Sparse tensor best rank-1 approximation (BR1Approx), a sparsity generalization of the dense tensor BR1Approx and a higher-order extension of the sparse matrix BR1Approx, is one of the most important problems in sparse tensor decomposition and in related problems arising in statistics and machine learning. By exploiting the multilinearity as well as the sparsity structure of the problem, we propose four approximation algorithms that are easy to implement, have low computational complexity, and can serve as initialization procedures for iterative algorithms. In addition, theoretically guaranteed worst-case approximation lower bounds are proved for all the algorithms. We provide numerical experiments on synthetic and real data to illustrate the effectiveness of the proposed algorithms.
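For a flavor of the problem in its simplest setting, here is a hedged sketch of the matrix (order-2) special case: alternating power iterations with hard thresholding to prescribed sparsity levels. The paper's algorithms handle the general tensor case and come with worst-case guarantees; this sketch carries none.

```python
# Sparse rank-1 matrix approximation via thresholded power iteration.
import numpy as np

def hard_threshold(x, s):
    # keep the s largest-magnitude entries, zero out the rest
    y = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    y[idx] = x[idx]
    return y

def sparse_rank1(A, s_u, s_v, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = hard_threshold(A @ v, s_u)
        u /= np.linalg.norm(u)
        v = hard_threshold(A.T @ u, s_v)
        v /= np.linalg.norm(v)
    sigma = u @ A @ v
    return sigma, u, v
```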
We consider $\ell_1$-Rank-$r$ Approximation over GF(2), where for a binary $m \times n$ matrix $\mathbf{A}$ and a positive integer $r$, one seeks a binary matrix $\mathbf{B}$ of rank at most $r$, minimizing the column-sum norm $\|\mathbf{A}-\mathbf{B}\|_1$. We show that for every $\varepsilon \in (0, 1)$, there is a randomized $(1+\varepsilon)$-approximation algorithm for $\ell_1$-Rank-$r$ Approximation over GF(2) of running time $m^{O(1)} n^{O(2^{4r} \cdot \varepsilon^{-4})}$. This is the first polynomial time approximation scheme (PTAS) for this problem.
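For intuition, the $r = 1$ case admits an exact brute force on tiny instances: a rank-1 binary matrix over GF(2) is an outer product $uv^\top$, so for each candidate $u$ the optimal $v$ can be chosen column by column (each column of $\mathbf{B}$ is either $u$ or the zero vector). The sketch below is exponential in $m$ and purely a sanity-check baseline, unrelated to the paper's PTAS.

```python
# Exact l1-Rank-1 over GF(2) by enumerating u; feasible only for tiny m.
import numpy as np
from itertools import product

def best_rank1_gf2(A):
    m, n = A.shape
    best = (int(A.sum()) + 1, None, None)       # cost of B = 0 is an upper bound
    for bits in product([0, 1], repeat=m):
        u = np.array(bits)
        # per column j: v_j = 1 costs Hamming(A[:, j], u); v_j = 0 costs |A[:, j]|
        cost_on = (A != u[:, None]).sum(axis=0)
        cost_off = A.sum(axis=0)
        v = (cost_on < cost_off).astype(int)
        cost = int(np.minimum(cost_on, cost_off).sum())
        if cost < best[0]:
            best = (cost, u, v)
    return best

A = np.array([[1, 0, 1],
              [1, 1, 1],
              [0, 0, 1]])
print(best_rank1_gf2(A))
```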
Linjian Ma, Chao Yang (2021)
Simulating quantum algorithms on classical computers is challenging when the system size, i.e., the number of qubits used in the quantum algorithm, is moderately large. However, some quantum algorithms and the corresponding quantum circuits can be simulated efficiently on a classical computer if the input quantum state is a low-rank tensor and all intermediate states of the quantum algorithm can be represented or approximated by low-rank tensors. In this paper, we examine the possibility of simulating a few quantum algorithms by using low-rank canonical polyadic (CP) decomposition to represent the input and all intermediate states of these algorithms. Two rank reduction algorithms are used to enable efficient simulation. We show that some of the algorithms preserve the low-rank structure of the input state and can thus be efficiently simulated on a classical computer. However, the rank of the intermediate states in other quantum algorithms can increase rapidly, making efficient simulation more difficult. To some extent, such difficulty reflects the advantage or superiority of a quantum computer over a classical computer. As a result, understanding the low-rank structure of a quantum algorithm allows us to identify algorithms that can benefit significantly from quantum computers.
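As a toy illustration of the CP representation (an assumed data layout, not the paper's implementation), the sketch below stores an $n$-qubit state as a list of rank-1 terms, each a product of $n$ single-qubit vectors. A one-qubit gate acts on one factor per term and preserves the CP rank, while a CNOT splits each term in two (since CNOT $= P_0 \otimes I + P_1 \otimes X$), which is exactly the rank-growth phenomenon described above.

```python
# Toy CP-form n-qubit state: a list of rank-1 terms, each a list of n
# length-2 complex factors. Illustrative layout only.
import numpy as np

P0 = np.array([[1, 0], [0, 0]], dtype=complex)
P1 = np.array([[0, 0], [0, 1]], dtype=complex)
X  = np.array([[0, 1], [1, 0]], dtype=complex)

def apply_1q(terms, gate, q):
    # rank-preserving: multiply factor q of every rank-1 term by the gate
    return [[gate @ f if i == q else f for i, f in enumerate(t)] for t in terms]

def apply_cnot(terms, c, t):
    # rank can double: CNOT = P0(c) x I(t) + P1(c) x X(t)
    out = []
    for term in terms:
        out.append([P0 @ f if i == c else f for i, f in enumerate(term)])
        out.append([P1 @ f if i == c else (X @ f if i == t else f)
                    for i, f in enumerate(term)])
    return out

# |00> is a single rank-1 term; H then CNOT yields a Bell state of CP rank 2.
terms = [[np.array([1, 0], dtype=complex), np.array([1, 0], dtype=complex)]]
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
terms = apply_1q(terms, H, 0)       # still rank 1
terms = apply_cnot(terms, 0, 1)     # rank grows to 2
print("CP rank:", len(terms))
```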
