
Jointly Low-Rank and Bisparse Recovery: Questions and Partial Answers

Added by Laurent Jacques
Publication date: 2019
Language: English





We investigate the problem of recovering jointly $r$-rank and $s$-bisparse matrices from as few linear measurements as possible, considering arbitrary measurements as well as rank-one measurements. In both cases, we show that $m \asymp r s \ln(en/s)$ measurements make the recovery possible in theory, meaning via a nonpractical algorithm. In the case of arbitrary measurements, we investigate the possibility of achieving practical recovery via an iterative-hard-thresholding algorithm when $m \asymp r s^\gamma \ln(en/s)$ for some exponent $\gamma > 0$. We show that this is feasible for $\gamma = 2$, and that the proposed analysis cannot cover the case $\gamma \leq 1$. The precise value of the optimal exponent $\gamma \in [1,2]$ is the object of a question, raised but unresolved in this paper, about head projections for the jointly low-rank and bisparse structure. Some related questions are partially answered in passing. For rank-one measurements, we suggest on heuristic grounds an iterative-hard-thresholding algorithm modified to exploit the nonstandard restricted isometry property obeyed by this type of measurements.
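As a point of reference for the practical-recovery discussion, here is a minimal numpy sketch of such an iterative-hard-thresholding scheme. The composite projection used here (select the $s$ most energetic rows and columns, then truncate the retained submatrix to rank $r$) is only a heuristic surrogate for a true head projection onto the jointly low-rank and bisparse set, which is exactly the open question the paper raises; the function names and the step size `mu` are illustrative assumptions, not the paper's algorithm.

```python
# A minimal sketch of IHT for jointly r-rank and s-bisparse recovery
# from y = A vec(X), with X an n-by-n matrix and A an m-by-n^2 matrix.
import numpy as np

def project_bisparse_lowrank(X, s, r):
    """Heuristic surrogate projection: keep the s rows and s columns with
    largest ell_2 norms, then truncate the retained submatrix to rank r."""
    rows = np.argsort(np.linalg.norm(X, axis=1))[-s:]
    cols = np.argsort(np.linalg.norm(X, axis=0))[-s:]
    Z = np.zeros_like(X)
    U, sig, Vt = np.linalg.svd(X[np.ix_(rows, cols)], full_matrices=False)
    Z[np.ix_(rows, cols)] = (U[:, :r] * sig[:r]) @ Vt[:r]
    return Z

def iht(A, y, n, s, r, n_iter=200, mu=1.0):
    """IHT with a generic linear map A acting on the (row-major) vec(X).
    Assumption: A is scaled so that the step size mu = 1 is admissible,
    e.g. A satisfies a restricted isometry property on the model set."""
    X = np.zeros((n, n))
    for _ in range(n_iter):
        neg_grad = A.T @ (y - A @ X.ravel())   # negative gradient of 0.5*||y - A vec(X)||^2
        X = project_bisparse_lowrank(X + mu * neg_grad.reshape(n, n), s, r)
    return X
```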



Related research

This note studies the worst-case recovery error of low-rank and bisparse matrices as a function of the number of one-bit measurements used to acquire them. First, by way of the concept of consistency width, precise estimates are given of how fast the recovery error can decay in theory. Next, an idealized recovery method is proved to achieve the fourth root of the optimal decay rate for Gaussian sensing schemes. Since this idealized method is impractical, an implementable recovery algorithm is finally proposed in the context of factorized Gaussian sensing schemes; it is shown to provide a recovery error decaying as the sixth root of the optimal rate.
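For concreteness, here is a small numpy sketch of the one-bit Gaussian sensing model this note works with, together with the sign-consistency test that underlies the notion of consistency width; the dimensions and helper names are illustrative assumptions.

```python
# One-bit sensing of a jointly low-rank and bisparse matrix:
# each measurement is the sign of a Gaussian linear functional of X.
import numpy as np

rng = np.random.default_rng(0)
n, s, r, m = 50, 5, 2, 2000

# Build a jointly r-rank and s-bisparse ground truth.
X = np.zeros((n, n))
rows = rng.choice(n, s, replace=False)
cols = rng.choice(n, s, replace=False)
X[np.ix_(rows, cols)] = rng.standard_normal((s, r)) @ rng.standard_normal((r, s))
X /= np.linalg.norm(X)                  # one-bit data only sees the direction of X

A = rng.standard_normal((m, n * n))     # Gaussian sensing scheme
y = np.sign(A @ X.ravel())              # one-bit measurements

def consistent(Z, A, y):
    """A candidate Z is consistent if it reproduces the observed sign
    pattern; the worst-case diameter of the set of consistent candidates
    is what the consistency width quantifies."""
    return np.all(np.sign(A @ Z.ravel()) == y)

print(consistent(X, A, y))              # True: the ground truth is consistent
```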
Low-rank tensor recovery problems have been widely studied in many applications of signal processing and machine learning. Tucker decomposition is one of the most popular decompositions in the tensor framework. In recent years, researchers have developed many state-of-the-art algorithms to address the problem of low-Tucker-rank tensor recovery. Motivated by the favorable properties of stochastic algorithms, such as stochastic gradient descent and stochastic iterative hard thresholding, we extend the well-known stochastic iterative hard thresholding algorithm to the tensor framework in order to recover a low-Tucker-rank tensor from its linear measurements. We also develop a linear convergence analysis for the proposed method and conduct a series of experiments with both synthetic and real data to illustrate its performance.
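A minimal sketch of such a stochastic iterative-hard-thresholding loop follows, with sequentially truncated HOSVD standing in as the projection onto low multilinear (Tucker) rank; the batch size, step size, and this particular choice of rank truncation are assumptions for illustration, not the paper's exact method.

```python
# Stochastic IHT for recovering a low-Tucker-rank tensor from y = A vec(T).
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding of a tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-`mode` product: multiply the matrix M into T along `mode`."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hosvd_truncate(T, ranks):
    """Quasi-optimal projection onto multilinear rank <= ranks,
    by sequentially truncating each mode's unfolding."""
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        P = U[:, :r] @ U[:, :r].T          # orthogonal projector for this mode
        T = mode_mult(T, P, mode)
    return T

def stochastic_iht(A, y, shape, ranks, n_iter=300, batch=64, mu=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    T = np.zeros(shape)
    for _ in range(n_iter):
        idx = rng.choice(len(y), batch, replace=False)   # random measurement batch
        res = y[idx] - A[idx] @ T.ravel()
        T = T + mu * (A[idx].T @ res).reshape(shape)     # stochastic gradient step
        T = hosvd_truncate(T, ranks)                     # hard-thresholding step
    return T
```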
Information is extracted from large and sparse data sets organized as 3-mode tensors. Two methods are described, based on best rank-(2,2,2) and rank-(2,2,1) approximation of the tensor. The first method can be considered as a generalization of spectral graph partitioning to tensors, and it gives a reordering of the tensor that clusters the information. The second method gives an expansion of the tensor in sparse rank-(2,2,1) terms, where the terms correspond to graphs. The low-rank approximations are computed using an efficient Krylov-Schur type algorithm that avoids filling in the sparse data. The methods are applied to topic search in news text, a tensor representing conference author-terms-years, and network traffic logs.
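To convey the shape of a best rank-(2,2,2) approximation, here is a dense toy implementation using higher-order orthogonal iteration (HOOI); this deliberately replaces the paper's Krylov-Schur type algorithm, whose point is to avoid filling in large sparse data, with the simplest alternating scheme on a small dense tensor.

```python
# HOOI: alternating computation of a rank-(2,2,2) approximation of a 3-mode tensor.
import numpy as np

def unfold(T, mode):
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

def hooi(T, ranks=(2, 2, 2), n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    # Random orthonormal starting factors, one per mode.
    U = [np.linalg.qr(rng.standard_normal((T.shape[k], ranks[k])))[0]
         for k in range(3)]
    for _ in range(n_iter):
        for mode in range(3):
            # Contract all factors except `mode`, then keep the leading
            # left singular vectors of the resulting unfolding.
            G = T
            for k in [j for j in range(3) if j != mode]:
                G = mode_mult(G, U[k].T, k)
            W, _, _ = np.linalg.svd(unfold(G, mode), full_matrices=False)
            U[mode] = W[:, :ranks[mode]]
    # Core tensor and the rank-(2,2,2) approximation itself.
    G = T
    for k in range(3):
        G = mode_mult(G, U[k].T, k)
    approx = G
    for k in range(3):
        approx = mode_mult(approx, U[k], k)
    return approx, U
```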
This paper introduces new solvers for the computation of low-rank approximate solutions to large-scale linear problems, with a particular focus on the regularization of linear inverse problems. Although Krylov methods incorporating explicit projections onto low-rank subspaces are already used for well-posed systems that arise from discretizing stochastic or time-dependent PDEs, we are mainly concerned with algorithms that solve the so-called nuclear norm regularized problem, where a suitable nuclear norm penalization on the solution is imposed alongside a fit-to-data term expressed in the 2-norm: this has the effect of implicitly enforcing low-rank solutions. By adopting an iteratively reweighted norm approach, the nuclear norm regularized problem is reformulated as a sequence of quadratic problems, which can then be efficiently solved using Krylov methods, giving rise to an inner-outer iteration scheme. Our approach differs from the other solvers available in the literature in that: (a) Kronecker product properties are exploited to define the reweighted 2-norm penalization terms; (b) efficient preconditioned Krylov methods replace gradient (projection) methods; (c) the regularization parameter can be efficiently and adaptively set along the iterations. Furthermore, we reformulate within the framework of flexible Krylov methods both the new inner-outer methods for nuclear norm regularization and some of the existing Krylov methods incorporating low-rank projections. This results in an even more computationally efficient (but heuristic) strategy, that does not rely on an inner-outer iteration scheme. Numerical experiments show that our new solvers are competitive with other state-of-the-art solvers for low-rank problems, and deliver reconstructions of increased quality with respect to other classical Krylov methods.
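A small dense sketch of the inner-outer, iteratively reweighted norm idea follows. It replaces the nuclear norm $\|X\|_*$ by the quadratic $\mathrm{tr}(X^T W X)$ with $W = (X X^T + \epsilon I)^{-1/2}$ at each outer step, and solves the inner quadratic problem as an augmented least-squares system via the Kronecker identity $\mathrm{vec}(W^{1/2} X) = (I \otimes W^{1/2})\,\mathrm{vec}(X)$. A plain dense least-squares solve stands in for the paper's preconditioned Krylov inner solver, and the parameters `lam` and `eps` are assumptions.

```python
# Iteratively reweighted norm sketch for min ||A vec(X) - b||^2 + lam*||X||_*.
import numpy as np
from scipy.linalg import sqrtm

def irn_nuclear(A, b, shape, lam=1e-1, eps=1e-2, n_outer=15):
    """A is assumed to act on the column-major vec(X). In practice eps
    would be decreased adaptively along the outer iterations."""
    p, q = shape
    X = np.zeros((p, q))
    for _ in range(n_outer):
        W = np.linalg.inv(sqrtm(X @ X.T + eps * np.eye(p))).real
        Wh = sqrtm(W).real
        # tr(X^T W X) = ||W^(1/2) X||_F^2 = ||kron(I_q, W^(1/2)) vec(X)||^2,
        # so the inner problem is one augmented least-squares solve.
        Aug = np.vstack([A, np.sqrt(lam) * np.kron(np.eye(q), Wh)])
        rhs = np.concatenate([b, np.zeros(p * q)])
        x = np.linalg.lstsq(Aug, rhs, rcond=None)[0]
        X = x.reshape(p, q, order='F')     # column-major vec convention
    return X
```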
We describe a simple, black-box compression format for tensors with a multiscale structure. By representing the tensor as a sum of compressed tensors defined on increasingly coarse grids, we capture low-rank structures on each grid-scale, and we show how this leads to an increase in compression for a fixed accuracy. We devise an alternating algorithm to represent a given tensor in the multiresolution format and prove local convergence guarantees. In two dimensions, we provide examples that show that this approach can beat the Eckart-Young theorem, and for dimensions higher than two, we achieve higher compression than the tensor-train format on six real-world datasets. We also provide results on the closedness and stability of the tensor format and discuss how to perform common linear algebra operations on the level of the compressed tensors.
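The two-dimensional case can be sketched in a few lines: represent a matrix as a sum of truncated SVDs fitted to residuals on increasingly coarse grids. The greedy level-by-level fit below stands in for the paper's alternating algorithm, and the pooling scheme, grid factors, and ranks are illustrative assumptions (the matrix dimensions are assumed divisible by the grid factors). Because the coarse-grid pieces capture structure cheaply, the summed format can, for some multiscale inputs, beat a single fine-grid truncation of the same total size, which is the sense in which the paper's examples beat Eckart-Young.

```python
# Multiscale low-rank compression of a matrix, two-dimensional toy version.
import numpy as np

def pool(X, f):
    """Average-pool X by a factor f in each direction."""
    p, q = X.shape
    return X.reshape(p // f, f, q // f, f).mean(axis=(1, 3))

def upsample(X, f):
    """Piecewise-constant upsampling by a factor f."""
    return np.kron(X, np.ones((f, f)))

def svd_truncate(X, r):
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def multiscale_compress(X, levels=((1, 2), (2, 2), (4, 2))):
    """Each level is a (grid factor, rank) pair; returns the compressed pieces."""
    pieces, residual = [], X.copy()
    for f, r in levels:
        piece = svd_truncate(pool(residual, f), r)   # low rank on the coarse grid
        pieces.append((f, piece))
        residual -= upsample(piece, f)               # pass the residual down
    return pieces

def reconstruct(pieces, shape):
    X = np.zeros(shape)
    for f, piece in pieces:
        X += upsample(piece, f)
    return X
```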