The fundamental problem considered in this paper is: what is the \textit{energy} consumed in implementing a \emph{compressive sensing} decoding algorithm on a circuit? Using the information-friction framework, we examine the smallest amount of \textit{bit-meters} as a measure of the energy consumed by a circuit. We derive a fundamental lower bound for the implementation of compressive sensing decoding algorithms on a circuit. In the setting where the number of measurements scales linearly with the sparsity and the sparsity is sub-linear in the length of the signal, we show that the \textit{bit-meters} consumption of these algorithms is order-tight, i.e., it matches the lower bound asymptotically up to a constant factor. Our implementations yield interesting insights into the design of energy-efficient circuits that are not captured by the notion of computational efficiency alone.
This paper considers solving the unconstrained $\ell_q$-norm ($0\leq q<1$) regularized least squares ($\ell_q$-LS) problem for recovering sparse signals in compressive sensing. We propose two highly efficient first-order algorithms by incorporating the proximity operator of nonconvex $\ell_q$-norm functions into the fast iterative shrinkage/thresholding algorithm (FISTA) and the alternating direction method of multipliers (ADMM) frameworks, respectively. Furthermore, in solving the nonconvex $\ell_q$-LS problem, a sequential minimization strategy is adopted in the new algorithms to attain better global convergence performance. Unlike most existing $\ell_q$-minimization algorithms, the new algorithms solve the $\ell_q$-minimization problem without smoothing (approximating) the $\ell_q$-norm. Meanwhile, the new algorithms scale well to large-scale problems, as often encountered in image processing. We show that the proposed algorithms are the fastest methods for the nonconvex $\ell_q$-minimization problem, while offering competitive performance in recovering sparse signals and compressible images compared with several state-of-the-art algorithms.
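As a rough illustration of the first of these ideas, the sketch below (a minimal Python/NumPy example, not the authors' implementation) plugs the proximity operator of the nonconvex $\ell_q$ penalty directly into a FISTA-style iteration for $\min_x \tfrac{1}{2}\|Ax-b\|_2^2 + \lambda\|x\|_q^q$. The scalar prox is evaluated numerically per coordinate for simplicity, whereas an efficient closed-form or tailored evaluation would be used in practice; the names `fista_lq` and `prox_lq` are illustrative.

```python
# Minimal sketch of a FISTA-style iteration for the ell_q-LS problem
#   min_x 0.5*||A x - b||_2^2 + lam*||x||_q^q,
# applying the prox of the nonconvex |.|^q term directly (no smoothing).
import numpy as np
from scipy.optimize import minimize_scalar

def prox_lq(v, lam, q):
    """Coordinate-wise prox of lam*|x|^q: argmin_x 0.5*(x - v)^2 + lam*|x|^q."""
    out = np.zeros_like(v)
    for i, vi in enumerate(v):
        if vi == 0.0:
            continue
        f = lambda x: 0.5 * (x - vi) ** 2 + lam * abs(x) ** q
        # The minimizer is either 0 or lies between 0 and vi; compare both.
        res = minimize_scalar(f, bounds=(min(0.0, vi), max(0.0, vi)), method="bounded")
        out[i] = res.x if f(res.x) < f(0.0) else 0.0
    return out

def fista_lq(A, b, lam, q=0.5, n_iter=200):
    """FISTA-type proximal-gradient iteration with an ell_q prox step."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1]); y = x.copy(); t = 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = prox_lq(y - grad / L, lam / L, q)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t ** 2)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```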
Distributed Compressive Sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, the existing DCS framework was proposed for a very limited class of signal ensembles that share a single type of common information \cite{Baron:2009vd}. In this paper, we propose a generalized DCS (GDCS) which can improve sparse signal detection performance for arbitrary types of common information, classified into not only full common information but also a variety of partial common information. A theoretical bound on the required number of measurements for GDCS is obtained. Unfortunately, GDCS may require substantial a priori knowledge of the common information shared across the signal ensemble in order to outperform the existing DCS. To address this problem, we propose a novel algorithm that searches for the correlation structure among the signals, with which the proposed GDCS improves detection performance even without a priori knowledge of the correlation structure for arbitrarily correlated multi-signal ensembles.
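To make the common/innovation structure concrete, the following is a minimal sketch of a two-signal example under the single-common-information model of \cite{Baron:2009vd}, not the GDCS algorithm itself: each signal is a shared sparse component plus its own sparse innovation, and the per-sensor measurements are stacked into one linear system so the common part is recovered jointly. The dimensions and the use of orthogonal matching pursuit are illustrative assumptions.

```python
# Joint-sparsity (common + innovation) model for a two-signal ensemble.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

n, m, J = 256, 64, 2                       # signal length, measurements per sensor, sensors
rng = np.random.default_rng(0)
Phi = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(J)]

# Ground truth: shared sparse component plus small per-signal innovations.
z_c = np.zeros(n); z_c[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
z = [np.zeros(n) for _ in range(J)]
for j in range(J):
    z[j][rng.choice(n, 2, replace=False)] = rng.standard_normal(2)
y = [Phi[j] @ (z_c + z[j]) for j in range(J)]

# Stacked system: unknown is [z_c; z_1; z_2]; each sensor observes z_c plus its own z_j.
A = np.block([[Phi[0], Phi[0], np.zeros((m, n))],
              [Phi[1], np.zeros((m, n)), Phi[1]]])
b = np.concatenate(y)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=9, fit_intercept=False)
omp.fit(A, b)
z_c_hat = omp.coef_[:n]                    # estimate of the common component
```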
Compressive sensing has shown significant promise in biomedical fields. It reconstructs a signal from sub-Nyquist random linear measurements. Classical methods exploit sparsity in only one domain, yet many biomedical signals have additional structure, such as sparsity in multiple domains, piecewise smoothness, and low rank. We propose a framework to exploit all the available structure information. A new convex programming problem is formulated with multiple convex structure-inducing constraints together with the linear measurement-fitting constraint. With this additional a priori information for solving the underdetermined system, the signal recovery performance can be improved. In numerical experiments, we compare the proposed method with classical methods on both simulated data and real-life biomedical data. Results show that the proposed method achieves better reconstruction accuracy in terms of both L1 and L2 errors.
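A minimal sketch of this kind of multi-constraint formulation is given below, assuming a CVXPY model that combines a DCT-domain sparsity term and a total-variation (piecewise smoothness) term with the measurement-fitting constraint; the specific priors, weights, and tolerance are illustrative rather than the paper's exact program.

```python
# Convex recovery with multiple structure-inducing terms and a data-fit constraint.
import cvxpy as cp
import numpy as np
from scipy.fft import dct

n, m = 128, 48
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n)) / np.sqrt(m)       # random sensing matrix
jumps = rng.standard_normal(n) * (rng.random(n) < 0.05)
x_true = np.cumsum(jumps)                          # piecewise-constant test signal
y = A @ x_true + 0.01 * rng.standard_normal(m)     # sub-Nyquist noisy measurements

D = dct(np.eye(n), norm="ortho")                   # sparsifying transform (DCT)
x = cp.Variable(n)
objective = cp.Minimize(cp.norm1(D @ x) + 0.5 * cp.tv(x))   # multiple structure priors
constraints = [cp.norm(A @ x - y, 2) <= 0.1]       # linear measurement-fitting constraint
cp.Problem(objective, constraints).solve()
x_hat = x.value
```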
A range of efficient wireless processes and enabling techniques are put under a magnifying glass in the quest for exploring different manifestations of correlated processes, where sub-Nyquist sampling may be invoked as an explicit benefit of having a
In most compressive sensing problems, the $\ell_1$ norm is used during the signal reconstruction process. In this article, the use of an entropy functional is proposed to approximate the $\ell_1$ norm. A modified version of the entropy functional is continuous, differentiable, and convex. Therefore, it is possible to construct globally convergent iterative algorithms using Bregman's row-action D-projection method for compressive sensing applications. Simulation examples are presented.
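As a rough illustration of the overall structure (not the article's algorithm), the sketch below replaces the $\ell_1$ norm with a smooth convex surrogate and enforces the measurement equations by cyclic row-action projections; a log-cosh smoothing stands in for the article's modified entropy functional, and the function names and step sizes are illustrative.

```python
# Smooth convex surrogate of the l1 norm plus cyclic row-action (Kaczmarz-type)
# projections onto the measurement hyperplanes a_i^T x = y_i.
import numpy as np

def smooth_l1(x, eps=1e-2):
    """Smooth convex surrogate of ||x||_1 (log-cosh; illustrative choice)."""
    return eps * np.sum(np.log(np.cosh(x / eps)))

def smooth_l1_grad(x, eps=1e-2):
    return np.tanh(x / eps)                      # gradient of eps*log(cosh(x/eps))

def row_action_recover(A, y, n_sweeps=200, step=1e-3, eps=1e-2):
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.sum(A * A, axis=1)
    for _ in range(n_sweeps):
        x -= step * smooth_l1_grad(x, eps)       # descend on the smooth surrogate
        for i in range(m):                       # one row-action sweep over all constraints
            x += (y[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```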