
Optimal incorporation of sparsity information by weighted $\ell_1$ optimization

Posted by: Jack Raymond
Publication date: 2010
Research field: Informatics Engineering
Paper language: English





Compressed sensing of sparse sources can be improved by incorporating prior knowledge of the source. In this paper we demonstrate a method for the optimal selection of weights in weighted $\ell_1$-norm minimization for a noiseless reconstruction model, and show the improvements in compression that can be achieved.
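The program in question is compact enough to state directly: given measurements $y = Ax$ and per-coefficient weights $w$ reflecting the prior, solve $\min_x \|\mathrm{diag}(w)\,x\|_1$ subject to $Ax = y$. A minimal sketch in Python with cvxpy follows; the weight rule (down-weighting coordinates believed likely to be active) and all problem sizes are illustrative assumptions, not the paper's optimal selection.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m, k = 200, 80, 10           # ambient dim, measurements, sparsity

# Sparse source with a support prior: the first 30 coordinates are
# believed more likely to be nonzero (illustrative assumption).
x_true = np.zeros(n)
support = rng.choice(30, size=k, replace=False)
x_true[support] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true                   # noiseless measurements

# Weights: small on the likely support, large elsewhere (a heuristic
# standing in for the paper's optimally chosen weights).
w = np.ones(n)
w[:30] = 0.3

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, x))),
                  [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

Down-weighting the likely support makes nonzeros there cheaper for the objective, which is the basic mechanism an optimal weight selection exploits.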


Read also

Matrix sensing is the problem of reconstructing a low-rank matrix from a few linear measurements. In many applications, such as collaborative filtering, the famous Netflix prize problem, and seismic data interpolation, there exists some prior information about the column and row spaces of the ground-truth low-rank matrix. In this paper, we exploit this prior information by proposing a weighted optimization problem whose objective function promotes both rank and prior subspace information. Using recent results in conic integral geometry, we obtain the unique optimal weights that minimize the required number of measurements. As simulation results confirm, the proposed convex program with optimal weights requires substantially fewer measurements than regular nuclear norm minimization.
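As a rough illustration of such a weighted program: given prior orthonormal bases $U$ and $V$ for the column and row spaces, one common construction from the prior-subspace literature scales the nuclear-norm penalty inside the prior subspaces. The sketch below uses that construction with a placeholder weight $w$; the paper's contribution is the derivation of the optimal weights, which this sketch does not reproduce.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, r, m = 30, 2, 350            # matrix size, rank, number of measurements

# Ground-truth low-rank matrix whose column/row spaces we pretend to
# know in advance (illustrative setup).
U = np.linalg.qr(rng.standard_normal((n, r)))[0]
V = np.linalg.qr(rng.standard_normal((n, r)))[0]
X_true = U @ rng.standard_normal((r, r)) @ V.T

# Random Gaussian linear measurements y_i = <A_i, X>.
As = rng.standard_normal((m, n, n))
y = np.array([np.sum(A * X_true) for A in As])

# Weighted projections: weight w < 1 inside the prior subspaces
# (w is a placeholder for the paper's optimal weights).
w = 0.4
QU = w * (U @ U.T) + (np.eye(n) - U @ U.T)
QV = w * (V @ V.T) + (np.eye(n) - V @ V.T)

X = cp.Variable((n, n))
constraints = [cp.sum(cp.multiply(As[i], X)) == y[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(cp.normNuc(QU @ X @ QV)), constraints)
prob.solve()
print("relative error:",
      np.linalg.norm(X.value - X_true) / np.linalg.norm(X_true))
```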
Wei Heng Bay, Eric Price, 2020
In this paper, we consider the problem of noiseless non-adaptive probabilistic group testing, in which the goal is high-probability recovery of the defective set. We show that in the case of $n$ items among which $k$ are defective, the smallest possible number of tests equals $\min\{ C_{k,n} k \log n,\, n \}$ up to lower-order asymptotic terms, where $C_{k,n}$ is a uniformly bounded constant (varying depending on the scaling of $k$ with respect to $n$) with a simple explicit expression. The algorithmic upper bound follows from a minor adaptation of an existing analysis of the Definite Defectives (DD) algorithm, and the algorithm-independent lower bound builds on existing works for the regimes $k \le n^{1-\Omega(1)}$ and $k = \Theta(n)$. In sufficiently sparse regimes (including $k = o\big( \frac{n}{\log n} \big)$), our main result generalizes that of Coja-Oghlan \emph{et al.} (2020) by avoiding the assumption $k \le n^{1-\Omega(1)}$, whereas in sufficiently dense regimes (including $k = \omega\big( \frac{n}{\log n} \big)$), our main result shows that individual testing is asymptotically optimal for any non-zero target success probability, thus strengthening an existing result of Aldridge (2019) in terms of both the error probability and the assumed scaling of $k$.
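The Definite Defectives (DD) algorithm mentioned above takes only a few lines: every item appearing in a negative test is ruled out, and any remaining item that is the sole candidate in some positive test is declared defective. A minimal sketch, with a Bernoulli random test design and toy problem sizes chosen purely for illustration:

```python
import numpy as np

def dd_decode(tests, outcomes, n):
    """Definite Defectives: tests is a boolean (T, n) membership matrix,
    outcomes is a boolean length-T vector of test results."""
    # Step 1: anything in a negative test is definitely non-defective.
    possibly_defective = np.ones(n, dtype=bool)
    for t in range(len(outcomes)):
        if not outcomes[t]:
            possibly_defective &= ~tests[t]
    # Step 2: an item that is the sole possible defective in a positive
    # test must be defective.
    defective = np.zeros(n, dtype=bool)
    for t in range(len(outcomes)):
        if outcomes[t]:
            candidates = np.flatnonzero(tests[t] & possibly_defective)
            if len(candidates) == 1:
                defective[candidates[0]] = True
    return defective

# Toy run: n items, k defectives, T Bernoulli tests (illustrative sizes).
rng = np.random.default_rng(2)
n, k, T = 500, 10, 200
truth = np.zeros(n, dtype=bool)
truth[rng.choice(n, k, replace=False)] = True
tests = rng.random((T, n)) < np.log(2) / k   # ~ln(2)/k inclusion prob.
outcomes = (tests & truth).any(axis=1)
print("exact recovery:", np.array_equal(dd_decode(tests, outcomes, n), truth))
```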
Sparse Principal Component Analysis (PCA) is a dimensionality reduction technique wherein one seeks a low-rank representation of a data matrix with additional sparsity constraints on the obtained representation. We consider two probabilistic formulations of sparse PCA: a spiked Wigner and spiked Wishart (or spiked covariance) model. We analyze an Approximate Message Passing (AMP) algorithm to estimate the underlying signal and show, in the high dimensional limit, that the AMP estimates are information-theoretically optimal. As an immediate corollary, our results demonstrate that the posterior expectation of the underlying signal, which is often intractable to compute, can be obtained using a polynomial-time scheme. Our results also effectively provide a single-letter characterization of the sparse PCA problem.
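For intuition, a schematic AMP iteration for the spiked Wigner model $Y = \frac{\lambda}{n} x x^\top + W$ is sketched below: iterate $x^{t+1} = Y f(x^t) - b_t f(x^{t-1})$ with Onsager coefficient $b_t = \frac{1}{n}\sum_i f'(x^t_i)$. The soft-threshold denoiser and fixed threshold here are simplifying stand-ins for the posterior-mean denoiser analyzed in the paper, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam, eps = 2000, 3.0, 0.1     # dimension, SNR, sparsity level

# Spiked Wigner model: sparse signal normalized so that ||x||^2 ~ n,
# symmetric Gaussian noise with entries of variance 1/n.
x_true = np.where(rng.random(n) < eps,
                  rng.standard_normal(n) / np.sqrt(eps), 0.0)
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2 * n)
Y = (lam / n) * np.outer(x_true, x_true) + W

def soft(u, tau):
    """Soft threshold: a simple stand-in for the posterior-mean denoiser."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

tau = 0.5                        # threshold (tuning assumption)
x, x_prev = rng.standard_normal(n), np.zeros(n)
for _ in range(15):
    f = soft(x, tau)
    b = np.mean(np.abs(x) > tau)             # (1/n) sum f'(x_i): Onsager term
    x, x_prev = Y @ f - b * soft(x_prev, tau), x

# Overlap with the planted signal (scale- and sign-invariant).
x_hat = soft(x, tau)
overlap = abs(x_hat @ x_true) / (np.linalg.norm(x_hat)
                                 * np.linalg.norm(x_true) + 1e-12)
print("normalized overlap:", overlap)
```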
In this paper, we study a support set reconstruction problem in which the signals of interest are jointly sparse with a common support set, and sampled by joint sparsity model-2 (JSM-2) in the presence of noise. We develop upper and lower bounds on the failure probability of support set reconstruction in terms of the sparsity, the ambient dimension, the minimum signal-to-noise ratio, the number of measurement vectors, and the number of measurements. These bounds can be used as a guideline to determine the system parameters in various applications of compressed sensing with noisy JSM-2. Based on the bounds, we develop necessary and sufficient conditions for reliable support set reconstruction. We interpret these conditions to give theoretical explanations of the benefits enabled by the joint sparsity structure in noisy JSM-2. We compare our sufficient condition with the existing result for the noisy multiple measurement vectors (MMV) model. As a result, we show that noisy JSM-2 may require fewer measurements than noisy MMV for reliable support set reconstruction.
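Such failure-probability bounds can be probed empirically. The Monte Carlo sketch below draws JSM-2 data (a common support shared across $L$ measurement vectors), recovers the support with a simple one-shot correlation decoder, and estimates the failure rate; the decoder and all parameters are illustrative stand-ins, not the estimator analyzed in the paper.

```python
import numpy as np

def trial(n=128, m=40, k=5, L=8, snr_db=10, rng=None):
    """One JSM-2 trial: L signals share a common k-sparse support."""
    rng = rng or np.random.default_rng()
    support = rng.choice(n, k, replace=False)
    X = np.zeros((n, L))
    X[support] = rng.standard_normal((k, L))
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    # Noise level set from the target per-entry SNR (illustrative).
    sigma = np.linalg.norm(A @ X) / np.sqrt(m * L) * 10 ** (-snr_db / 20)
    Y = A @ X + sigma * rng.standard_normal((m, L))
    # Rank atoms by correlation energy summed over measurement vectors.
    scores = np.linalg.norm(A.T @ Y, axis=1)
    est = np.argsort(scores)[-k:]
    return set(est) == set(support)

rng = np.random.default_rng(4)
trials = 200
fails = sum(not trial(rng=rng) for _ in range(trials))
print("estimated failure probability:", fails / trials)
```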
Channel matrix sparsification is considered a promising approach to reducing the processing complexity in large-scale cloud radio access networks (C-RANs), but prior work rests on an ideal channel condition assumption. In this paper, the study of channel sparsification is extended to practical scenarios in which perfect channel state information (CSI) is not available. First, a tractable lower bound on the signal-to-interference-plus-noise ratio (SINR) fidelity, defined as the ratio of SINRs with and without channel sparsification, is derived to evaluate the impact of channel estimation error. Based on the theoretical results, a Dinkelbach-based algorithm is proposed to achieve the globally optimal channel matrix sparsification under a distance-based criterion. These results are then extended to the more challenging scenario with pilot contamination. Finally, simulation results evaluate the performance of channel matrix sparsification with imperfect CSI and verify our analytical results.
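Dinkelbach's method, which the proposed algorithm builds on, solves a fractional program $\max_x f(x)/g(x)$ (with $g > 0$) by repeatedly maximizing $f(x) - \lambda g(x)$ and updating $\lambda$ to the achieved ratio, until $f(x) - \lambda g(x) \approx 0$. A generic sketch with a toy scalar objective follows; the paper applies this template to the SINR-fidelity sparsification problem, which is not reproduced here.

```python
import numpy as np

def dinkelbach(f, g, solve_inner, lam0=0.0, tol=1e-8, max_iter=50):
    """Maximize f(x)/g(x) (g > 0) via Dinkelbach's parametric method.
    solve_inner(lam) must return argmax_x of f(x) - lam * g(x)."""
    lam = lam0
    for _ in range(max_iter):
        x = solve_inner(lam)
        F = f(x) - lam * g(x)
        if abs(F) < tol:                  # optimality: F(lam*) = 0
            break
        lam = f(x) / g(x)                 # ratio update, monotone in lam
    return x, lam

# Toy fractional program: maximize (-x^2 + 4x) / (x + 1) over x in [0, 4],
# solving the inner problem by grid search (illustrative only).
xs = np.linspace(0.0, 4.0, 100001)
f = lambda x: -x**2 + 4*x
g = lambda x: x + 1.0
solve_inner = lambda lam: xs[np.argmax(f(xs) - lam * g(xs))]
x_opt, ratio = dinkelbach(f, g, solve_inner)
print("x* =", x_opt, "optimal ratio =", ratio)
```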
