
An Information Theoretic Study for Noisy Compressed Sensing With Joint Sparsity Model-2

Published by: Sangjun Park
Publication date: 2016
Research field: Information engineering
Paper language: English





In this paper, we study a support set reconstruction problem in which the signals of interest are jointly sparse with a common support set and sampled according to joint sparsity model-2 (JSM-2) in the presence of noise. We develop upper and lower bounds on the failure probability of support set reconstruction in terms of the sparsity, the ambient dimension, the minimum signal-to-noise ratio, the number of measurement vectors, and the number of measurements. These bounds provide a guideline for choosing the system parameters in various applications of compressed sensing with noisy JSM-2. Based on the bounds, we develop necessary and sufficient conditions for reliable support set reconstruction. We interpret these conditions to give theoretical explanations of the benefits enabled by the joint sparsity structure in noisy JSM-2. We compare our sufficient condition with the existing result for the noisy multiple measurement vector (MMV) model. As a result, we show that noisy JSM-2 may require fewer measurements than noisy MMV for reliable support set reconstruction.
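To make the setup concrete, the following is a minimal numerical sketch of the noisy JSM-2 model with a simple matched-filter support estimator. The per-signal sensing matrices, parameter values, and the estimator itself are illustrative assumptions for this sketch, not the decoder analyzed in the paper.

```python
import numpy as np

# A minimal sketch of noisy JSM-2: J jointly sparse signals share a common
# support of size k; each is observed through its own Gaussian sensing matrix
# (a per-signal matrix is the usual way JSM-2 is distinguished from MMV, where
# a single matrix is shared). The decoder below is a simple matched-filter
# baseline, not the paper's information-theoretic decoder.

rng = np.random.default_rng(0)
n, m, k, J = 256, 64, 8, 4        # ambient dim, measurements, sparsity, number of signals
sigma = 0.1                        # noise standard deviation

support = rng.choice(n, size=k, replace=False)        # common support set
X = np.zeros((n, J))
X[support, :] = rng.normal(size=(k, J))               # distinct nonzero values per signal

A = [rng.normal(size=(m, n)) / np.sqrt(m) for _ in range(J)]   # per-signal sensing matrices
Y = [A[j] @ X[:, j] + sigma * rng.normal(size=m) for j in range(J)]

# Accumulate back-projected energy across all measurement vectors and keep the
# k largest entries as the support estimate.
scores = sum((A[j].T @ Y[j]) ** 2 for j in range(J))
support_hat = np.argsort(scores)[-k:]

print("exact support recovery:", set(support_hat) == set(support))
```

Increasing J in this sketch illustrates the benefit of the joint sparsity structure: the accumulated scores concentrate on the common support as more measurement vectors are combined.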




Read also

A communication setup is considered where a transmitter wishes to convey a message to a receiver and simultaneously estimate the state of that receiver through a common waveform. The state is estimated at the transmitter by means of generalized feedback, i.e., a strictly causal channel output, and the known waveform. The scenario at hand is motivated by joint radar and communication, which aims to co-design radar sensing and communication over shared spectrum and hardware. For the case of memoryless single-receiver channels with i.i.d. time-varying state sequences, we fully characterize the capacity-distortion tradeoff, defined as the largest achievable rate below which a message can be conveyed reliably while satisfying some distortion constraints on state sensing. We propose a numerical method to compute the optimal input that achieves the capacity-distortion tradeoff. Then, we address memoryless state-dependent broadcast channels (BCs). For physically degraded BCs with i.i.d. time-varying state sequences, we characterize the capacity-distortion tradeoff region as a rather straightforward extension of single-receiver channels. For general BCs, we provide inner and outer bounds on the capacity-distortion region, as well as a sufficient condition under which this capacity-distortion region is equal to the product of the capacity region and the set of achievable distortions. A number of illustrative examples demonstrate that the optimal co-design schemes outperform conventional schemes that split the resources between sensing and communication.
In this paper, based on a successively accuracy-increasing approximation of the $\ell_0$ norm, we propose a new algorithm for recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the $\ell_0$ norm can be controlled. We prove that the series of approximations asymptotically coincides with the $\ell_1$ and $\ell_0$ norms when the approximation accuracy changes from the worst fitting to the best fitting. When measurements are noise-free, an optimization scheme is proposed which leads to a number of weighted $\ell_1$ minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided, and, for a particular function in the class of approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms.
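The abstract does not spell out the paper's thresholding operators, so the sketch below shows only the familiar $\ell_1$ end of the family it describes: plain iterative soft thresholding (ISTA) for a noisy sparse recovery problem. The problem sizes and the regularization weight are illustrative assumptions.

```python
import numpy as np

# A minimal ISTA sketch for min 0.5*||y - Ax||^2 + lam*||x||_1.
# The paper's methods use thresholding operators derived from concave
# approximations of the l0 norm; soft thresholding below is only the
# standard l1 baseline, shown for orientation.

def ista(A, y, lam=0.05, n_iter=500):
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)                    # gradient of the quadratic data term
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding step
    return x

rng = np.random.default_rng(1)
n, m, k = 200, 80, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true + 0.01 * rng.normal(size=m)

x_hat = ista(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```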
Biao Sun, Hui Feng, Xinxin Xu (2016)
We consider the problem of sparse signal recovery from 1-bit measurements. Due to the noise present in the acquisition and transmission process, some quantized bits may be flipped to their opposite states. These sign flips may result in severe performance degradation. In this study, a novel algorithm, termed HISTORY, is proposed. It consists of Hamming support detection and coefficient recovery. The HISTORY algorithm has high recovery accuracy and is robust to strong measurement noise. Numerical results are provided to demonstrate the effectiveness and superiority of the proposed algorithm.
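The abstract does not detail HISTORY's Hamming support detection, so the following sketch only sets up the 1-bit measurement model with random sign flips and applies a simple correlation-based support estimator as a baseline; all parameter choices and the estimator are assumptions for illustration.

```python
import numpy as np

# Sketch of the 1-bit compressed sensing model with sign flips: measurements
# are the signs of A @ x, and a fraction of them is flipped during noisy
# acquisition/transmission. The correlation estimator below is a baseline,
# not the HISTORY algorithm itself.

rng = np.random.default_rng(2)
n, m, k = 128, 400, 5
flip_prob = 0.1                                   # assumed fraction of flipped bits

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n))
b = np.sign(A @ x)                                # ideal 1-bit measurements
flips = rng.random(m) < flip_prob
b[flips] *= -1                                    # sign flips

scores = np.abs(A.T @ b)                          # correlation-based support scores
support_hat = np.argsort(scores)[-k:]
print("support recovered:", set(support_hat) == set(np.flatnonzero(x)))
```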
Xu Zhang, Wei Cui (2017)
Compressed sensing (CS) with prior information concerns the problem of reconstructing a sparse signal with the aid of a similar signal which is known beforehand. We consider a new approach to integrate the prior information into CS via maximizing the correlation between the prior knowledge and the desired signal. We then present a geometric analysis for the proposed method under sub-Gaussian measurements. Our results reveal that if the prior information is good enough, then the proposed approach can improve the performance of the standard CS. Simulations are provided to verify our results.
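The correlation-maximizing idea can be illustrated as a small convex program: penalize the $\ell_1$ norm while rewarding correlation with the known prior signal. The exact objective, the weight lam, the noise level eps, and the use of the cvxpy solver below are assumptions for this sketch, not the authors' precise formulation.

```python
import numpy as np
import cvxpy as cp

# Sketch: standard noisy CS constraint plus a linear term that rewards
# correlation between the estimate and a prior signal phi (a similar,
# known-beforehand version of the target). Illustrative only.

rng = np.random.default_rng(3)
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
phi = x_true + 0.1 * rng.normal(size=n)           # assumed "good" prior information

A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.normal(size=m)

lam, eps = 0.5, 0.05                              # assumed correlation weight and noise bound
x = cp.Variable(n)
objective = cp.Minimize(cp.norm1(x) - lam * phi @ x)
constraints = [cp.norm2(A @ x - y) <= eps]
cp.Problem(objective, constraints).solve()

print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```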
A basic information theoretic model for summarization is formulated. Here summarization is considered as the process of taking a report of $v$ binary objects and producing from it a $j$-element subset that captures most of the important features of the original report, with importance being defined via an arbitrary set function endemic to the model. The loss of information is then measured by a weighted average of variational distances, which we term the semantic loss. Our results include both cases where the probability distribution generating the $v$-length reports is known and unknown. In the case where it is known, our results demonstrate how to construct summarizers which minimize the semantic loss. For the case where the probability distribution is unknown, we show how to construct summarizers whose semantic loss, when averaged uniformly over all possible distributions, converges to the minimum.