
Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding

Posted by: Vardan Papyan
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





The celebrated sparse representation model has led to remarkable results in various signal processing tasks in the last decade. However, despite its initial purpose of serving as a global prior for entire signals, it has been commonly used for modeling low-dimensional patches due to the computational constraints it entails when deployed with learned dictionaries. A way around this problem has been recently proposed, adopting a convolutional sparse representation model. This approach assumes that the global dictionary is a concatenation of banded circulant matrices. While several works have presented algorithmic solutions to the global pursuit problem under this new model, very few truly effective guarantees are known for the success of such methods. In this work, we address the theoretical aspects of the convolutional sparse model, providing the first meaningful answers to questions of uniqueness of solutions and success of pursuit algorithms, both greedy and convex relaxations, in ideal and noisy regimes. To this end, we generalize mathematical quantities, such as the $\ell_0$ norm, mutual coherence, Spark and RIP, to their counterparts in the convolutional setting, intrinsically capturing local measures of the global model. On the algorithmic side, we demonstrate how to solve the global pursuit problem by using simple local processing, thus offering a first-of-its-kind bridge between global modeling of signals and their patch-based local treatment.
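To make the convolutional structure concrete, here is a minimal, illustrative sketch (toy sizes and names chosen for this note, not the authors' code): it assembles the global dictionary as a concatenation of banded circulant matrices generated by a small local dictionary, computes its mutual coherence, and runs a plain orthogonal matching pursuit as a simple stand-in for the global pursuit algorithms discussed above.

```python
# Illustrative sketch (not the authors' code): build the global convolutional
# dictionary as a concatenation of banded circulant matrices generated by a
# small local dictionary D_local, then run a plain OMP as a stand-in for the
# global pursuit algorithms analyzed in the paper. All sizes are toy choices.
import numpy as np

def global_conv_dictionary(D_local, N):
    """Stack all N circular shifts of each local atom into an N x (N*m) matrix."""
    n, m = D_local.shape                      # local atom length n, m local atoms
    atoms = []
    for j in range(m):
        atom = np.zeros(N)
        atom[:n] = D_local[:, j]              # banded support of length n
        for shift in range(N):
            atoms.append(np.roll(atom, shift))
    D = np.array(atoms).T                     # global dictionary, N x (N*m)
    return D / np.linalg.norm(D, axis=0)      # normalize columns

def mutual_coherence(D):
    """Largest absolute inner product between two distinct (normalized) atoms."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def omp(D, y, k):
    """Greedy orthogonal matching pursuit selecting k atoms."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D_local = rng.standard_normal((8, 4))         # toy local dictionary: n=8, m=4
D = global_conv_dictionary(D_local, N=64)
print("global mutual coherence:", mutual_coherence(D))

gamma = np.zeros(D.shape[1])                  # a sparse global code
gamma[rng.choice(D.shape[1], 5, replace=False)] = rng.standard_normal(5)
y = D @ gamma
print("recovery error:", np.linalg.norm(omp(D, y, 5) - gamma))
```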




Read also

The celebrated sparse representation model has led to remarkable results in various signal processing tasks in the last decade. However, despite its initial purpose of serving as a global prior for entire signals, it has been commonly used for modeling low-dimensional patches due to the computational constraints it entails when deployed with learned dictionaries. A way around this problem has been proposed recently, adopting a convolutional sparse representation model. This approach assumes that the global dictionary is a concatenation of banded circulant matrices. Although several works have presented algorithmic solutions to the global pursuit problem under this new model, very few truly effective guarantees are known for the success of such methods. In the first of this two-part work, we address the theoretical aspects of the sparse convolutional model, providing the first meaningful answers to corresponding questions of uniqueness of solutions and success of pursuit algorithms. To this end, we generalize mathematical quantities, such as the $\ell_0$ norm, the mutual coherence and the Spark, to their counterparts in the convolutional setting, which intrinsically capture local measures of the global model. In a companion paper, we extend the analysis to a noisy regime, addressing the stability of the sparsest solutions and pursuit algorithms, and demonstrate practical approaches for solving the global pursuit problem via simple local processing.
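For context, the convolutional generalization of the $\ell_0$ norm mentioned here is usually a stripe-based measure; the display below is a paraphrased sketch of that definition and of the coherence-type condition it leads to, written from the general literature rather than quoted from the paper (here $\gamma_i$ denotes the $i$-th stripe of consecutive coefficients of the global code $\Gamma$, and $\mu(\mathbf{D})$ the mutual coherence of the global dictionary).

```latex
% Paraphrased sketch of the stripe-sparsity measure and the resulting
% coherence-type guarantee (not a verbatim statement of the paper's theorems).
\[
  \|\Gamma\|_{0,\infty} \;=\; \max_i \, \|\gamma_i\|_0 ,
  \qquad
  \|\Gamma\|_{0,\infty} \;<\; \frac{1}{2}\left(1 + \frac{1}{\mu(\mathbf{D})}\right)
  \;\;\Longrightarrow\;\;
  \text{uniqueness of the sparsest solution and success of the pursuit.}
\]
```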
The convolutional sparse model has recently gained increasing attention in the signal and image processing communities, and several methods have been proposed for solving the pursuit problem emerging from it -- in particular its convex relaxation, Basis Pursuit. In the first of this two-part work, we have provided a theoretical backbone for this model, providing guarantees for the uniqueness of the sparsest solution and for the success of pursuit algorithms by introducing the notion of stripe sparsity and other related measures. Herein, we extend the analysis to a noisy regime, thereby considering signal perturbations and model deviations. We address questions of stability of the sparsest solutions and the success of pursuit algorithms, both greedy and convex. Classical definitions such as the RIP are generalized to the convolutional model, and existing notions such as the ERC are connected to our setting. On the algorithmic side, we demonstrate how to solve the global pursuit problem by using simple local processing, thus offering a first-of-its-kind bridge between global modeling of signals and their patch-based local treatment.
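As a minimal illustration of the "global modeling via local processing" viewpoint (toy sizes and an assumed interleaved ordering of the coefficients, not the authors' implementation), the sketch below checks numerically that multiplying the global convolutional dictionary by a sparse code is the same as summing small local contributions placed at their shifts.

```python
# Sketch with assumed toy sizes: the global synthesis D @ Gamma equals the sum
# of local contributions D_local @ alpha_i placed (circularly) at shift i,
# where alpha_i holds the m coefficients of Gamma associated with shift i.
import numpy as np

def dense_global_dict(D_local, N):
    """Interleaved ordering: column i*m + j is local atom j circularly shifted by i."""
    n, m = D_local.shape
    D = np.zeros((N, N * m))
    for i in range(N):
        for j in range(m):
            D[(np.arange(n) + i) % N, i * m + j] = D_local[:, j]
    return D

def synthesize_locally(D_local, Gamma, N):
    """Sum of local contributions D_local @ alpha_i, placed at shift i."""
    n, m = D_local.shape
    x = np.zeros(N)
    for i in range(N):
        alpha_i = Gamma[i * m:(i + 1) * m]
        x[(np.arange(n) + i) % N] += D_local @ alpha_i
    return x

rng = np.random.default_rng(1)
n, m, N = 8, 4, 64
D_local = rng.standard_normal((n, m))
Gamma = np.zeros(N * m)
Gamma[rng.choice(N * m, 6, replace=False)] = rng.standard_normal(6)
x_global = dense_global_dict(D_local, N) @ Gamma
x_local = synthesize_locally(D_local, Gamma, N)
print("max discrepancy:", np.abs(x_global - x_local).max())   # ~0 up to float error
```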
The sparsity in levels model recently inspired a new generation of effective acquisition and reconstruction modalities for compressive imaging. Moreover, it naturally arises in various areas of signal processing such as parallel acquisition, radar, and the sparse corruptions problem. Reconstruction strategies for sparse in levels signals usually rely on a suitable convex optimization program. Notably, although iterative and greedy algorithms can outperform convex optimization in terms of computational efficiency and have been studied extensively in the case of standard sparsity, little is known about their generalizations to the sparse in levels setting. In this paper, we bridge this gap by showing new stable and robust uniform recovery guarantees for sparse in levels variants of the iterative hard thresholding and CoSaMP algorithms. Our theoretical analysis generalizes recovery guarantees currently available in the case of standard sparsity and compares favorably to sparse in levels guarantees for weighted $\ell^1$ minimization. In addition, we also propose and numerically test an extension of the orthogonal matching pursuit algorithm for sparse in levels signals.
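The following is a small sketch of the level-wise hard thresholding idea behind such algorithms; the toy dimensions, the Gaussian sensing matrix, the fixed step size, and the iteration count are assumptions made purely for illustration, not the paper's implementation.

```python
# Sketch (assumed toy setup): iterative hard thresholding adapted to the
# sparse-in-levels model, where the thresholding keeps the ks[l] largest
# entries within each level rather than the k largest entries globally.
import numpy as np

def threshold_in_levels(x, levels, ks):
    """Keep the ks[l] largest-magnitude entries inside each level (a slice of x)."""
    out = np.zeros_like(x)
    for (start, stop), k in zip(levels, ks):
        seg = x[start:stop]
        keep = np.argsort(np.abs(seg))[-k:]           # k largest within the level
        out[start:stop][keep] = seg[keep]
    return out

def iht_in_levels(A, y, levels, ks, iters=500):
    """Projected-gradient (IHT-style) iteration with step 1/||A||_2^2."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = threshold_in_levels(x + mu * A.T @ (y - A @ x), levels, ks)
    return x

rng = np.random.default_rng(2)
n, N = 120, 160
A = rng.standard_normal((n, N)) / np.sqrt(n)          # generic Gaussian sensing matrix
levels, ks = [(0, 80), (80, 160)], [3, 5]             # two levels, local sparsities
x_true = np.zeros(N)
for (start, stop), k in zip(levels, ks):
    x_true[rng.choice(np.arange(start, stop), k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = iht_in_levels(A, y, levels, ks)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```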
We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies $N$, and in the asymptotic limit as $N \rightarrow \infty$. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here, adaptive measurements are those for which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite $N$) and scaling of this error in the asymptotic limit. In the asymptotic limit, adaptive schemes have no advantage over the optimal fixed local scheme, and except for states with less than 2% mixture, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme. For finite $N$, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme, for any degree of mixture.
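For reference, the benchmark against which these local schemes are compared, the optimal collective measurement, achieves the Helstrom error probability; the sketch below evaluates it numerically for a pair of illustrative mixed qubit states (the specific states and parameters are assumptions for the example, not taken from the paper).

```python
# Sketch (assumed setup): the minimum error probability for discriminating N
# copies of two equally likely mixed qubit states under optimal collective
# measurements is the Helstrom bound
#     P_err = (1 - 0.5 * || rho1^(xN) - rho2^(xN) ||_1) / 2 .
import numpy as np

def qubit_state(theta, mix):
    """Mixed qubit state: (1 - mix) * |theta><theta| + mix * I/2."""
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    return (1 - mix) * np.outer(psi, psi) + mix * np.eye(2) / 2

def helstrom_error(rho1, rho2, copies):
    """Minimum error probability over all collective measurements on `copies` copies."""
    R1, R2 = rho1, rho2
    for _ in range(copies - 1):
        R1, R2 = np.kron(R1, rho1), np.kron(R2, rho2)
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(R1 - R2)))
    return 0.5 * (1 - 0.5 * trace_norm)

rho1 = qubit_state(0.0, mix=0.1)          # illustrative, slightly mixed states
rho2 = qubit_state(np.pi / 3, mix=0.1)    # nonorthogonal to rho1
for N in (1, 2, 4, 8):
    print("N =", N, " collective error probability =", helstrom_error(rho1, rho2, N))
```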
Compressed Sensing aims to capture attributes of a sparse signal using very few measurements. Candès and Tao showed that sparse reconstruction is possible if the sensing matrix acts as a near isometry on all $\boldsymbol{k}$-sparse signals. This property holds with overwhelming probability if the entries of the matrix are generated by an iid Gaussian or Bernoulli process. There has been significant recent interest in an alternative signal processing framework: exploiting deterministic sensing matrices that with overwhelming probability act as a near isometry on $\boldsymbol{k}$-sparse vectors with uniformly random support, a geometric condition that is called the Statistical Restricted Isometry Property or StRIP. This paper considers a family of deterministic sensing matrices satisfying the StRIP that are based on SRM codes (binary chirps) and a $\boldsymbol{k}$-sparse reconstruction algorithm with sublinear complexity. In the presence of stochastic noise in the data domain, this paper derives bounds on the $\boldsymbol{\ell_2}$ accuracy of approximation in terms of the $\boldsymbol{\ell_2}$ norm of the measurement noise and the accuracy of the best $\boldsymbol{k}$-sparse approximation, also measured in the $\boldsymbol{\ell_2}$ norm. This type of $\boldsymbol{\ell_2/\ell_2}$ bound is tighter than the standard $\boldsymbol{\ell_2/\ell_1}$ or $\boldsymbol{\ell_1/\ell_1}$ bounds.
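To illustrate the flavor of chirp-based deterministic sensing, the sketch below builds a generic complex chirp matrix (not necessarily the paper's binary-chirp/SRM construction) and uses plain OMP as a stand-in for the paper's sublinear reconstruction algorithm, measuring a sparse vector in noise and comparing the $\ell_2$ recovery error with the $\ell_2$ noise level.

```python
# Illustrative sketch only: a p x p^2 chirp sensing matrix with entries
# exp(2*pi*i*(r*t^2 + m*t)/p)/sqrt(p); columns indexed by (r, m), rows by t.
# The decoder below is generic OMP, not the sublinear algorithm of the paper.
import numpy as np

def chirp_matrix(p):
    t = np.arange(p)
    cols = []
    for r in range(p):
        for m in range(p):
            cols.append(np.exp(2j * np.pi * (r * t**2 + m * t) / p) / np.sqrt(p))
    return np.array(cols).T                       # shape (p, p*p), unit-norm columns

def omp(A, y, k):
    """Greedy OMP with complex inner products, selecting k columns."""
    support, residual = [], y.copy()
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1], dtype=complex)
    x[support] = coef
    return x

rng = np.random.default_rng(3)
p, k = 31, 4
A = chirp_matrix(p)
x = np.zeros(p * p, dtype=complex)
x[rng.choice(p * p, k, replace=False)] = rng.standard_normal(k)
noise = 0.05 * rng.standard_normal(p)
y = A @ x + noise
x_hat = omp(A, y, k)
print("l2 recovery error:", np.linalg.norm(x_hat - x))
print("l2 noise level   :", np.linalg.norm(noise))
```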