
Working Locally Thinking Globally - Part II: Stability and Algorithms for Convolutional Sparse Coding

Published by: Jeremias Sulam
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





The convolutional sparse model has recently gained increasing attention in the signal and image processing communities, and several methods have been proposed for solving the pursuit problem emerging from it -- in particular its convex relaxation, Basis Pursuit. In the first part of this two-part work, we provided a theoretical backbone for this model, deriving guarantees for the uniqueness of the sparsest solution and for the success of pursuit algorithms by introducing the notion of stripe sparsity and other related measures. Herein, we extend the analysis to a noisy regime, considering both signal perturbations and model deviations. We address questions of stability of the sparsest solutions and of the success of pursuit algorithms, both greedy and convex. Classical definitions such as the RIP are generalized to the convolutional model, and existing notions such as the ERC are connected to our setting. On the algorithmic side, we demonstrate how to solve the global pursuit problem using simple local processing, thus offering a first-of-its-kind bridge between global modeling of signals and their patch-based local treatment.
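To make the pursuit problem concrete: the paper's own contribution is a patch-based local algorithm, but the global objective it targets can be illustrated with a generic solver. Below is a minimal NumPy sketch of ISTA applied to the convolutional Basis Pursuit (denoising) objective, assuming circular convolutions and a hypothetical filter bank of m local atoms of length k; this is an illustrative sketch of the problem, not the authors' method.

import numpy as np

def soft_threshold(v, t):
    # proximal operator of the l1 norm: entry-wise soft-thresholding
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def conv_bpdn_ista(y, filters, lam, n_iter=500):
    # ISTA for  min_Gamma 0.5*||y - sum_i d_i (*) gamma_i||_2^2 + lam*||Gamma||_1,
    # where (*) is circular convolution of atom d_i with coefficient map gamma_i.
    # filters: (m, k) array of m local atoms; y: length-n signal.
    m, _ = filters.shape
    n = len(y)
    d_hat = np.fft.fft(filters, n=n, axis=1)   # atom spectra, zero-padded to length n
    y_hat = np.fft.fft(y)
    gamma_hat = np.zeros((m, n), dtype=complex)
    # exact step size: ||D||_2^2 = max_w sum_i |d_hat_i(w)|^2, since D D^T is DFT-diagonal
    L = np.max(np.sum(np.abs(d_hat) ** 2, axis=0))
    for _ in range(n_iter):
        residual_hat = (d_hat * gamma_hat).sum(axis=0) - y_hat
        grad_hat = np.conj(d_hat) * residual_hat   # adjoint: correlate residual with each atom
        gamma = np.real(np.fft.ifft(gamma_hat - grad_hat / L, axis=1))
        gamma = soft_threshold(gamma, lam / L)     # local, entry-wise shrinkage
        gamma_hat = np.fft.fft(gamma, axis=1)
    return gamma   # (m, n) sparse coefficient maps

Note that every step decomposes into local operations (convolutions and entry-wise shrinkage), which is the flavor of locality the paper develops rigorously.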


Read also

The celebrated sparse representation model has led to remarkable results in various signal processing tasks in the last decade. However, despite its initial purpose of serving as a global prior for entire signals, it has been commonly used for modeling low-dimensional patches due to the computational constraints it entails when deployed with learned dictionaries. A way around this problem has been proposed recently, adopting a convolutional sparse representation model. This approach assumes that the global dictionary is a concatenation of banded circulant matrices. Although several works have presented algorithmic solutions to the global pursuit problem under this new model, very few truly effective guarantees are known for the success of such methods. In the first of this two-part work, we address the theoretical aspects of the sparse convolutional model, providing the first meaningful answers to corresponding questions of uniqueness of solutions and success of pursuit algorithms. To this end, we generalize mathematical quantities, such as the $\ell_0$ norm, the mutual coherence and the Spark, to their counterparts in the convolutional setting, which intrinsically capture local measures of the global model. In a companion paper, we extend the analysis to a noisy regime, addressing the stability of the sparsest solutions and pursuit algorithms, and demonstrate practical approaches for solving the global pursuit problem via simple local processing.
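For orientation, the convolutional model studied in both parts can be sketched as follows (notation assumed here, not quoted from the paper): the global signal is $\mathbf{x} = \mathbf{D}\boldsymbol{\Gamma}$, where $\mathbf{D} \in \mathbb{R}^{N \times Nm}$ concatenates $m$ banded circulant matrices, and the generalized $\ell_0$ measure is the stripe norm

$\|\boldsymbol{\Gamma}\|_{0,\infty} = \max_i \|\boldsymbol{\gamma}_i\|_0,$

the maximal number of nonzeros in any stripe $\boldsymbol{\gamma}_i$ of consecutive coefficients that can affect a single patch of the signal -- the kind of local measure of the global model referred to above.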
The celebrated sparse representation model has led to remarkable results in various signal processing tasks in the last decade. However, despite its initial purpose of serving as a global prior for entire signals, it has been commonly used for modeling low-dimensional patches due to the computational constraints it entails when deployed with learned dictionaries. A way around this problem has been recently proposed, adopting a convolutional sparse representation model. This approach assumes that the global dictionary is a concatenation of banded circulant matrices. While several works have presented algorithmic solutions to the global pursuit problem under this new model, very few truly effective guarantees are known for the success of such methods. In this work, we address the theoretical aspects of the convolutional sparse model, providing the first meaningful answers to questions of uniqueness of solutions and success of pursuit algorithms, both greedy and convex relaxations, in ideal and noisy regimes. To this end, we generalize mathematical quantities, such as the $\ell_0$ norm, mutual coherence, Spark and RIP, to their counterparts in the convolutional setting, intrinsically capturing local measures of the global model. On the algorithmic side, we demonstrate how to solve the global pursuit problem using simple local processing, thus offering a first-of-its-kind bridge between global modeling of signals and their patch-based local treatment.
In this two-part paper, we consider the problem of secure network coding when the information rate and the security level can change over time. To efficiently solve this problem, we put forward local-encoding-preserving secure network coding, where a family of secure linear network codes (SLNCs) is called local-encoding-preserving (LEP) if all the SLNCs in the family share a common local encoding kernel at each intermediate node in the network. In this paper (Part II), we first consider the design of a family of LEP SLNCs for a fixed rate and a flexible security level. We present a novel and efficient approach for constructing, from an existing SLNC, an LEP SLNC with the same rate and a security level increased by one. Next, we consider the design of a family of LEP SLNCs for a fixed dimension (equal to the sum of the rate and the security level) and a flexible rate/security-level pair. We propose another novel approach for designing an SLNC such that the same SLNC can be applied to all rate and security-level pairs with the fixed dimension. Two polynomial-time algorithms are also developed for efficient implementation of the two approaches. Furthermore, we prove that neither approach incurs any penalty on the required field size for the existence of SLNCs, in terms of the best known lower bound by Guang and Yeung. Finally, we consider the ultimate problem of designing a family of LEP SLNCs that can be applied to all possible pairs of rate and security level. By combining the construction of a family of LEP SLNCs for a fixed security level and a flexible rate (obtained in Part I) with the two families constructed in the current paper in suitable ways, we obtain a family of LEP SLNCs applicable to all possible pairs of rate and security level. Three such constructions are presented.
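To unpack the LEP condition: in a linear network code, each intermediate node $t$ forms the symbol on every outgoing channel $e$ as a linear combination of the symbols arriving on its incoming channels (a standard formulation, with notation assumed here):

$y_e = \sum_{d \in \mathrm{In}(t)} k_{d,e}\, y_d, \qquad e \in \mathrm{Out}(t),$

where the coefficients $k_{d,e}$ constitute the local encoding kernel at $t$. A family of SLNCs is LEP precisely when all of its codes share the same kernels $k_{d,e}$ at every intermediate node, so that switching the rate/security-level pair requires changing the encoding only at the source.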
The sparsity-in-levels model recently inspired a new generation of effective acquisition and reconstruction modalities for compressive imaging. Moreover, it naturally arises in various areas of signal processing such as parallel acquisition, radar, and the sparse corruptions problem. Reconstruction strategies for sparse-in-levels signals usually rely on a suitable convex optimization program. Notably, although iterative and greedy algorithms can outperform convex optimization in terms of computational efficiency and have been studied extensively in the case of standard sparsity, little is known about their generalizations to the sparse-in-levels setting. In this paper, we bridge this gap by showing new stable and robust uniform recovery guarantees for sparse-in-levels variants of the iterative hard thresholding and CoSaMP algorithms. Our theoretical analysis generalizes recovery guarantees currently available in the case of standard sparsity and compares favorably to sparse-in-levels guarantees for weighted $\ell^1$ minimization. In addition, we propose and numerically test an extension of the orthogonal matching pursuit algorithm for sparse-in-levels signals.
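As a concrete illustration of the projection these level-aware algorithms build on, here is a minimal NumPy sketch of iterative hard thresholding with a level-wise thresholding step; the names (levels as index sets, per-level sparsities s) are assumptions for this sketch, not the paper's exact variant.

import numpy as np

def project_sparse_in_levels(x, levels, s):
    # keep the s[l] largest-magnitude entries within each level;
    # levels: list of integer index arrays partitioning the coordinates of x
    out = np.zeros_like(x)
    for idx, s_l in zip(levels, s):
        if s_l == 0:
            continue
        local = x[idx]
        keep = np.argsort(np.abs(local))[-s_l:]   # positions of the s_l largest entries
        out[idx[keep]] = local[keep]
    return out

def iht_in_levels(A, y, levels, s, n_iter=100, step=1.0):
    # iterative hard thresholding with the sparse-in-levels projection;
    # step=1.0 implicitly assumes A is normalized so that ||A||_2 <= 1
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project_sparse_in_levels(x + step * (A.T @ (y - A @ x)), levels, s)
    return x

Replacing the projection of standard IHT with this level-wise one is the structural change the paper's recovery guarantees must account for.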
We theoretically investigate schemes to discriminate between two nonorthogonal quantum states given multiple copies. We consider a number of state discrimination schemes as applied to nonorthogonal, mixed states of a qubit. In particular, we examine the difference that local and global optimization of local measurements makes to the probability of obtaining an erroneous result, in the regime of finite numbers of copies $N$, and in the asymptotic limit as $N \rightarrow \infty$. Five schemes are considered: optimal collective measurements over all copies, locally optimal local measurements in a fixed single-qubit measurement basis, globally optimal fixed local measurements, locally optimal adaptive local measurements, and globally optimal adaptive local measurements. Here, adaptive measurements are those for which the measurement basis can depend on prior measurement results. For each of these measurement schemes we determine the probability of error (for finite $N$) and the scaling of this error in the asymptotic limit. In the asymptotic limit, adaptive schemes have no advantage over the optimal fixed local scheme, and except for states with less than 2% mixture, the most naive scheme (locally optimal fixed local measurements) is as good as any noncollective scheme. For finite $N$, however, the most sophisticated local scheme (globally optimal adaptive local measurements) is better than any other noncollective scheme, for any degree of mixture.
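For context, the benchmark against which the noncollective schemes are measured is the optimal collective measurement, whose error is given by the standard Helstrom bound (a textbook fact, stated here with assumed prior probabilities $p_0$ and $p_1$):

$P_e^{\min} = \tfrac{1}{2}\left(1 - \left\| p_0\,\rho_0^{\otimes N} - p_1\,\rho_1^{\otimes N} \right\|_1\right),$

where $\|\cdot\|_1$ denotes the trace norm.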