Encoder blind combinatorial compressed sensing

Posted by: Michael Murray
Publication date: 2020
Research field: Informatics Engineering
Paper language: English

In its most elementary form, compressed sensing studies the design of decoding algorithms to recover a sufficiently sparse vector or code from a lower dimensional linear measurement vector. Typically it is assumed that the decoder has access to the encoder matrix, which in the combinatorial case is sparse and binary. In this paper we consider the problem of designing a decoder to recover a set of sparse codes from their linear measurements alone, that is, without access to the encoder matrix. To this end we study the matrix factorisation task of recovering both the encoder and sparse coding matrices from the associated linear measurement matrix. The contribution of this paper is a computationally efficient decoding algorithm, Decoder-Expander Based Factorisation, with strong performance guarantees. In particular, under mild assumptions on the sparse coding matrix and by deploying a novel random encoder matrix, we prove that Decoder-Expander Based Factorisation recovers both the encoder and sparse coding matrices at the optimal measurement rate with high probability and from a near optimal number of measurement vectors. In addition, our experiments demonstrate the efficacy and computational efficiency of our algorithm in practice. Beyond compressed sensing, our results may be of interest to researchers working in areas such as linear sketching, coding theory and matrix compression.
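
To make the measurement model concrete, here is a minimal sketch, with assumed sizes and values, of a sparse binary d-left-regular encoder applied to k-sparse codes; the Decoder-Expander Based Factorisation algorithm itself is not reproduced here.

```python
# A minimal sketch (assumed sizes and values) of the combinatorial compressed
# sensing measurement model: a sparse, binary, d-left-regular encoder matrix A
# maps k-sparse codes X to the linear measurement matrix Y = A X. In the
# encoder-blind setting only Y is observed.
import numpy as np

rng = np.random.default_rng(0)

n, N, d = 256, 64, 8          # code length, measurements per code, left degree
k, num_codes = 5, 200         # sparsity per code, number of code vectors

# d-left-regular binary encoder: each column has exactly d ones in random rows
A = np.zeros((N, n), dtype=int)
for col in range(n):
    A[rng.choice(N, size=d, replace=False), col] = 1

# k-sparse, non-negative codes
X = np.zeros((n, num_codes))
for j in range(num_codes):
    X[rng.choice(n, size=k, replace=False), j] = rng.uniform(1.0, 2.0, size=k)

Y = A @ X                     # the only data available to an encoder-blind decoder
print(Y.shape)                # (64, 200)
```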


Read also

A reinforcement-learning-based non-uniform compressed sensing (NCS) framework for time-varying signals is introduced. The proposed scheme, referred to as RL-NCS, aims to boost the performance of signal recovery through an optimal and adaptive distribution of sensing energy among two groups of coefficients of the signal, referred to as the region of interest (ROI) coefficients and non-ROI coefficients. The coefficients in ROI usually have greater importance and need to be reconstructed with higher accuracy compared to non-ROI coefficients. In order to accomplish this task, the ROI is predicted at each time step using two specific approaches. One of these approaches incorporates a long short-term memory (LSTM) network for the prediction. The other approach employs the previous ROI information for predicting the next step ROI. Using the exploration-exploitation technique, a Q-network learns to choose the best approach for designing the measurement matrix. Furthermore, a joint loss function is introduced for the efficient training of the Q-network as well as the LSTM network. The result indicates a significant performance gain for our proposed method, even for rapidly varying signals and a reduced number of measurements.
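
The core measurement-design idea, distributing more sensing energy to a predicted region of interest by reweighting columns of the measurement matrix, can be sketched as follows; the reinforcement learning and LSTM components of RL-NCS are not modelled, and the ROI indices, energy split, and dimensions are illustrative assumptions.

```python
# A minimal sketch of non-uniform sensing-energy allocation (no RL or LSTM):
# columns of a Gaussian measurement matrix are scaled so that coefficients in
# an assumed region of interest (ROI) receive more sensing energy.
import numpy as np

rng = np.random.default_rng(0)
n, m = 128, 40
roi = np.arange(20, 40)              # assumed predicted ROI support for this time step

energy = np.full(n, 0.5)             # baseline energy for non-ROI coefficients
energy[roi] = 2.0                    # boosted energy for ROI coefficients
energy *= n / energy.sum()           # keep the total sensing energy budget fixed

Phi = rng.standard_normal((m, n)) / np.sqrt(m)
Phi_ncs = Phi * np.sqrt(energy)      # column-wise energy weighting

x = np.zeros(n)
x[roi[:5]] = rng.standard_normal(5)  # a signal whose active coefficients lie in the ROI
y = Phi_ncs @ x                      # non-uniform compressive measurements
```
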
In applications of scanning probe microscopy, images are acquired by raster scanning a point probe across a sample. Viewed from the perspective of compressed sensing (CS), this pointwise sampling scheme is inefficient, especially when the target image is structured. While replacing point measurements with delocalized, incoherent measurements has the potential to yield order-of-magnitude improvements in scan time, implementing the delocalized measurements of CS theory is challenging. In this paper we study a partially delocalized probe construction, in which the point probe is replaced with a continuous line, creating a sensor which essentially acquires line integrals of the target image. We show through simulations, rudimentary theoretical analysis, and experiments, that these line measurements can image sparse samples far more efficiently than traditional point measurements, provided the local features in the sample are sufficiently separated. Despite this promise, practical reconstruction from line measurements poses additional difficulties: the measurements are partially coherent, and real measurements exhibit nonidealities. We show how to overcome these limitations using natural strategies (reweighting to cope with coherence, blind calibration for nonidealities), culminating in an end-to-end demonstration.
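
A toy sketch of the line-measurement idea, using only axis-aligned horizontal and vertical lines on a small sparse image; the continuous line probe described above would also scan rotated lines and exhibit the nonidealities mentioned, neither of which is modelled here.

```python
# Line-integral measurements of a sparse image: each measurement sums the image
# along one horizontal or one vertical scan line (a simplified, axis-aligned
# stand-in for the continuous line probe).
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
img = np.zeros((H, W))
img.flat[rng.choice(H * W, size=6, replace=False)] = 1.0   # a handful of point features

row_integrals = img.sum(axis=1)      # one value per horizontal line scan
col_integrals = img.sum(axis=0)      # one value per vertical line scan
measurements = np.concatenate([row_integrals, col_integrals])
print(measurements.size, "line measurements versus", H * W, "pixels")   # 64 vs 1024
```
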
We characterize the measurement complexity of compressed sensing of signals drawn from a known prior distribution, even when the support of the prior is the entire space (rather than, say, sparse vectors). We show for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees. Moreover, this result is robust to model mismatch, as long as the distribution estimate (e.g., from an invertible generative model) is close to the true distribution in Wasserstein distance. We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
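
A minimal sketch of posterior sampling via unadjusted Langevin dynamics under Gaussian measurements, with a standard Gaussian prior standing in for the deep generative prior used above; dimensions, noise level, step size, and iteration count are illustrative assumptions.

```python
# Unadjusted Langevin dynamics targeting p(x | y) for y = A x + noise, with a
# standard Gaussian prior on x (a stand-in for a generative prior).
import numpy as np

rng = np.random.default_rng(0)
n, m, sigma = 64, 48, 0.05
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
y = A @ x_true + sigma * rng.standard_normal(m)

def grad_log_posterior(x):
    # grad log p(x | y) = A^T (y - A x) / sigma^2 - x  (Gaussian likelihood and prior)
    return A.T @ (y - A @ x) / sigma**2 - x

x, eta = np.zeros(n), 1e-4
for _ in range(5000):
    x = x + 0.5 * eta * grad_log_posterior(x) + np.sqrt(eta) * rng.standard_normal(n)

# distance of this single posterior sample from the ground truth
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```
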
A pre-trained generator has been frequently adopted in compressed sensing (CS) due to its ability to effectively estimate signals with the prior of NNs. In order to further refine the NN-based prior, we propose a framework that allows the generator to utilize additional information from a given measurement for prior learning, thereby yielding more accurate prediction for signals. As our framework has a simple form, it is easily applied to existing CS methods using pre-trained generators. We demonstrate through extensive experiments that our framework exhibits uniformly superior performances by a large margin and can reduce the reconstruction error up to an order of magnitude for some applications. We also explain the experimental success in theory by showing that our framework can slightly relax the stringent signal presence condition, which is required to guarantee the success of signal recovery.
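
A minimal sketch of the underlying generator-based recovery step, with a toy linear map standing in for a pre-trained NN generator: a latent code is fitted to the measurements by gradient descent on the measurement misfit. The prior-refinement framework proposed above is not reproduced, and all dimensions and the step size are assumptions.

```python
# Compressed sensing with a (toy, linear) generator prior: search over latent
# codes z so that the generated signal G z matches the measurements y = A x.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 30, 10
G = rng.standard_normal((n, k)) / np.sqrt(k)   # toy linear "generator"
A = rng.standard_normal((m, n)) / np.sqrt(m)

x_true = G @ rng.standard_normal(k)            # signal lying in the range of the generator
y = A @ x_true

z, lr = np.zeros(k), 0.01
for _ in range(3000):
    residual = A @ (G @ z) - y
    z -= lr * G.T @ (A.T @ residual)           # gradient of ||A G z - y||^2 / 2

x_hat = G @ z
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))   # near-exact recovery here
```
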
The realisation of sensing modalities based on the principles of compressed sensing is often hindered by discrepancies between the mathematical model of its sensing operator, which is necessary during signal recovery, and its actual physical implementation, which can amply differ from the assumed model. In this paper we tackle the bilinear inverse problem of recovering a sparse input signal and some unknown, unstructured multiplicative factors affecting the sensors that capture each compressive measurement. Our methodology relies on collecting a few snapshots under new draws of the sensing operator, and applying a greedy algorithm based on projected gradient descent and the principles of iterative hard thresholding. We explore empirically the sample complexity requirements of this algorithm by testing its phase transition, and show in a practically relevant instance of this problem for compressive imaging that the exact solution can be obtained with only a few snapshots.
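
A rough sketch in the spirit of the greedy scheme described above: from a few snapshots taken under fresh draws of the sensing operator, it alternates an iterative-hard-thresholding (projected gradient) step on the sparse signal with a closed-form least-squares update of the unknown per-sensor gains. Problem sizes, the step size, and the iteration count are illustrative assumptions, and the gains and signal are only identifiable up to a common scale, so the sketch only checks whether the support of the signal is recovered.

```python
# Joint estimation of a k-sparse signal x and unknown sensor gains d from
# snapshots y_l = diag(d) A_l x, alternating IHT on x with per-gain least squares.
import numpy as np

rng = np.random.default_rng(0)
n, m, k, L = 64, 32, 4, 6
A = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(L)]
d_true = 1.0 + 0.3 * rng.standard_normal(m)            # unknown multiplicative gains
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
Y = [d_true * (Al @ x_true) for Al in A]               # one snapshot per operator draw

def hard_threshold(v, k):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

x, d, step = np.zeros(n), np.ones(m), 0.02
for _ in range(300):
    grad = sum(Al.T @ (d * (d * (Al @ x) - yl)) for Al, yl in zip(A, Y))
    x = hard_threshold(x - step * grad, k)             # projected gradient / IHT step on x
    Ax = np.stack([Al @ x for Al in A])                # shape (L, m)
    d = (np.stack(Y) * Ax).sum(axis=0) / np.maximum((Ax ** 2).sum(axis=0), 1e-12)

print(set(np.flatnonzero(x)) == set(np.flatnonzero(x_true)))   # was the support recovered?
```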
