
Bi-level Protected Compressive Sampling

Added by Leo Yu Zhang
Publication date: 2014
Language: English





Some pioneering works have investigated embedding cryptographic properties in compressive sampling (CS) in a way similar to a one-time pad symmetric cipher. This paper tackles the problem of constructing a CS-based symmetric cipher under the key-reuse circumstance, i.e., the cipher remains resistant to common attacks even when a fixed measurement matrix is used multiple times. To this end, we suggest a bi-level protected CS (BLP-CS) model that takes advantage of non-RIP measurement matrix constructions. Specifically, two kinds of artificial basis-mismatch techniques are investigated to construct key-related sparsifying bases. It is demonstrated that the encoding process of BLP-CS is simply a random linear projection, the same as in the basic CS model. However, decoding the linear measurements requires knowledge of both the key-dependent sensing matrix and its sparsifying basis. The proposed model is exemplified by sampling images, serving as a joint data acquisition and protection layer for resource-limited wireless sensors. Simulation results and numerical analyses justify that the new model can be applied in circumstances where the measurement matrix is reused.
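As the abstract notes, BLP-CS encoding is nothing more than a random linear projection with a key-dependent measurement matrix. The Python sketch below illustrates that idea only; the function and parameter names (`derive_measurement_matrix`, the PRNG seeding) are illustrative assumptions, and the paper's actual construction additionally hides the key-related sparsifying basis, which is not modeled here.

```python
import numpy as np

def derive_measurement_matrix(key: int, m: int, n: int) -> np.ndarray:
    """Derive an m x n Gaussian measurement matrix from a secret key.

    Seeding a PRNG with the key stands in for the paper's key-related
    construction; the real BLP-CS design also hides the sparsifying
    basis, which this toy does not model.
    """
    rng = np.random.default_rng(key)
    return rng.standard_normal((m, n)) / np.sqrt(m)

# Encoding is a plain random linear projection, y = Phi @ x.
key = 0xC0FFEE                      # shared secret (illustrative value)
n, m = 256, 64                      # signal length, number of measurements
x = np.zeros(n)
support = np.random.default_rng(1).choice(n, size=8, replace=False)
x[support] = 1.0                    # sparse test signal
Phi = derive_measurement_matrix(key, m, n)
y = Phi @ x                         # measurements sent over the channel
```

Without the key, an attacker sees only `y`; recovering `x` would require both the key-dependent `Phi` and, in the full BLP-CS model, the key-related sparsifying basis.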



Related research

The Random Demodulator (RD) and the Modulated Wideband Converter (MWC) are two recently proposed compressed sensing (CS) techniques for the acquisition of continuous-time spectrally-sparse signals. They extend the standard CS paradigm from sampling discrete, finite dimensional signals to sampling continuous and possibly infinite dimensional ones, and thus establish the ability to capture these signals at sub-Nyquist sampling rates. The RD and the MWC have remarkably similar structures (similar block diagrams), but their reconstruction algorithms and signal models strongly differ. To date, few results exist that compare these systems, and owing to the potential impacts they could have on spectral estimation in applications like electromagnetic scanning and cognitive radio, we more fully investigate their relationship in this paper. We show that the RD and the MWC are both based on the general concept of random filtering, but employ significantly different sampling functions. We also investigate system sensitivities (or robustness) to sparse signal model assumptions. Lastly, we show that block convolution is a fundamental aspect of the MWC, allowing it to successfully sample and reconstruct block-sparse (multiband) signals. Based on this concept, we propose a new acquisition system for continuous-time signals whose amplitudes are block sparse. The paper includes detailed time and frequency domain analyses of the RD and the MWC that differ, sometimes substantially, from published results.
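To make the random-filtering view of the RD concrete, here is a minimal discrete-time toy of its front end (random ±1 mixing followed by integrate-and-dump). It is a sketch of the general concept, not the paper's continuous-time analysis, and all names and parameters are illustrative.

```python
import numpy as np

def random_demodulator(x: np.ndarray, R: int, rng) -> np.ndarray:
    """Toy discrete-time Random Demodulator front end.

    Mixes the Nyquist-rate samples x with a pseudorandom +/-1 chipping
    sequence, then integrates and dumps over blocks of R samples,
    reducing the output rate by a factor of R.
    """
    chips = rng.choice([-1.0, 1.0], size=x.size)   # pseudorandom mixing
    mixed = x * chips
    return mixed.reshape(-1, R).sum(axis=1)        # integrate-and-dump

rng = np.random.default_rng(0)
n, R = 1024, 16
t = np.arange(n)
x = np.cos(2 * np.pi * 50 * t / n)     # a single tone: spectrally sparse
y = random_demodulator(x, R, rng)      # n / R = 64 sub-Nyquist measurements
```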
Fourier Transform Interferometry (FTI) is an appealing Hyperspectral (HS) imaging modality for many applications demanding high spectral resolution, e.g., fluorescence microscopy. However, the effective resolution of FTI is limited by the durability of biological elements when exposed to illuminating light: overexposed elements are subject to photo-bleaching and become unable to fluoresce. In this context, acquiring biological HS volumes by sampling the Optical Path Difference (OPD) axis at the Nyquist rate leads to unpleasant trade-offs between spectral resolution, quality of the HS volume, and light exposure intensity. We propose two variants of the FTI imager, Coded Illumination FTI (CI-FTI) and Structured Illumination FTI (SI-FTI), based on the theory of compressive sensing (CS). These schemes efficiently modulate light exposure temporally (in CI-FTI) or spatiotemporally (in SI-FTI). Leveraging a variable-density sampling strategy recently introduced in CS, we provide near-optimal illumination strategies so that the light exposure imposed on a biological specimen is minimized while the spectral resolution is preserved. Our analysis focuses on two criteria: (i) the trade-off between exposure intensity and the quality of the reconstructed HS volume for a given spectral resolution; and (ii) maximizing HS volume quality for a fixed spectral resolution and a constrained exposure budget. Our contributions can be adapted to an FTI imager without hardware modifications. The reconstruction of HS volumes from CS-FTI measurements relies on an $l_1$-norm minimization problem promoting a spatiospectral sparsity prior. We validate the proposed methods numerically on synthetic data and on simulated CS measurements (derived from actual FTI measurements) under various scenarios. In particular, the biological HS volumes can be reconstructed with a three-to-ten-fold reduction in light exposure.
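The reconstruction step mentioned above is an $l_1$-norm minimization. As a generic illustration of how such a problem can be solved (not the authors' solver, and with a plain $l_1$ prior rather than their spatiospectral one), a basic iterative soft-thresholding (ISTA) loop looks like this:

```python
import numpy as np

def ista(A: np.ndarray, y: np.ndarray, lam: float, steps: int = 300) -> np.ndarray:
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        z = x - A.T @ (A @ x - y) / L    # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Small synthetic test: recover a 6-sparse vector from 60 measurements.
rng = np.random.default_rng(0)
m, n = 60, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, size=6, replace=False)] = rng.standard_normal(6)
x_hat = ista(A, A @ x_true, lam=0.01)
```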
Compressive sensing (CS) combines data acquisition with compression coding to reduce the number of measurements required to reconstruct a sparse signal. In optics, this usually takes the form of projecting the field onto sequences of random spatial patterns selected from an appropriate random ensemble. We show here that CS can be exploited in native optics hardware without introducing added components. Specifically, we show that random sub-Nyquist sampling of an interferogram helps reconstruct the field's modal structure. The distribution of reduced sensing matrices corresponding to random measurements is provably incoherent and isotropic, which allows CS to be carried out successfully.
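A minimal sketch of the sampling model described here: randomly retaining a sub-Nyquist subset of interferogram samples amounts to keeping random rows of a modal synthesis matrix. The Fourier basis below is an assumption for illustration, not necessarily the modal basis used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 512, 96                                      # Nyquist grid, samples kept
idx = np.sort(rng.choice(n, size=m, replace=False)) # random sub-Nyquist times

F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # assumed modal (Fourier) synthesis basis
A = F[idx, :]                            # reduced sensing matrix: random rows of F

# Random rows of a unitary DFT are incoherent with sparse modal coefficients,
# which is the property the abstract appeals to.
print(A.shape)                           # (96, 512)
```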
Distributed Compressive Sensing (DCS) improves the signal recovery performance of multi-signal ensembles by exploiting both intra- and inter-signal correlation and sparsity structure. However, the existing DCS was proposed for a very limited class of signal ensembles that share only a single common information component (Baron et al., 2009). In this paper, we propose a generalized DCS (GDCS) that can improve sparse signal detection performance given arbitrary types of common information, classified not only as full common information but also as a variety of partial common information. A theoretical bound on the required number of measurements using the GDCS is obtained. Unfortunately, the GDCS may require substantial a priori knowledge of the various kinds of common information shared within the signal ensemble to enhance performance over the existing DCS. To deal with this problem, we propose a novel algorithm that searches for the correlation structure among the signals, with which the proposed GDCS improves detection performance even without a priori knowledge of the correlation structure, for arbitrarily correlated multi-signal ensembles.
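To make the "full versus partial common information" classification concrete, the following illustrative sketch builds a small ensemble in which every signal shares one sparse common component while only a subset shares another; all sizes and names are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n, J, k = 128, 3, 4            # signal length, ensemble size, sparsity per part

def sparse_vec(k: int) -> np.ndarray:
    """Random k-sparse vector of length n."""
    v = np.zeros(n)
    v[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
    return v

z_full = sparse_vec(k)          # full common component: shared by all signals
z_part = sparse_vec(k)          # partial common component: shared by a subset
signals = []
for j in range(J):
    x = z_full + sparse_vec(k)  # full common part plus a per-signal innovation
    if j < 2:
        x = x + z_part          # only signals 0 and 1 share the partial part
    signals.append(x)
```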
We discuss a novel sampling theorem on the sphere developed by McEwen & Wiaux recently through an association between the sphere and the torus. To represent a band-limited signal exactly, this new sampling theorem requires less than half the number of samples of other equiangular sampling theorems on the sphere, such as the canonical Driscoll & Healy sampling theorem. A reduction in the number of samples required to represent a band-limited signal on the sphere has important implications for compressive sensing, both in terms of the dimensionality and sparsity of signals. We illustrate the impact of this property with an inpainting problem on the sphere, where we show superior reconstruction performance when adopting the new sampling theorem.
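The sample-count reduction can be made concrete with the commonly quoted counts for the two equiangular theorems (roughly 2L^2 for McEwen & Wiaux versus roughly 4L^2 for Driscoll & Healy, for band-limit L); the exact formulas below are stated under that assumption.

```python
def mw_samples(L: int) -> int:
    # McEwen & Wiaux equiangular theorem: (L - 1)(2L - 1) + 1, i.e. ~2L^2
    return (L - 1) * (2 * L - 1) + 1

def dh_samples(L: int) -> int:
    # Driscoll & Healy equiangular theorem: 2L(2L - 1), i.e. ~4L^2
    return 2 * L * (2 * L - 1)

for L in (32, 128, 512):
    print(L, mw_samples(L), dh_samples(L), round(dh_samples(L) / mw_samples(L), 2))
```

For large band-limits the ratio approaches 2, which is the "less than half the number of samples" claim in the abstract.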
