
A Variable Density Sampling Scheme for Compressive Fourier Transform Interferometry

Publication date: 2018. Language: English.





Fourier Transform Interferometry (FTI) is an appealing Hyperspectral (HS) imaging modality for many applications demanding high spectral resolution, e.g., in fluorescence microscopy. However, the effective resolution of FTI is limited by the durability of biological elements when exposed to illuminating light: overexposed elements are subject to photo-bleaching and become unable to fluoresce. In this context, acquiring biological HS volumes by sampling the Optical Path Difference (OPD) axis at the Nyquist rate forces unpleasant trade-offs between spectral resolution, quality of the HS volume, and light exposure intensity. We propose two variants of the FTI imager, Coded Illumination-FTI (CI-FTI) and Structured Illumination-FTI (SI-FTI), based on the theory of compressive sensing (CS). These schemes efficiently modulate light exposure temporally (in CI-FTI) or spatiotemporally (in SI-FTI). Leveraging a variable density sampling strategy recently introduced in CS, we provide near-optimal illumination strategies that minimize the light exposure imposed on a biological specimen while preserving the spectral resolution. Our analysis focuses on two criteria: (i) the trade-off between exposure intensity and the quality of the reconstructed HS volume at a given spectral resolution; (ii) maximizing HS volume quality for a fixed spectral resolution and a constrained exposure budget. Our contributions can be adapted to an FTI imager without hardware modifications. The reconstruction of HS volumes from CS-FTI measurements relies on an $\ell_1$-norm minimization problem promoting a spatiospectral sparsity prior. Numerically, we validate the proposed methods on synthetic data and on simulated CS measurements (derived from actual FTI measurements) under various scenarios. In particular, biological HS volumes can be reconstructed with a three-to-ten-fold reduction in light exposure.
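The core idea, subsampling the OPD axis with a variable density and reconstructing the spectrum by $\ell_1$ minimization, can be illustrated numerically. The sketch below is a minimal 1-D toy, not the authors' algorithm: the 1/(1+d) sampling profile, the generic ISTA solver, and all parameter values are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 512, 128          # Nyquist-rate OPD samples vs. exposure budget

    # Toy sparse spectrum (a few emission lines, purely illustrative)
    x = np.zeros(n)
    x[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)

    # Interferogram along the OPD axis = inverse DFT of the spectrum
    F = np.sqrt(n) * np.fft.ifft(np.eye(n), axis=0)   # unitary, rows orthonormal
    interferogram = F @ x

    # Variable density sampling: favour OPD samples near zero path difference
    # (the 1/(1+d) profile is an assumption, not the paper's optimal law)
    d = np.abs(np.fft.fftfreq(n) * n)                 # wrap-around distance from 0
    p = 1.0 / (1.0 + d)
    p /= p.sum()
    support = rng.choice(n, size=m, replace=False, p=p)

    A, y = F[support, :], interferogram[support]      # compressive model y = A x

    # l1 reconstruction via ISTA, a generic proximal solver
    lam = 1e-3
    step = 1.0 / np.linalg.norm(A, 2) ** 2            # safe gradient step size
    xhat = np.zeros(n, dtype=complex)
    for _ in range(500):
        g = xhat - step * (A.conj().T @ (A @ xhat - y))
        xhat = np.exp(1j * np.angle(g)) * np.maximum(np.abs(g) - step * lam, 0.0)

    print("relative error:", np.linalg.norm(xhat.real - x) / np.linalg.norm(x))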



Related Research

Compressive sensing (CS) combines data acquisition with compression coding to reduce the number of measurements required to reconstruct a sparse signal. In optics, this usually takes the form of projecting the field onto sequences of random spatial patterns selected from an appropriate random ensemble. We show here that CS can be exploited in native optics hardware without introducing added components. Specifically, we show that random sub-Nyquist sampling of an interferogram suffices to reconstruct the field's modal structure. The distribution of reduced sensing matrices corresponding to random measurements is provably incoherent and isotropic, which allows CS to be carried out successfully.
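A quick numeric check of the incoherence claim (a generic sketch in the spirit of the abstract, not the authors' code): subsample rows of the unitary DFT matrix at random, as random sub-Nyquist sampling of an interferogram effectively does, and measure the mutual coherence of the reduced sensing matrix.

    import numpy as np

    rng = np.random.default_rng(1)
    n, m = 256, 64

    # Unitary DFT matrix: interferogram samples vs. modal coefficients
    F = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)

    # Random sub-Nyquist sampling keeps m of the n interferogram samples
    rows = rng.choice(n, size=m, replace=False)
    A = F[rows, :] * np.sqrt(n / m)          # renormalised: unit-norm columns

    # Mutual coherence: worst-case correlation between distinct columns
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0)
    print("mutual coherence:", G.max())      # small => incoherent sampling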
Some pioneering works have investigated embedding cryptographic properties in compressive sampling (CS), in a manner similar to the one-time pad symmetric cipher. This paper tackles the problem of constructing a CS-based symmetric cipher under the key-reuse circumstance, i.e., a cipher that resists common attacks even when a fixed measurement matrix is used multiple times. To this end, we suggest a bi-level protected CS (BLP-CS) model that exploits the advantages of non-RIP measurement matrix constructions. Specifically, two kinds of artificial basis-mismatch techniques are investigated to construct key-related sparsifying bases. The encoding process of BLP-CS is simply a random linear projection, the same as in the basic CS model; decoding the linear measurements, however, requires knowledge of both the key-dependent sensing matrix and its sparsifying basis. The proposed model is exemplified by sampling images as a joint data-acquisition and protection layer for resource-limited wireless sensors. Simulation results and numerical analyses show that the new model can be applied in circumstances where the measurement matrix is reused.
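The point that encoding is an ordinary random linear projection can be made concrete with a toy keyed projection (my simplification; BLP-CS additionally hides a key-dependent sparsifying basis, which is not modelled here):

    import numpy as np

    def keyed_matrix(key: int, m: int, n: int) -> np.ndarray:
        """Derive a Gaussian measurement matrix from a shared secret key."""
        return np.random.default_rng(key).standard_normal((m, n)) / np.sqrt(m)

    key, m, n = 0xC0FFEE, 64, 256
    x = np.zeros(n)
    x[[3, 42, 99]] = [1.0, -2.0, 0.5]      # sparse plaintext signal

    A = keyed_matrix(key, m, n)            # sender derives A from the key
    y = A @ x                              # ciphertext: a plain linear projection

    # The receiver regenerates the same A from the shared key and recovers x
    # with any standard sparse solver; an eavesdropper without the key sees
    # only y and cannot form the sensing matrix.
    B = keyed_matrix(key, m, n)
    print(np.allclose(A, B))               # True: the key reproduces A exactly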
The Random Demodulator (RD) and the Modulated Wideband Converter (MWC) are two recently proposed compressed sensing (CS) techniques for the acquisition of continuous-time, spectrally sparse signals. They extend the standard CS paradigm from sampling discrete, finite-dimensional signals to sampling continuous and possibly infinite-dimensional ones, and thus establish the ability to capture these signals at sub-Nyquist sampling rates. The RD and the MWC have remarkably similar structures (similar block diagrams), but their reconstruction algorithms and signal models strongly differ. To date, few results exist that compare these systems, and owing to the potential impacts they could have on spectral estimation in applications like electromagnetic scanning and cognitive radio, we more fully investigate their relationship in this paper. We show that the RD and the MWC are both based on the general concept of random filtering, but employ significantly different sampling functions. We also investigate system sensitivities (or robustness) to sparse signal model assumptions. Lastly, we show that block convolution is a fundamental aspect of the MWC, allowing it to successfully sample and reconstruct block-sparse (multiband) signals. Based on this concept, we propose a new acquisition system for continuous-time signals whose amplitudes are block sparse. The paper includes detailed time and frequency domain analyses of the RD and the MWC that differ, sometimes substantially, from published results.
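The random-filtering structure of the RD is easy to state in a few lines. The following sketch uses the standard textbook RD structure with arbitrary toy parameters: chip the input with a pseudorandom ±1 sequence, apply an integrate-and-dump filter, and form the equivalent sensing matrix acting on the spectral coefficients.

    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 1024, 64                        # Nyquist-rate length, output samples

    # Spectrally sparse input: three active tones on the DFT grid
    s = np.zeros(n)
    s[[17, 300, 801]] = rng.standard_normal(3)
    x = np.fft.ifft(s)                     # discrete stand-in for the input

    # Random Demodulator: chip with a +/-1 sequence, then integrate-and-dump
    chips = rng.choice([-1.0, 1.0], size=n)
    y = (chips * x).reshape(m, n // m).sum(axis=1)    # m sub-Nyquist samples

    # Equivalent sensing matrix mapping the spectrum s directly to y
    H = np.kron(np.eye(m), np.ones(n // m)) * chips   # integrate-and-dump after chipping
    Phi = H @ np.fft.ifft(np.eye(n), axis=0)
    print(np.allclose(Phi @ s, y))         # True: y is a linear sketch of s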
In this paper, we redefine the Graph Fourier Transform (GFT) under the DSP$_\mathrm{G}$ framework. We consider the Jordan eigenvectors of the directed Laplacian as graph harmonics and the corresponding eigenvalues as the graph frequencies. For this purpose, we propose a shift operator based on the directed Laplacian of a graph. Based on this shift operator, we then define the total variation of graph signals, which is used for frequency ordering. The proposed definition of the GFT yields a natural frequency ordering and interpretation. Moreover, we show that the proposed shift operator makes LSI filters under DSP$_\mathrm{G}$ polynomial in the directed Laplacian.
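For a diagonalisable directed Laplacian, the construction reduces to an eigen-decomposition; a minimal sketch follows (the Jordan-basis machinery needed for defective Laplacians is omitted here).

    import numpy as np

    # Directed 4-cycle: adjacency W, out-degree matrix D, Laplacian L = D - W
    W = np.array([[0, 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [1, 0, 0, 0]], dtype=float)
    L = np.diag(W.sum(axis=1)) - W

    # Graph harmonics and frequencies: eigenvectors/eigenvalues of L
    freqs, V = np.linalg.eig(L)

    # GFT of a graph signal = coefficients in the harmonic basis
    s = np.array([1.0, 2.0, 0.5, -1.0])
    s_hat = np.linalg.solve(V, s)          # analysis: V^{-1} s
    s_rec = V @ s_hat                      # synthesis back to the vertex domain
    print(np.allclose(s_rec.real, s))      # True: this Laplacian is diagonalisable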
Imaging data from upcoming radio telescopes requires processing to be distributed at large scales. This paper presents a distributed Fourier transform algorithm for radio interferometry processing. It generates arbitrary grid chunks with full non-coplanarity corrections while minimising memory residency, data transfer, and compute work. We utilise window functions to isolate the influence between regions of grid and image space, which allows us to distribute image data between nodes and to construct parts of grid space exactly when and where needed. The developed prototype easily handles image data terabytes in size while generating visibilities at high throughput and accuracy. Scaling is demonstrated to be better than cubic in baseline length, reducing the risk involved in growing radio astronomy processing to the Square Kilometre Array and similar telescopes.
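The distribution hinges on the linearity of the Fourier transform: each node can transform only its own region of image space, and the per-node contributions sum to the full grid. The toy below shows just that linearity in 1-D; the paper's window functions, which keep each contribution compact in grid space, are not modelled here.

    import numpy as np

    rng = np.random.default_rng(3)
    n, n_chunks = 1024, 4
    image = rng.standard_normal(n)

    # Each "node" transforms its own region of image space; the full grid
    # is the sum of the per-node contributions, by linearity of the DFT.
    grid = np.zeros(n, dtype=complex)
    for c in range(n_chunks):
        part = np.zeros(n)
        lo, hi = c * n // n_chunks, (c + 1) * n // n_chunks
        part[lo:hi] = image[lo:hi]          # this node's region of image space
        grid += np.fft.fft(part)            # its contribution to grid space

    print(np.allclose(grid, np.fft.fft(image)))   # True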
