We report field experiments, conducted in the presence of fog, aimed at imaging under poor visibility. By means of intensity modulation at the source and two-dimensional quadrature lock-in detection in software at the receiver, a significant enhancement of the contrast-to-noise ratio was achieved in imaging beacons over hectometric distances. Further, by illuminating the field of view with a modulated source, the technique helped reveal objects that were otherwise obscured by multiple scattering of light. This method thus holds promise for aiding various forms of navigation under poor visibility due to fog.
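The software quadrature lock-in described above can be illustrated with a minimal sketch (not the authors' implementation): each pixel's time series, taken from a stack of camera frames, is projected onto in-phase and quadrature references at the known modulation frequency, and the per-pixel amplitude map is recovered. The frame rate, modulation frequency, and the synthetic "beacon" below are all illustrative assumptions.

```python
import numpy as np

def quadrature_lockin(frames, f_mod, fps):
    """Per-pixel software lock-in: recover the amplitude of intensity
    modulation at f_mod from a frame stack of shape (n_frames, H, W)."""
    n = frames.shape[0]
    t = np.arange(n) / fps
    ref_i = np.cos(2 * np.pi * f_mod * t)    # in-phase reference
    ref_q = np.sin(2 * np.pi * f_mod * t)    # quadrature reference
    # Project every pixel's time series onto both references.
    I = np.tensordot(ref_i, frames, axes=(0, 0)) * 2 / n
    Q = np.tensordot(ref_q, frames, axes=(0, 0)) * 2 / n
    return np.hypot(I, Q)                    # modulation-amplitude image

# Synthetic check: one beacon pixel modulated at 5 Hz, buried in
# strong unmodulated background glare (illustrative numbers).
fps, f_mod, n = 100.0, 5.0, 400
t = np.arange(n) / fps
frames = np.random.default_rng(0).normal(10.0, 0.5, (n, 8, 8))
frames[:, 4, 4] += 2.0 * np.cos(2 * np.pi * f_mod * t)  # hidden beacon
amp = quadrature_lockin(frames, f_mod, fps)
```

Because the unmodulated glare averages out under both references, the beacon pixel dominates the amplitude map even though it is invisible in any single frame.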
In Polarization Discrimination Imaging, the amplitude of a sinusoid produced by a rotating analyzer, representing residual polarized light and carrying information about the object, is detected with the help of a lock-in amplifier. When turbidity increases beyond a certain level, the lock-in amplifier fails to detect the weak sinusoidal component in the transmitted light. In this work we have employed the principle of Stochastic Resonance and used a 3-level quantizer to detect the amplitude of sinusoids that were not detectable with a lock-in amplifier. With the three-level quantizer we have employed three different approaches to extract the amplitude of the weak sinusoids: (a) using the probability of the quantized output crossing a certain threshold in the quantizer, (b) maximizing the likelihood function for the quantized detected intensity data, and (c) deriving an expression for the expected power in the detected output and comparing it with the experimentally measured power. We have demonstrated these non-linear estimation methods by detecting the hidden object in experimental data from a polarization discrimination imaging system. When the turbidity increased to L/l = 5.05 (where l is the transport mean free path and L is the thickness of the turbid medium), analysis of the data by the proposed methods revealed the presence of the object from the estimated amplitudes, which was not possible using the lock-in amplifier system alone.
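A toy, linearized version of approach (a) can be sketched as follows. This is not the authors' estimator; it assumes zero-mean Gaussian noise of known standard deviation sigma and a weak sinusoid (A much smaller than sigma), in which case the mean correlation between the quantized output and the reference satisfies E[q·ref] ≈ A·pdf(theta/sigma)/sigma, with pdf the standard normal density, and can be inverted for A.

```python
import numpy as np

def quantize3(y, theta):
    """Three-level quantizer: +1 above +theta, -1 below -theta, else 0."""
    return np.sign(y) * (np.abs(y) > theta)

def estimate_amplitude(y, ref, theta, sigma):
    """Linearized threshold-crossing estimate of a weak sinusoid's
    amplitude A from the quantized signal: for A << sigma,
    E[q * ref] ~= A * pdf(theta/sigma) / sigma, so invert for A."""
    q = quantize3(y, theta)
    pdf = np.exp(-0.5 * (theta / sigma) ** 2) / np.sqrt(2.0 * np.pi)
    return np.mean(q * ref) * sigma / pdf

# Synthetic check: a sinusoid at one fifth of the noise floor, i.e. a
# regime where a direct hard-threshold detector would see only noise.
rng = np.random.default_rng(1)
n, A, sigma, theta = 200_000, 0.2, 1.0, 1.0
phase = 2 * np.pi * rng.random(n)
ref = np.sin(phase)
y = A * ref + rng.normal(0.0, sigma, n)
A_hat = estimate_amplitude(y, ref, theta, sigma)
```

The noise here plays the constructive role characteristic of stochastic resonance: without it, a subthreshold sinusoid (A < theta) would never trigger the quantizer at all.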
Multi-shot echo planar imaging (msEPI) is a promising approach to achieve high in-plane resolution with high sampling efficiency and low T2* blurring. However, due to geometric distortion, shot-to-shot phase variations, and potential subject motion, msEPI remains a challenge in MRI. In this work, we introduce acquisition and reconstruction strategies for robust, high-quality msEPI without phase navigators. We propose Blip Up-Down Acquisition (BUDA) using interleaved blip-up and -down phase encoding, and incorporate B0 forward-modeling into a Hankel structured low-rank model to enable distortion- and navigator-free msEPI. We improve acquisition efficiency and reconstruction quality by incorporating simultaneous multi-slice acquisition and virtual-coil reconstruction into the BUDA technique. We further combine BUDA with the novel RF-encoded gSlider acquisition, dubbed BUDA-gSlider, to achieve rapid high isotropic-resolution MRI. Deploying BUDA-gSlider with model-based reconstruction allows for distortion-free whole-brain 1 mm isotropic T2 mapping in about 1 minute. It also provides whole-brain 1 mm isotropic diffusion imaging with high geometric fidelity and SNR efficiency. We finally incorporate sinusoidal wave gradients during the EPI readout to better exploit coil sensitivity encoding with controlled aliasing.
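Why blip-up and blip-down acquisitions pair naturally with B0 forward-modeling can be seen in a toy 1-D sketch: the same off-resonance field shifts the two naively reconstructed images in opposite directions along the phase-encode axis, so a forward model that knows B0 can explain both with a single undistorted image. Everything below (sizes, echo spacing, the uniform 125 Hz field, sign conventions) is illustrative and not from the paper.

```python
import numpy as np

def epi_forward_1d(rho, b0_hz, esp, blip_up=True):
    """Toy 1-D EPI forward model along phase encoding: k-space line j
    (k = j - n/2) is sampled at time t_j, during which off-resonance
    b0 (Hz) accrues phase. Reversing the blip polarity reverses the
    k-vs-time ordering, flipping the sign of the image-domain shift."""
    n = len(rho)
    k = np.arange(n) - n // 2
    x = np.arange(n) - n // 2
    t = np.arange(n) * esp                       # one echo spacing per line
    if not blip_up:
        t = t[::-1]                              # blip-down: reversed traversal
    E = np.exp(-2j * np.pi * np.outer(k, x) / n  # Fourier encoding
               + 2j * np.pi * np.outer(t, b0_hz))  # B0 phase accrual
    return E @ rho

def naive_recon(s):
    """Inverse DFT that ignores B0 -> geometrically distorted image."""
    n = len(s)
    k = np.arange(n) - n // 2
    x = np.arange(n) - n // 2
    F = np.exp(-2j * np.pi * np.outer(k, x) / n)
    return F.conj().T @ s / n

n = 64
rho = np.zeros(n); rho[32] = 1.0       # point object at the center
b0 = np.full(n, 125.0)                 # uniform 125 Hz off-resonance
esp = 0.5e-3                           # 0.5 ms echo spacing
up = np.abs(naive_recon(epi_forward_1d(rho, b0, esp, blip_up=True)))
dn = np.abs(naive_recon(epi_forward_1d(rho, b0, esp, blip_up=False)))
```

With these numbers the point object lands 4 pixels to one side in the blip-up image and 4 pixels to the other side in the blip-down image, mirrored about its true position; this opposite-polarity distortion is the information BUDA exploits.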
A hyperspectral image (HSI) contains both spatial patterns and spectral information, and has been widely used in food safety, remote sensing, and medical detection. However, the acquisition of hyperspectral images is usually costly due to the complicated apparatus required to acquire the optical spectrum. Recently, it has been reported that an HSI can be reconstructed from a single RGB image using convolutional neural network (CNN) algorithms. Compared with traditional hyperspectral cameras, the CNN-based method is simple, portable, and low cost. In this study, we focused on the influence of the RGB camera spectral sensitivity (CSS) on the HSI. A Xenon lamp combined with a monochromator was used as the standard light source to calibrate the CSS. The experimental results show that the CSS plays a significant role in the reconstruction accuracy of an HSI. In addition, we proposed a new HSI reconstruction network in which the dimensional structure of the original hyperspectral datacube is modified by a 3D matrix transpose to improve reconstruction accuracy.
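The role of the CSS can be made concrete with the forward model that any RGB-to-HSI network implicitly inverts: each RGB channel is the hyperspectral cube integrated against that channel's sensitivity curve. The sketch below uses toy Gaussian sensitivity curves and an assumed 31-band cube; it is an illustration of the dependence, not the paper's calibration procedure.

```python
import numpy as np

def hsi_to_rgb(hsi, css):
    """Forward model: project a hyperspectral cube (H, W, B) through a
    camera-spectral-sensitivity matrix css (B, 3) to simulate the RGB
    image the camera would record. A CNN trained to invert this mapping
    depends on css, which is why CSS calibration affects accuracy."""
    return np.einsum('hwb,bc->hwc', hsi, css)

# Toy setup: 4x4 pixels, 31 bands over 400-700 nm, Gaussian CSS curves
# peaking at assumed B/G/R wavelengths (illustrative values only).
bands = np.linspace(400.0, 700.0, 31)
centers = np.array([450.0, 550.0, 600.0])
css = np.exp(-0.5 * ((bands[:, None] - centers) / 40.0) ** 2)  # (31, 3)
hsi = np.random.default_rng(2).random((4, 4, 31))
rgb = hsi_to_rgb(hsi, css)
```

Two cameras with different `css` matrices map the same scene to different RGB images, so a reconstruction network trained under one CSS cannot be expected to invert data from another, consistent with the calibration result reported above.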
The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing methods exploit redundant information under good environmental conditions, they fail in adverse weather, where the sensory streams can be asymmetrically distorted. These rare edge-case scenarios are not represented in available datasets, and existing fusion architectures are not designed to handle them. To address this challenge, we present a novel multimodal dataset acquired over 10,000 km of driving in northern Europe. Although this dataset is the first large multimodal dataset in adverse weather, with 100k labels for lidar, camera, radar, and gated NIR sensors, it does not facilitate training because extreme weather is rare. To this end, we present a deep fusion network for robust fusion that does not require a large corpus of labeled training data covering all asymmetric distortions. Departing from proposal-level fusion, we propose a single-shot model that adaptively fuses features, driven by measurement entropy. We validate the proposed method, trained on clean data, on our extensive validation dataset. Code and data are available at https://github.com/princeton-computational-imaging/SeeingThroughFog.
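The intuition behind entropy-driven fusion can be sketched in a few lines: a distorted stream (e.g. a fogged-out camera) collapses toward a few gray levels, its measurement entropy drops, and its contribution can be down-weighted relative to an unaffected sensor. This toy scalar-weight version is only an illustration of the idea, not the paper's feature-level network.

```python
import numpy as np

def stream_entropy(img, bins=32):
    """Shannon entropy of an intensity histogram, used as a proxy for
    how informative (vs. washed-out / distorted) a sensor stream is."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def entropy_weights(streams):
    """Softmax over per-stream entropies -> adaptive fusion weights."""
    h = np.array([stream_entropy(s) for s in streams])
    w = np.exp(h - h.max())          # subtract max for numerical stability
    return w / w.sum()

# Toy example: a rich lidar-intensity map vs. a camera frame washed
# out by fog (nearly constant gray with a little sensor noise).
rng = np.random.default_rng(3)
clear_lidar = rng.random((64, 64))
foggy_camera = np.full((64, 64), 0.5) + rng.normal(0.0, 0.01, (64, 64))
w = entropy_weights([clear_lidar, foggy_camera])
```

Here the fogged camera's histogram occupies only a few bins, so its weight falls well below the lidar's; in the paper this weighting happens adaptively on feature maps inside a single-shot detector rather than as a global scalar.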
Ghost imaging (GI) is a novel imaging technique based on the second-order correlation of light fields. Due to the limited number of samples available in practice, traditional GI methods often reconstruct objects with unsatisfactory quality. To improve the imaging results, many reconstruction methods have been developed, yet the reconstruction quality is still fundamentally restricted by the modulated light fields. In this paper, we propose to improve the imaging quality of GI by optimizing the light fields, which is realized via matrix optimization for a learned dictionary incorporating the sparsity prior of objects. A closed-form solution for the sampling matrix, which enables successive sampling, is derived. Simulation and experimental results show that the proposed scheme leads to better imaging quality than state-of-the-art light-field optimization methods, especially at low sampling rates.
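The second-order-correlation baseline that such optimized light fields improve upon can be sketched in a few lines: correlate the single-pixel (bucket) detector values with the known illumination patterns, G(x) = ⟨B·S(x)⟩ − ⟨B⟩⟨S(x)⟩. The random patterns and toy object below are illustrative; the paper's contribution is choosing the patterns (the sampling matrix) rather than this reconstruction step.

```python
import numpy as np

def ghost_image(patterns, bucket):
    """Traditional GI reconstruction via second-order correlation:
    G(x) = <B * S(x)> - <B><S(x)>, where S are the modulated light
    fields (n_samples, H, W) and B the bucket-detector values."""
    db = bucket - bucket.mean()
    ds = patterns - patterns.mean(axis=0)
    return np.tensordot(db, ds, axes=(0, 0)) / len(bucket)

# Toy experiment: random speckle-like patterns, a simple transmissive
# object, and a simulated bucket signal (total transmitted intensity).
rng = np.random.default_rng(4)
H = W = 16
obj = np.zeros((H, W)); obj[4:12, 6:10] = 1.0
patterns = rng.random((600, H, W))
bucket = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))
g = ghost_image(patterns, bucket)
```

With unoptimized random patterns, 600 samples give a noisy but recognizable reconstruction of the object; optimizing the sampling matrix against a learned dictionary, as proposed above, raises the quality attainable at the same (or a lower) sampling rate.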