
3D imaging from multipath temporal echoes

Added by Alex Turpin
Publication date: 2020
Language: English




Echo-location is a broad approach to imaging and sensing that includes man-made RADAR, LIDAR, and SONAR, as well as animal navigation. However, full 3D information based on echo-location requires some form of scanning of the scene in order to provide the spatial location of the echo origin points. Without this spatial information, imaging objects in 3D is a very challenging task, as the inverse retrieval problem is strongly ill-posed. Here, we show that the temporal information encoded in return echoes that are reflected multiple times within a scene is sufficient to faithfully render an image in 3D. Numerical modelling and an information-theoretic perspective prove the concept and provide insight into the role of the multipath information. We experimentally demonstrate the concept by using both radio-frequency and acoustic waves to image individuals moving in a closed environment.
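
A toy forward model may help make the multipath claim concrete. The sketch below is my own simplification, not the authors' pipeline: it places a single co-located pulsed emitter/receiver in a 2D rectangular room and lists the round-trip times of the direct echo from a point object plus the echoes returning via one bounce off each wall (image-source construction). The room size, sensor position, and propagation speed are assumed values. Two object positions give different multipath timing signatures, which is the spatial information the paper's reconstruction relies on.

```python
import numpy as np

# Toy multipath forward model (a sketch, not the authors' code):
# one co-located pulsed emitter/receiver records the arrival times of the
# direct echo plus one-bounce wall echoes from a point object in a 2D room.

C = 343.0                      # propagation speed in m/s (acoustic case)
ROOM = np.array([4.0, 3.0])    # assumed room size Lx x Ly in metres
SENSOR = np.array([2.0, 0.2])  # assumed emitter/receiver position

def wall_images(p):
    """Mirror a 2D point across the four walls of the rectangular room."""
    (x, y), (lx, ly) = p, ROOM
    return np.array([[-x, y], [2 * lx - x, y], [x, -y], [x, 2 * ly - y]])

def arrival_times(obj):
    """Direct and single-bounce round-trip times (seconds) for a point object."""
    obj = np.asarray(obj, float)
    d_direct = np.linalg.norm(obj - SENSOR)
    times = [2 * d_direct / C]                      # direct round trip
    for img in wall_images(SENSOR):                 # out to object, back via one wall
        times.append((d_direct + np.linalg.norm(obj - img)) / C)
    return np.sort(times)

def histogram(obj, bins=200, t_max=0.06):
    """Crude temporal histogram of echo arrivals, as a single-point sensor sees it."""
    h, _ = np.histogram(arrival_times(obj), bins=bins, range=(0.0, t_max))
    return h

# Two object positions produce clearly different multipath signatures.
h1, h2 = histogram([1.0, 1.5]), histogram([3.0, 2.0])
print("histogram bins that differ:", int(np.count_nonzero(h1 != h2)))
```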



Related research

A new focal-plane three-dimensional (3D) imaging method based on temporal ghost imaging is proposed and demonstrated. By exploiting the advantages of temporal ghost imaging, this method enables slow integrating cameras to perform 3D surface imaging in the framework of sequential flood illumination and focal-plane detection. The depth information of 3D objects is easily lost when imaging with traditional cameras, but it can be reconstructed with high resolution through temporal correlation between the received signals and reference signals. Combined with a two-dimensional (2D) projection image obtained in a single shot, a 3D image of the object can be achieved. The feasibility and performance of this focal-plane 3D imaging method are verified through theoretical analysis and numerical experiments in this paper.
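
The temporal-correlation step can be illustrated in a few lines of NumPy. This is a simplification under idealized assumptions, not the paper's implementation: each pixel's signal is modelled as a delayed copy of the reference illumination sequence plus noise, and the delay, hence the depth, is recovered by correlating against shifted copies of the reference.

```python
import numpy as np

# Idealized sketch of depth recovery by temporal correlation (not the paper's code).
rng = np.random.default_rng(0)
n_frames, n_pix = 512, 16
reference = rng.random(n_frames)              # temporally structured illumination
true_delay = rng.integers(0, 40, size=n_pix)  # per-pixel round-trip delay (in frames)

# Simulated per-pixel signals: delayed reference plus detection noise.
signals = np.stack([np.roll(reference, d) for d in true_delay])
signals += 0.1 * rng.standard_normal(signals.shape)

# Temporal correlation: pick the shift of the reference that best matches each pixel.
shifts = np.arange(n_frames)
corr = np.stack([[np.dot(sig, np.roll(reference, s)) for s in shifts] for sig in signals])
est_delay = corr.argmax(axis=1)

print("recovered all per-pixel delays:", bool(np.all(est_delay == true_delay)))
```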
We review progress toward the design and implementation of quantum plenoptic cameras, radically novel 3D imaging devices that exploit both momentum-position entanglement and photon-number correlations to provide the typical refocusing and ultra-fast, scanning-free 3D imaging capability of plenoptic devices, along with dramatically enhanced performance unattainable in standard plenoptic cameras: diffraction-limited resolution, large depth of focus, and ultra-low noise. To further increase the volumetric resolution beyond the Rayleigh diffraction limit, and reach the quantum limit, we are also developing dedicated protocols based on quantum Fisher information. However, for the quantum advantages of the proposed devices to be effective and appealing to end users, two main challenges need to be tackled. First, because of the large number of frames required for correlation measurements to reach an acceptable SNR, quantum plenoptic imaging would require, if implemented with commercially available high-resolution cameras, acquisition times ranging from tens of seconds to a few minutes. Second, processing this large amount of data to retrieve 3D images or refocused 2D images requires high-performance and time-consuming computation. To address these challenges, we are developing high-resolution SPAD arrays and high-performance low-level programming of ultra-fast electronics, combined with compressive sensing and quantum tomography algorithms, with the aim of reducing both the acquisition and the processing time by two orders of magnitude. Routes toward exploitation of the QPI devices are also discussed.
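
The acquisition-time argument above is simple arithmetic: time scales as the number of frames needed for an acceptable SNR divided by the frame rate. The numbers in the sketch below are illustrative assumptions, not values from the paper.

```python
# Back-of-envelope acquisition-time estimate; all figures are assumptions chosen
# only to illustrate how a faster SPAD array shortens correlation-based imaging.
frames_for_snr = 30_000        # assumed number of frames for an acceptable SNR

for camera, fps in [("commercial high-resolution camera (assumed 500 fps)", 500),
                    ("high-resolution SPAD array (assumed 50 000 fps)", 50_000)]:
    print(f"{camera}: {frames_for_snr / fps:.1f} s acquisition")
```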
Inverse problems are essential to imaging applications. In this paper, we propose a model-based deep learning network, named FISTA-Net, that combines the interpretability and generality of the model-based Fast Iterative Shrinkage/Thresholding Algorithm (FISTA) with the strong regularization and tuning-free advantages of data-driven neural networks. By unfolding FISTA into a deep network, the architecture of FISTA-Net consists of multiple gradient-descent, proximal-mapping, and momentum modules in cascade. Unlike FISTA, the gradient matrix in FISTA-Net can be updated during iteration, and a proximal operator network is developed for nonlinear thresholding that can be learned through end-to-end training. Key parameters of FISTA-Net, including the gradient step size, thresholding value, and momentum scalar, are tuning-free and learned from training data rather than hand-crafted. We further impose positivity and monotonicity constraints on these parameters to ensure they converge properly. The experimental results, evaluated both visually and quantitatively, show that FISTA-Net can optimize parameters for different imaging tasks, i.e., Electromagnetic Tomography (EMT) and X-ray Computed Tomography (X-ray CT). It outperforms state-of-the-art model-based and deep learning methods and exhibits good generalization ability over other competitive learning-based approaches under different noise levels.
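
For reference, the classical FISTA iteration that FISTA-Net unfolds can be written compactly so the three modules (gradient step, proximal soft-thresholding, momentum) are explicit. The sketch below is the classical baseline on a synthetic sparse-recovery problem, not the network itself: in FISTA-Net the step size, threshold, and momentum scalar become learned per-layer parameters, and the soft-threshold is replaced by a trained proximal network.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam=0.05, n_iter=200):
    """Classical FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)                    # gradient-descent module
        x_new = soft_threshold(y - step * grad, step * lam)   # proximal-mapping module
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        y = x_new + ((t - 1) / t_new) * (x_new - x)           # momentum module
        x, t = x_new, t_new
    return x

# Tiny synthetic test: recover a sparse vector from noisy linear measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = fista(A, b)
print("relative reconstruction error:",
      round(float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)), 3))
```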
Mengjia Xi, Hui Chen, Yuan Yuan (2019)
Recently, ghost imaging has been attracting attention because its mechanism could lead to many applications inaccessible to conventional imaging methods. However, high-contrast and high-resolution imaging remain challenging, due to the low signal-to-noise ratio (SNR) and the demand for a high sampling rate in detection. To circumvent these challenges, we propose a ghost imaging scheme that exploits Haar wavelets as illuminating patterns with a bi-frequency light-projecting system and frequency-selecting single-pixel detectors. This method provides theoretically 100% image contrast and high detection SNR, which reduces the requirement for a high dynamic range of the detectors, enabling high-resolution ghost imaging. Moreover, it can greatly reduce the sampling rate (far below the Nyquist limit) for a sparse object by adaptively abandoning unnecessary patterns during the measurement. These characteristics are experimentally verified with a resolution of 512 × 512 and a sampling rate lower than 5%. A high-resolution (1000 × 1000 × 1000) 3D reconstruction of an object is also achieved from multi-angle images.
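
The linear algebra behind the Haar-pattern measurements can be sketched as follows. This is a simplification: in the real scheme the signed pattern values are realized optically by the bi-frequency projection, and unnecessary patterns are skipped adaptively during the measurement, whereas here we simply threshold the measured coefficients afterwards.

```python
import numpy as np

# Single-pixel imaging with 2D Haar patterns, reduced to its linear algebra.
def haar_matrix(n):
    """Orthonormal 1D Haar transform matrix, n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    return np.vstack([np.kron(h, [1, 1]),
                      np.kron(np.eye(n // 2), [1, -1])]) / np.sqrt(2)

n = 16
H = haar_matrix(n)
scene = np.zeros((n, n))
scene[4:9, 6:12] = 1.0                                # sparse toy object

# Sequentially "project" every 2D Haar pattern and record the bucket signal.
coeffs = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        pattern = np.outer(H[i], H[j])                # one structured pattern
        coeffs[i, j] = np.sum(pattern * scene)        # single-pixel measurement

# Sub-sampling for a sparse object: only the non-negligible coefficients matter,
# so most patterns could have been abandoned without affecting the image.
kept = np.abs(coeffs) > 1e-12
recon = H.T @ (coeffs * kept) @ H                     # inverse 2D Haar transform

print("patterns actually needed:", int(kept.sum()), "of", n * n,
      "| max reconstruction error:", float(np.abs(recon - scene).max()))
```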
Traditional paradigms for imaging rely on the use of a spatial structure, either in the detector (pixel arrays) or in the illumination (patterned light). Removing the spatial structure in the detector or illumination, i.e., imaging with just a single-point sensor, would require solving a very strongly ill-posed inverse retrieval problem that to date has not been solved. Here, we demonstrate a data-driven approach in which full 3D information is obtained with just a single-point, single-photon avalanche diode that records the arrival time of photons reflected from a scene illuminated with short pulses of light. Imaging with single-point time-of-flight (temporal) data opens new routes in terms of speed, size, and functionality. As an example, we show how training based on an optical time-of-flight camera enables a compact radio-frequency impulse radio detection and ranging transceiver to provide 3D images.
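
The forward model behind this single-point approach is easy to state: under pulsed flood illumination, every scene point contributes photon counts at its round-trip delay, and the single-point detector only sees the resulting temporal histogram. The sketch below builds that histogram for a toy scene; the bin width and scene values are assumptions, and the learned inverse map from histogram to 3D image, which is the paper's contribution, is not included.

```python
import numpy as np

# Toy forward model: collapse a depth map into the single temporal histogram
# that a single-point SPAD would record (not the authors' training pipeline).
C = 3e8                 # speed of light, m/s
BIN = 100e-12           # assumed 100 ps histogram bin width
N_BINS = 400

def tof_histogram(depth_map, reflectivity=None):
    """Histogram of round-trip arrival times (counts per bin) for a depth map in metres."""
    d = np.asarray(depth_map, float).ravel()
    w = np.ones_like(d) if reflectivity is None else np.asarray(reflectivity, float).ravel()
    bins = np.clip((2.0 * d / C / BIN).astype(int), 0, N_BINS - 1)
    hist = np.zeros(N_BINS)
    np.add.at(hist, bins, w)    # accumulate all returns arriving in the same bin
    return hist

# Two toy scenes: a flat wall at 3 m, and the same wall with a person at 1.5 m.
wall = np.full((32, 32), 3.0)
scene = wall.copy()
scene[8:28, 12:20] = 1.5
h_empty, h_person = tof_histogram(wall), tof_histogram(scene)
print("histogram bins that changed:", int(np.count_nonzero(h_empty != h_person)))
```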