A deep-learning-based non-line-of-sight (NLOS) imaging system is developed to image an occluded object off a scattering surface. The neural network is trained using only handwritten digits, and yet exhibits the capability to reconstruct patterns distinct from the training set, including physical objects. It can also reconstruct a cartoon video from its scattering patterns in real time, demonstrating the robustness and generalization capability of the deep-learning-based approach. Several scattering surfaces with varying degrees of Lambertian and specular contributions were examined experimentally; it is found that for a Lambertian surface the structural similarity index (SSIM) of reconstructed images is about 0.63, while the SSIM obtained from a scattering surface possessing a specular component can be as high as 0.93. A forward model of light transport was developed based on the Phong scattering model. Scattering patterns from Phong surfaces with different degrees of specular contribution were numerically simulated. It is found that a specular contribution as small as 5% can enhance the SSIM from 0.83 to 0.93, consistent with the results from experimental data. Singular value spectra of the underlying transfer matrix were calculated for various Phong surfaces. As the weight and the shininess factor increase, i.e., as the specular contribution increases, the singular value spectrum broadens and the 50-dB bandwidth increases by more than 4X with a 10% specular contribution, which indicates that in the presence of even a small amount of specular contribution the NLOS measurement retains significantly more singular value components, leading to higher reconstruction fidelity. With an ordinary camera and an incoherent light source, this work enables a low-cost, real-time NLOS imaging system without the need for an explicit physical model of the underlying light transport process.
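The singular-value analysis described above can be sketched numerically. The snippet below builds a toy 1D transfer matrix in which each hidden-object point illuminates wall patches that scatter toward a fixed camera through a Phong reflectance (diffuse term plus a specular lobe), then counts the singular values within 50 dB of the largest. The geometry, grid sizes, and parameter values (`kd`, `ks`, `alpha`) are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def transfer_matrix(n=64, kd=1.0, ks=0.0, alpha=50):
    """Schematic 1D NLOS transfer matrix: object points illuminate wall
    patches that scatter toward a fixed camera via a Phong reflectance.
    kd: diffuse weight, ks: specular weight, alpha: shininess factor.
    All geometry and parameter values are illustrative assumptions."""
    x_obj = np.linspace(-0.5, 0.5, n)    # hidden-object coordinates (m)
    x_wall = np.linspace(-0.5, 0.5, n)   # wall-patch coordinates (m)
    normal = np.array([0.0, 1.0])        # wall surface normal
    cam = np.array([0.0, 1.0])           # camera position (m)
    A = np.zeros((n, n))
    for m, xw in enumerate(x_wall):
        v = cam - np.array([xw, 0.0])
        v /= np.linalg.norm(v)                 # wall patch -> camera direction
        for k, xo in enumerate(x_obj):
            d = np.array([xw - xo, -1.0])
            d /= np.linalg.norm(d)             # object -> wall patch direction
            cos_in = max(-d @ normal, 0.0)     # incidence cosine (diffuse term)
            r = d - 2 * (d @ normal) * normal  # mirror reflection of incident ray
            cos_spec = max(r @ v, 0.0)         # alignment with camera ray
            A[m, k] = kd * cos_in + ks * cos_spec ** alpha
    return A

def bandwidth_50db(A):
    """Count singular values within 50 dB (amplitude) of the largest."""
    s = np.linalg.svd(A, compute_uv=False)
    return int(np.sum(s >= s[0] * 10 ** (-50 / 20)))

lamb = bandwidth_50db(transfer_matrix(kd=1.0, ks=0.0))   # purely Lambertian
spec = bandwidth_50db(transfer_matrix(kd=0.9, ks=0.1))   # 10% specular
print(lamb, spec)  # the specular surface retains more components
```

Even in this crude model, the sharply peaked specular lobe adds high-frequency structure to the otherwise smooth Lambertian kernel, which slows the singular-value decay and widens the usable bandwidth, in line with the trend the abstract reports.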
We develop a scannerless non-line-of-sight three-dimensional imaging system based on a commercial 32×32 SPAD camera combined with a 70 ps pulsed laser. In our experiment, 1024 time histograms can be acquired synchronously in 3 s with an average time r
We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging. Previous solutions have sought to explicitly recover the 3D geometry (e.g., as point clouds) or voxel density (e.g., within a pre-defined volume) of the hidden scene. In con
We consider the non-line-of-sight (NLOS) imaging of an object using the light reflected off a diffusive wall. The wall scatters incident light such that a lens is no longer useful to form an image. Instead, we exploit the 4D spatial coherence functio
Emerging single-photon-sensitive sensors combined with advanced inverse methods to process picosecond-accurate time-stamped photon counts have given rise to unprecedented imaging capabilities. Rather than imaging photons that travel along direct path
Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects. Active NLOS imaging systems rely on the capture of the time of flight of light through the scene, and have shown great promise for t