
Non-line-of-Sight Imaging via Neural Transient Fields

Posted by Siyuan Shen
Publication date: 2021
Paper language: English





We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging. Previous solutions have sought to explicitly recover the 3D geometry (e.g., as point clouds) or voxel density (e.g., within a pre-defined volume) of the hidden scene. In contrast, inspired by the recent Neural Radiance Field (NeRF) approach, we use a multi-layer perceptron (MLP) to represent the neural transient field or NeTF. However, NeTF measures the transient over spherical wavefronts rather than the radiance along lines. We therefore formulate a spherical volume NeTF reconstruction pipeline, applicable to both confocal and non-confocal setups. Compared with NeRF, NeTF samples a much sparser set of viewpoints (scanning spots) and the sampling is highly uneven. We thus introduce a Monte Carlo technique to improve the robustness in the reconstruction. Comprehensive experiments on synthetic and real datasets demonstrate NeTF provides higher quality reconstruction and preserves fine details largely missing in the state-of-the-art.
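The spherical-wavefront formulation can be illustrated with a toy sketch (our own simplification, not the authors' code): stand in for the learned MLP with a hand-written scene function mapping 3D points to (density, albedo), and Monte Carlo-integrate their product over spherical wavefronts of growing radius around a scanning spot. The 1/r^4 falloff and the single-sphere scene are illustrative assumptions.

```python
import numpy as np

def scene(points):
    """Toy stand-in for the learned MLP: a single diffuse sphere of
    radius 0.2 centred at (0, 0, 1). Returns (density, albedo) per point."""
    d = np.linalg.norm(points - np.array([0.0, 0.0, 1.0]), axis=-1)
    density = (d < 0.2).astype(float)
    albedo = np.full(len(points), 0.8)
    return density, albedo

def render_transient(spot, radii, n_samples=2048, rng=None):
    """Monte Carlo estimate of a confocal transient at a scan spot: for each
    radius r = c*t/2, sample directions on the upper hemisphere and average
    density * albedo over the spherical wavefront, with a 1/r^4 round-trip
    falloff as in common confocal NLOS image-formation models."""
    rng = np.random.default_rng(0) if rng is None else rng
    v = rng.normal(size=(n_samples, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    v[:, 2] = np.abs(v[:, 2])          # keep directions pointing into the scene
    transient = []
    for r in radii:
        pts = spot + r * v
        density, albedo = scene(pts)
        transient.append((density * albedo).mean() / r ** 4)
    return np.array(transient)

radii = np.linspace(0.5, 1.5, 50)
tau = render_transient(np.zeros(3), radii)
# the transient is nonzero only where the wavefront intersects the hidden
# sphere, i.e. for radii roughly between 0.8 and 1.2
```

In the actual method the hand-written `scene` is replaced by an MLP and the Monte Carlo estimate is made differentiable so the network can be fit to measured transients.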


Read also

Non-line-of-sight (NLOS) imaging is based on capturing the multi-bounce indirect reflections from the hidden objects. Active NLOS imaging systems rely on the capture of the time of flight of light through the scene, and have shown great promise for the accurate and robust reconstruction of hidden scenes without the need for specialized scene setups and prior assumptions. Although existing methods can reconstruct 3D geometries of the hidden scene with excellent depth resolution, accurately recovering object textures and appearance with high lateral resolution remains a challenging problem. In this work, we propose a new problem formulation, called NLOS photography, to specifically address this deficiency. Rather than performing an intermediate estimate of the 3D scene geometry, our method follows a data-driven approach and directly reconstructs 2D images of an NLOS scene that closely resemble the pictures taken with a conventional camera from the location of the relay wall. This formulation largely simplifies the challenging reconstruction problem by bypassing the explicit modeling of 3D geometry, and enables the learning of a deep model with a relatively small training dataset. The results are NLOS reconstructions of unprecedented lateral resolution and image quality.
Non-line-of-sight (NLOS) imaging techniques use light that diffusely reflects off of visible surfaces (e.g., walls) to see around corners. One approach involves using pulsed lasers and ultrafast sensors to measure the travel time of multiply scattered light. Unlike existing NLOS techniques that generally require densely raster scanning points across the entirety of a relay wall, we explore a more efficient form of NLOS scanning that reduces both acquisition times and computational requirements. We propose a circular and confocal non-line-of-sight (C2NLOS) scan that involves illuminating and imaging a common point, and scanning this point in a circular path along a wall. We observe that (1) these C2NLOS measurements consist of a superposition of sinusoids, which we refer to as a transient sinogram, (2) there exist computationally efficient reconstruction procedures that transform these sinusoidal measurements into 3D positions of hidden scatterers or NLOS images of hidden objects, and (3) despite operating on an order of magnitude fewer measurements than previous approaches, these C2NLOS scans provide sufficient information about the hidden scene to solve these different NLOS imaging tasks. We show results from both simulated and real C2NLOS scans.
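The "superposition of sinusoids" observation can be checked numerically with a toy setup (the geometry and numbers below are illustrative, not from the paper): for a scan spot moving on a circle on the wall, a single hidden point scatterer produces a round-trip time-of-flight trace that is very nearly sinusoidal in the scan angle, i.e. one curve of the transient sinogram.

```python
import numpy as np

R = 0.3                          # radius of the circular scan path on the wall (z = 0)
q = np.array([0.1, 0.0, 1.0])    # a hidden point scatterer behind the wall
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
spots = np.stack([R * np.cos(theta),
                  R * np.sin(theta),
                  np.zeros_like(theta)], axis=1)

# confocal round-trip path length at each scan angle
dist = 2 * np.linalg.norm(spots - q, axis=1)

# fit a + b*cos(theta) + c*sin(theta); for a scatterer that is not too close
# to the wall, the residual of this single-sinusoid fit is tiny
A = np.stack([np.ones_like(theta), np.cos(theta), np.sin(theta)], axis=1)
coef, *_ = np.linalg.lstsq(A, dist, rcond=None)
residual = np.max(np.abs(A @ coef - dist))
```

The amplitude and phase of the fitted sinusoid encode the scatterer's lateral position, while the constant offset encodes its depth, which is what makes the efficient sinogram-domain reconstruction possible.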
Emerging single-photon-sensitive sensors combined with advanced inverse methods to process picosecond-accurate time-stamped photon counts have given rise to unprecedented imaging capabilities. Rather than imaging photons that travel along direct paths from a source to an object and back to the detector, non-line-of-sight (NLOS) imaging approaches analyse photons scattered from multiple surfaces that travel along indirect light paths to estimate 3D images of scenes outside the direct line of sight of a camera, hidden by a wall or other obstacles. Here we review recent advances in the field of NLOS imaging, discussing how to see around corners and future prospects for the field.
Time-of-flight based non-line-of-sight (NLOS) imaging approaches require precise calibration of illumination and detector positions on the visible scene to produce reasonable results. If this calibration error is sufficiently high, reconstruction can fail entirely without any indication to the user. In this work, we highlight the necessity of building autocalibration into NLOS reconstruction in order to handle mis-calibration. We propose a forward model of NLOS measurements that is differentiable with respect to both the hidden scene albedo and the virtual illumination and detector positions. With only a mean squared error loss and no regularization, our model enables joint reconstruction and recovery of calibration parameters by minimizing the measurement residual using gradient descent. We demonstrate that our method is able to produce robust reconstructions using simulated and real data where the applied calibration error causes other state-of-the-art algorithms to fail.
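The joint-recovery idea can be sketched with a deliberately tiny 1-D stand-in (our own toy, not the paper's forward model): a Gaussian pulse returns after a round trip of length 2*(d0 + e), where e is an unknown detector calibration offset, and plain gradient descent on the MSE residual recovers both the albedo a and the offset e.

```python
import numpy as np

d0 = 1.0                                  # nominal one-way distance
sigma = 0.05                              # pulse width
t = np.linspace(1.5, 2.5, 200)            # time (path-length) axis

def forward(a, e):
    """Differentiable toy forward model: Gaussian echo of amplitude a
    centred at round-trip length 2*(d0 + e)."""
    return a * np.exp(-((t - 2 * (d0 + e)) ** 2) / (2 * sigma ** 2))

a_true, e_true = 0.7, 0.04                # ground truth
y = forward(a_true, e_true)               # simulated measurement

a, e = 1.0, 0.0                           # mis-calibrated initial guess
for _ in range(2000):
    g = np.exp(-((t - 2 * (d0 + e)) ** 2) / (2 * sigma ** 2))
    r = a * g - y                         # measurement residual
    grad_a = 2 * np.mean(r * g)                                   # d(MSE)/da
    grad_e = 2 * np.mean(r * a * g * (t - 2 * (d0 + e)) * 2 / sigma ** 2)  # d(MSE)/de
    a -= 0.5 * grad_a
    e -= 0.001 * grad_e
```

The real method does the same thing at scale, with automatic differentiation through a full NLOS forward model instead of hand-written gradients, and with a volumetric albedo instead of a scalar.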
Chen Zhou, 2020
A deep learning based non-line-of-sight (NLOS) imaging system is developed to image an occluded object off a scattering surface. The neural net is trained using only handwritten digits, and yet exhibits the capability to reconstruct patterns distinct from the training set, including physical objects. It can also reconstruct a cartoon video from its scattering patterns in real time, demonstrating the robustness and generalization capability of the deep learning based approach. Several scattering surfaces with varying degrees of Lambertian and specular contributions were examined experimentally; it is found that for a Lambertian surface the structural similarity index (SSIM) of reconstructed images is about 0.63, while the SSIM obtained from a scattering surface possessing a specular component can be as high as 0.93. A forward model of light transport was developed based on the Phong scattering model. Scattering patterns from Phong surfaces with different degrees of specular contribution were numerically simulated. It is found that a specular contribution of as small as 5% can enhance the SSIM from 0.83 to 0.93, consistent with the results from experimental data. Singular value spectra of the underlying transfer matrix were calculated for various Phong surfaces. As the weight and the shininess factor increase, i.e., the specular contribution increases, the singular value spectrum broadens and the 50-dB bandwidth is increased by more than 4X with a 10% specular contribution, which indicates that in the presence of even a small amount of specular contribution the NLOS measurement can retain significantly more singular value components, leading to higher reconstruction fidelity. With an ordinary camera and incoherent light source, this work enables a low-cost, real-time NLOS imaging system without the need for an explicit physical model of the underlying light transport process.
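For reference, the Phong model mentioned above mixes a Lambertian (diffuse) lobe with a specular lobe controlled by a weight and a shininess exponent. A minimal sketch (the parameter names and values are our own, not the paper's):

```python
import numpy as np

def phong(n, l, v, spec_weight=0.1, shininess=50.0):
    """Scattered intensity at a surface point with normal n, for light
    direction l and view direction v (all normalized inside). spec_weight
    mixes the Lambertian and specular lobes: 0 is a pure Lambertian surface,
    larger values add the mirror-like component discussed above."""
    n, l, v = (x / np.linalg.norm(x) for x in (n, l, v))
    diffuse = max(0.0, float(n @ l))
    r = 2 * float(n @ l) * n - l           # mirror reflection of l about n
    specular = max(0.0, float(r @ v)) ** shininess
    return (1 - spec_weight) * diffuse + spec_weight * specular
```

The specular lobe is sharply concentrated around the mirror direction, which is why even a small `spec_weight` concentrates energy, broadens the singular value spectrum of the transfer matrix, and improves reconstruction fidelity.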