
Non-line-of-sight Imaging

Published by: Daniele Faccio
Publication date: 2020
Research language: English





Emerging single-photon-sensitive sensors combined with advanced inverse methods to process picosecond-accurate time-stamped photon counts have given rise to unprecedented imaging capabilities. Rather than imaging photons that travel along direct paths from a source to an object and back to the detector, non-line-of-sight (NLOS) imaging approaches analyse photons scattered from multiple surfaces that travel along indirect light paths to estimate 3D images of scenes outside the direct line of sight of a camera, hidden by a wall or other obstacles. Here we review recent advances in the field of NLOS imaging, discussing how to see around corners and future prospects for the field.
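For orientation, here is a minimal sketch (not the review's own method) of the backprojection idea that underlies many NLOS inverse methods: each time-stamped photon count recorded at a wall point votes for every hidden-scene voxel whose round-trip distance matches the measured arrival time. The function names and the 4 ps bin width are illustrative assumptions.

```python
# Minimal confocal backprojection sketch (illustrative only): a photon count in
# time bin t at wall point p votes for every hidden voxel v whose round-trip
# distance 2*|v - p| matches c*t.
import numpy as np

C = 3.0e8          # speed of light, m/s
BIN = 4.0e-12      # assumed time-bin width, s (picosecond scale, as in the text)

def backproject(transients, wall_pts, voxels):
    """transients: (P, T) photon counts; wall_pts: (P, 3); voxels: (V, 3)."""
    volume = np.zeros(len(voxels))
    for counts, p in zip(transients, wall_pts):
        d = np.linalg.norm(voxels - p, axis=1)             # wall point -> voxel, m
        t_bin = np.round(2.0 * d / (C * BIN)).astype(int)  # round-trip time bin
        ok = t_bin < counts.shape[0]
        volume[ok] += counts[t_bin[ok]]                    # accumulate votes
    return volume
```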


Read also

Non-line-of-sight (NLOS) imaging enables monitoring around corners and is promising for diverse applications. The resolution of transient NLOS imaging is limited to the centimeter scale, mainly by the temporal resolution of the detectors. Here, we construct an up-conversion single-photon detector with a high temporal resolution of ~1.4 ps and a low noise count rate of 5 counts per second (cps). Notably, the detector operates at room temperature and at near-infrared wavelengths. Using this detector, we demonstrate high-resolution, low-noise NLOS imaging. Our system provides a 180 µm axial resolution and a 2 mm lateral resolution, which is more than one order of magnitude better than in previous experiments. These results open avenues for high-resolution NLOS imaging techniques in relevant applications.
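As a rough plausibility check (not stated in the abstract), the axial resolution implied by a detector timing resolution $\Delta t$ follows from the round-trip path of the light:

$$
\Delta z \approx \frac{c\,\Delta t}{2} = \frac{(3\times10^{8}\ \mathrm{m/s})\,(1.4\times10^{-12}\ \mathrm{s})}{2} \approx 0.21\ \mathrm{mm},
$$

which is of the same order as the reported 180 µm axial resolution.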
164 - Dayu Zhu, Wenshan Cai (2021)
Conventional imaging records only photons sent directly from the object to the detector, while non-line-of-sight (NLOS) imaging takes the indirect light into account. Most NLOS solutions employ a transient scanning process, followed by a physics-based algorithm to reconstruct the NLOS scene. However, transient detection requires sophisticated apparatus, with long scanning times and low robustness to the ambient environment, and the reconstruction algorithms are typically time-consuming and computationally expensive. Here we propose a new NLOS solution that addresses these shortcomings, with innovations in both equipment and algorithm. We use an inexpensive commercial Lidar for detection, with much higher scanning speed and better compatibility with real-world imaging. Our reconstruction framework is deep-learning based, with a generative two-step remapping strategy to guarantee high reconstruction fidelity. The overall detection and reconstruction process allows for millisecond responses, with millimeter-level reconstruction precision. We have experimentally tested the proposed solution on both synthetic and real objects, and further demonstrated that our method is applicable to full-color NLOS imaging.
We consider the non-line-of-sight (NLOS) imaging of an object using the light reflected off a diffusive wall. The wall scatters incident light such that a lens is no longer useful to form an image. Instead, we exploit the 4D spatial coherence function to reconstruct a 2D projection of the obscured object. The approach is completely passive in the sense that no control over the light illuminating the object is assumed, and it is compatible with the partially coherent fields ubiquitous in both indoor and outdoor environments. We formulate a multi-criteria convex optimization problem for reconstruction, which fuses the reflected field's intensity and spatial coherence information at different scales. Our formulation leverages established optics models of light propagation and scattering and exploits the sparsity common to many images in different bases. We also develop an algorithm based on the alternating direction method of multipliers (ADMM) to efficiently solve the proposed convex program. A means for analyzing the null space of the measurement matrices is provided, as well as a means for weighting the contribution of individual measurements to the reconstruction. This paper holds promise to advance passive imaging in challenging NLOS regimes in which the intensity does not necessarily retain distinguishable features, and it provides a framework for multi-modal information fusion for efficient scene reconstruction.
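The abstract above names ADMM as the solver for its multi-criteria program. As a hedged illustration of that building block only, here is a minimal ADMM loop for a single sparsity-regularized least-squares term; the paper's actual objective fuses intensity and coherence data at several scales, and all names below are assumptions.

```python
# Minimal ADMM sketch for  min_x 0.5*||A x - b||^2 + lam*||x||_1,
# standing in for the paper's richer multi-criteria convex program.
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    n = A.shape[1]
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))      # factor once, reuse
    Atb = A.T @ b
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(iters):
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))  # x-update (quadratic)
        z = soft_threshold(x + u, lam / rho)               # z-update (L1 prox)
        u = u + x - z                                      # scaled dual update
    return z
```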
Non-Line-of-Sight (NLOS) imaging allows observing objects partially or fully occluded from direct view, by analyzing indirect diffuse reflections off a secondary, relay surface. Despite its many potential applications, existing methods lack practical usability due to several shared limitations, including the assumption of single scattering only, the absence of occlusions, and Lambertian reflectance. We lift these limitations by transforming the NLOS problem into a virtual Line-Of-Sight (LOS) one. Since imaging information cannot be recovered from the irradiance arriving at the relay surface, we introduce the concept of the phasor field, a mathematical construct representing a fast variation in irradiance. We show that NLOS light transport can be modeled as the propagation of a phasor-field wave, which can be solved accurately by the Rayleigh-Sommerfeld diffraction integral. We demonstrate for the first time NLOS reconstruction of complex scenes with strong multiply-scattered and ambient light, arbitrary materials, large depth range, and occlusions. Our method handles these challenging cases without explicitly developing a light transport model. By leveraging existing fast algorithms, we outperform existing methods in terms of execution speed, computational complexity, and memory use. We believe that our approach will help unlock the potential of NLOS imaging and the development of novel applications not restricted to lab conditions. For example, we demonstrate both refocusing and transient NLOS videos of real-world, complex scenes with large depth.
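To make the core operation named in this abstract concrete, the sketch below shows a discrete Rayleigh-Sommerfeld propagation of phasor-field samples from the relay surface to hidden-scene voxels. It is illustrative only: the full method also models the virtual illumination wave and uses fast solvers, and the function and parameter names are assumptions.

```python
# Minimal discrete Rayleigh-Sommerfeld propagation of a phasor field sampled on
# the relay wall to a set of hidden-scene voxels (spherical-wave kernel).
import numpy as np

def rsd_propagate(phasor, relay_pts, voxels, wavelength):
    """phasor: (P,) complex samples at relay_pts (P, 3); returns (V,) complex field."""
    k = 2.0 * np.pi / wavelength                    # phasor-field wavenumber
    field = np.zeros(len(voxels), dtype=complex)
    for amp, p in zip(phasor, relay_pts):
        r = np.linalg.norm(voxels - p, axis=1)      # propagation distances
        field += amp * np.exp(1j * k * r) / r       # e^{ikr}/r kernel
    return field
```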
130 - Siyuan Shen, Zi Wang, Ping Liu (2021)
We present a neural modeling framework for Non-Line-of-Sight (NLOS) imaging. Previous solutions have sought to explicitly recover the 3D geometry (e.g., as point clouds) or voxel density (e.g., within a pre-defined volume) of the hidden scene. In contrast, inspired by the recent Neural Radiance Field (NeRF) approach, we use a multi-layer perceptron (MLP) to represent the neural transient field, or NeTF. However, NeTF measures the transient over spherical wavefronts rather than the radiance along lines. We therefore formulate a spherical-volume NeTF reconstruction pipeline applicable to both confocal and non-confocal setups. Compared with NeRF, NeTF samples a much sparser set of viewpoints (scanning spots), and the sampling is highly uneven. We thus introduce a Monte Carlo technique to improve robustness in the reconstruction. Comprehensive experiments on synthetic and real datasets demonstrate that NeTF provides higher-quality reconstruction and preserves fine details largely missing in the state of the art.
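For concreteness, here is a minimal sketch of the kind of MLP field such an approach might use, mapping a 3D point plus a spherical viewing direction to density and albedo. The 5D input, layer sizes, and the omission of positional encoding and of the spherical-wavefront transient rendering are simplifying assumptions, not the paper's architecture.

```python
# Minimal MLP scene representation in the spirit of NeTF: (x, y, z, theta, phi)
# -> (volume density, albedo). Rendering over spherical wavefronts is omitted.
import torch
import torch.nn as nn

class TransientField(nn.Module):
    def __init__(self, in_dim=5, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),                 # -> (raw density, raw albedo)
        )

    def forward(self, x):                         # x: (..., 5)
        sigma, rho = self.net(x).unbind(-1)
        return torch.relu(sigma), torch.sigmoid(rho)
```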