
Ultrafast Parallel LiDAR with Time-encoding and Spectral Scanning: Breaking the Time-of-flight Limit

Published by: Zihan Zang
Publication date: 2021
Paper language: English





Light detection and ranging (LiDAR) has been widely used in autonomous driving and large-scale manufacturing. Although state-of-the-art scanning LiDAR can perform long-range three-dimensional imaging, the frame rate is limited by both the round-trip delay and the beam steering speed, hindering the development of high-speed autonomous vehicles. For hundred-meter-level ranging applications, a several-fold speedup is highly desirable. Here, we uniquely combine fiber-based encoders with wavelength-division multiplexing devices to implement all-optical time-encoding on the illumination light. Using this method, parallel detection and fast inertia-free spectral scanning can be achieved simultaneously with single-pixel detection. As a result, the frame rate of a scanning LiDAR can be scalably multiplied. We demonstrate a 4.4-fold speedup for a maximum 75-m detection range, compared with a time-of-flight-limited laser ranging system. This approach has the potential to improve the velocity of LiDAR-based autonomous vehicles to the regime of hundreds of kilometers per hour and open up a new paradigm for ultrafast-frame-rate LiDAR imaging.
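The time-of-flight limit the abstract refers to can be made concrete: a pulse must return from the maximum range before the next one is sent, so the single-channel point rate is capped at c / (2R). A minimal sketch, where the 4.4 factor is the speedup reported above and the helper names are illustrative, not the paper's method:

```python
# Sketch of the time-of-flight (ToF) frame-rate limit and the parallel
# speedup reported in the abstract. The 4.4 factor comes from the paper;
# the function names and structure are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s


def tof_limited_point_rate(max_range_m: float) -> float:
    """Max single-channel point rate: one pulse must return before the next fires."""
    round_trip_s = 2.0 * max_range_m / C
    return 1.0 / round_trip_s


def parallel_point_rate(max_range_m: float, speedup: float) -> float:
    """Point rate with time-encoded parallel channels sharing a single detector."""
    return speedup * tof_limited_point_rate(max_range_m)


base = tof_limited_point_rate(75.0)    # ~2.0 Mpts/s at a 75-m range
fast = parallel_point_rate(75.0, 4.4)  # ~8.8 Mpts/s with the reported 4.4x speedup
```

At 75 m the round trip takes about 0.5 µs, so a conventional scanner cannot exceed roughly 2 million points per second; time-encoding lets several range returns be in flight and disambiguated at once.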


Read also

Indirect Time-of-Flight (iToF) cameras are a promising depth sensing technology. However, they are prone to errors caused by multi-path interference (MPI) and low signal-to-noise ratio (SNR). Traditional methods, after denoising, mitigate MPI by estimating a transient image that encodes depths. Recently, data-driven methods that jointly denoise and mitigate MPI have become state-of-the-art without using the intermediate transient representation. In this paper, we propose to revisit the transient representation. Using data-driven priors, we interpolate/extrapolate iToF frequencies and use them to estimate the transient image. Given that direct ToF (dToF) sensors capture transient images, we name our method iToF2dToF. The transient representation is flexible. It can be integrated with different rule-based depth sensing algorithms that are robust to low SNR and can deal with ambiguous scenarios that arise in practice (e.g., specular MPI, optical cross-talk). We demonstrate the benefits of iToF2dToF over previous methods in real depth sensing scenarios.
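The iToF ambiguity this abstract addresses follows from phase-based ranging: depth is recovered from the phase shift of a modulated signal and wraps around once per modulation period. A minimal sketch of the underlying relations, assuming ideal sinusoidal modulation (the function names are illustrative, not the paper's API):

```python
import math

# Idealized indirect ToF (iToF) depth recovery. Real iToF pipelines, as the
# abstract notes, must additionally handle multi-path interference and noise;
# this only shows why multiple modulation frequencies are useful.

C = 299_792_458.0  # speed of light, m/s


def depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Depth from the measured phase shift: d = c * phi / (4 * pi * f)."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)


def unambiguous_range(mod_freq_hz: float) -> float:
    """Depth wraps every c / (2 f); beyond this, a single frequency is ambiguous."""
    return C / (2.0 * mod_freq_hz)


# At 20 MHz modulation, depth wraps roughly every 7.5 m.
wrap = unambiguous_range(20e6)
```

Because a single frequency only yields depth modulo this wrap distance, combining several frequencies (as iToF2dToF's frequency interpolation/extrapolation does) is what makes richer transient estimates possible.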
Frequency to time mapping is a powerful technique for observing ultrafast phenomena and non-repetitive events in optics. However, many optical sources operate in wavelength regions, or at power levels, that are not compatible with standard frequency to time mapping implementations. The recently developed free-space angular chirp enhanced delay (FACED) removes many of these limitations, and offers a linear frequency to time mapping in any wavelength region where high-reflectivity mirrors and diffractive optics are available. In this work, we present a detailed formulation of the optical transfer function of a FACED device. Experimentally, we verify the properties of this transfer function, and then present simple guidelines to guarantee the correct operation of a FACED frequency to time measurement. We also experimentally demonstrate the real-time spectral analysis of femtosecond and picosecond pulses using this system.
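At its core, a frequency-to-time measurement relies on a linear map from wavelength to arrival time, so a single photodetector trace reads out the spectrum. The paper derives the actual FACED transfer function; the sketch below shows only the idealized linear mapping, with an entirely illustrative mapping coefficient:

```python
# Idealized linear frequency-to-time mapping: each spectral slice is delayed
# in proportion to its offset from a reference wavelength. The coefficient
# and reference wavelength are illustrative assumptions, not FACED device
# parameters from the paper.

def spectral_to_temporal(wavelength_nm: float,
                         ref_wavelength_nm: float = 1550.0,
                         ps_per_nm: float = 10.0) -> float:
    """Arrival time (ps) of a spectral component under a linear lambda -> t map."""
    return ps_per_nm * (wavelength_nm - ref_wavelength_nm)
```

Inverting the map, the detector voltage at time t reports the spectral power at wavelength ref + t / (ps_per_nm), which is what enables the single-shot, real-time spectral analysis of femtosecond and picosecond pulses described above.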
Daan Stellinga, 2021
Time-of-flight (ToF) 3D imaging has a wealth of applications, from industrial inspection to movement tracking and gesture recognition. Depth information is recovered by measuring the round-trip flight time of laser pulses, which usually requires projection and collection optics with diameters of several centimetres. In this work we shrink this requirement by two orders of magnitude, and demonstrate near video-rate 3D imaging through multimode optical fibres (MMFs) - the width of a strand of human hair. Unlike conventional imaging systems, MMFs exhibit exceptionally complex light transport resembling that of a highly scattering medium. To overcome this complication, we implement high-speed aberration correction using wavefront shaping synchronised with a pulsed laser source, enabling random-access scanning of the scene at a rate of $\sim$23,000 points per second. Using non-ballistic light we image moving objects several metres beyond the end of a $\sim$40 cm long MMF of 50 $\mu$m core diameter, with millimetric depth resolution, at frame rates of $\sim$5 Hz. Our work extends far-field depth resolving capabilities to ultra-thin micro-endoscopes, and will have a broad range of applications in clinical and remote inspection scenarios.
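The quoted ~5 Hz frame rate follows directly from the random-access point budget: the scan rate divided by the number of points per frame. A back-of-the-envelope check, where the ~4,600-point scan size is an illustrative assumption consistent with the numbers in the abstract:

```python
# Frame-rate budget for a random-access scanning imager: points scanned per
# second divided by points per frame. The 4,600-point scan pattern below is
# an assumed example, not a figure from the paper.

def frame_rate_hz(points_per_second: float, points_per_frame: int) -> float:
    """Frames per second achievable at a given random-access point rate."""
    return points_per_second / points_per_frame


# ~23,000 pts/s over a ~4,600-point pattern gives the ~5 Hz quoted above.
rate = frame_rate_hz(23_000, 4_600)
```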
This paper presents a field-programmable gate array (FPGA) design of a segmentation algorithm based on convolutional neural network (CNN) that can process light detection and ranging (LiDAR) data in real-time. For autonomous vehicles, drivable region segmentation is an essential step that sets up the static constraints for planning tasks. Traditional drivable region segmentation algorithms are mostly developed on camera data, so their performance is susceptible to the light conditions and the qualities of road markings. LiDAR sensors can obtain the 3D geometry information of the vehicle surroundings with high precision. However, it is a computational challenge to process a large amount of LiDAR data in real-time. In this paper, a convolutional neural network model is proposed and trained to perform semantic segmentation using data from the LiDAR sensor. An efficient hardware architecture is proposed and implemented on an FPGA that can process each LiDAR scan in 17.59 ms, which is much faster than the previous works. Evaluated using Ford and KITTI road detection benchmarks, the proposed solution achieves both high accuracy in performance and real-time processing in speed.
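The 17.59 ms per-scan latency can be translated into sustained throughput, assuming scans are processed back to back (an assumption; the paper reports only the per-scan time):

```python
# Converting the reported 17.59 ms per-LiDAR-scan FPGA processing time into
# sustained throughput, under the assumption of back-to-back processing.

def scans_per_second(latency_ms: float) -> float:
    """Sustained scan throughput implied by a fixed per-scan latency."""
    return 1000.0 / latency_ms


rate = scans_per_second(17.59)  # ~56.9 scans/s
```

Since typical spinning LiDAR sensors produce 10-20 scans per second, a ~57 scans/s processing rate comfortably meets the real-time requirement the paper targets.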
Visualizing ultrafast dynamics at the atomic scale requires time-resolved pump-probe characterization with femtosecond temporal resolution. For single-shot ultrafast electron diffraction (UED) with fully relativistic electron bunch probes, existing techniques are limited by the achievable electron probe bunch length, charge, and timing jitter. We present the first experimental demonstration of pump-probe UED with THz-driven compression and time-stamping that enable UED probes with unprecedented temporal resolution. This technique utilizes two counter-propagating quasi-single-cycle THz pulses generated from two OH-1 organic crystals coupled into an optimized THz compressor structure. Ultrafast dynamics of photoexcited bismuth films show an improved temporal resolution from 178 fs down to 85 fs when the THz-compressed UED probes are used with no time-stamping correction. Furthermore, we use a novel time-stamping technique to reveal transient oscillations in the dynamical response of THz-excited single-crystal gold films previously inaccessible by standard UED, achieving a time-stamped temporal resolution down to 5 fs.