
A trillion frames per second: the techniques and applications of light-in-flight photography

Added by: Daniele Faccio
Publication date: 2018
Field: Physics
Language: English





Cameras capable of capturing video at a trillion frames per second make it possible to freeze light in motion, a highly counterintuitive capability when compared with our everyday experience, in which light appears to travel instantaneously. By combining this capability with computational imaging techniques, new imaging opportunities emerge, such as three-dimensional imaging of scenes hidden behind a corner, the study of relativistic distortion effects, imaging through diffusive media, and imaging of ultrafast optical processes such as laser ablation, supercontinuum and plasma generation. We provide an overview of the main techniques that have been developed for ultra-high-speed photography, with a particular focus on 'light-in-flight' imaging, i.e. applications where the key element is imaging light itself at frame rates that freeze its motion and thereby extract information that would otherwise be blurred out and lost.
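
To put these frame rates in perspective, the short back-of-the-envelope sketch below (our own Python illustration, not from the paper) computes how far light travels during a single frame interval; at a trillion frames per second a pulse advances only a fraction of a millimetre between frames, which is why its motion appears frozen:

```python
# Back-of-the-envelope numbers for light-in-flight imaging (illustrative only).
C = 299_792_458.0  # speed of light in vacuum, m/s

def light_travel_per_frame(frames_per_second: float) -> float:
    """Distance light travels during one frame interval, in metres."""
    return C / frames_per_second

for fps in (1e9, 1e12, 1e13):
    mm = light_travel_per_frame(fps) * 1e3
    print(f"{fps:.0e} fps -> light moves {mm:.3f} mm per frame")
```

At 1e12 fps the output is roughly 0.3 mm per frame, i.e. light is effectively frozen on millimetre scales.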



Related research

Ultrafast imaging is a powerful tool for studying space-time dynamics in photonic materials, plasma physics, living cells, and neural activity. Pushing the imaging speed towards the quantum limit could reveal extraordinary scenes concerning the debated quantization of life and intelligence, or the wave-particle duality of light. However, previous designs for ultrafast photography are intrinsically limited in framing speed. Here, we introduce a new technique based on a multiple non-collinear optical parametric amplifier (MOPA) principle, which readily pushes the frame rate into the regime of ten trillion frames per second with a spatial resolution better than 30 line pairs per millimeter. MOPA imaging is applied to record, for the first time, the femtosecond early evolution of a laser-induced plasma grating in air. Our approach avoids the intrinsic limitations of previous methods and can therefore be optimized for higher speed and resolution, opening the way to approaching quantum limits and testing fundamental physics.
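
For orientation, the quoted figures translate into an inter-frame delay and a smallest resolvable feature as follows (our own illustrative arithmetic, not code from the paper):

```python
# Illustrative arithmetic for the quoted MOPA figures (not code from the paper).
frame_rate = 1e13                      # ten trillion frames per second
frame_interval_fs = 1e15 / frame_rate  # femtoseconds between frames
lp_per_mm = 30                         # quoted spatial resolution
feature_um = 1e3 / (2 * lp_per_mm)     # half-period of the finest line pair, microns

print(f"inter-frame delay: {frame_interval_fs:.0f} fs")    # 100 fs
print(f"smallest resolvable feature: {feature_um:.1f} um")  # ~16.7 um
```
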
Slow-light media are of interest in the context of quantum computing and enhanced measurement of quantum effects, with particular emphasis on using slow light with single photons. We use light-in-flight imaging with a single-photon avalanche diode (SPAD) camera array to image, in situ, pulse propagation through a slow-light medium consisting of heated rubidium vapour. Light-in-flight imaging of slow-light propagation enables direct visualisation of a series of physical effects, including the simultaneous observation of spatial pulse compression and temporal pulse dispersion. Additionally, the single-photon sensitivity of the camera allows observation of the group velocity of single photons, with measured single-photon fractional delays greater than 1 over 1 cm of propagation.
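
As a rough illustration of how a group velocity follows from such a delay measurement, the sketch below uses assumed numbers (a 1 ns pulse and a fractional delay of 1.2) rather than the paper's actual data:

```python
# Sketch of how group velocity follows from a measured slow-light delay
# (hypothetical numbers; the paper reports fractional delays > 1 over 1 cm).
C = 299_792_458.0  # speed of light in vacuum, m/s

def group_velocity(length_m: float, extra_delay_s: float) -> float:
    """Group velocity given the delay accumulated relative to vacuum propagation."""
    return length_m / (length_m / C + extra_delay_s)

L = 0.01                 # 1 cm of rubidium vapour
pulse_duration = 1e-9    # assumed 1 ns pulse
fractional_delay = 1.2   # assumed: delay in units of the pulse duration
delay = fractional_delay * pulse_duration

vg = group_velocity(L, delay)
print(f"group velocity ~ {vg:.3e} m/s, group index ~ {C / vg:.1f}")
```

With these assumed values the pulse travels at roughly 8e6 m/s, a group index of about 37.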
Light-in-flight (LIF) imaging is the measurement and reconstruction of light's path as it moves and interacts with objects. It is well known that relativistic effects can result in apparent velocities that differ significantly from the speed of light. Less well known, however, is that Rayleigh scattering and the effects of the imaging optics can cause the observed intensity to change by several orders of magnitude along light's path. We develop a model that enables us to correct for all of these effects, so that we can accurately invert the observed data and reconstruct the true intensity-corrected optical path of a laser pulse as it travels in air. We demonstrate the validity of our model by observing the photon arrival time and intensity distribution obtained from single-photon avalanche detector (SPAD) array data for a laser pulse propagating towards and away from the camera. We can then reconstruct the true intensity-corrected path of the light in four dimensions (three spatial dimensions and time).
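
The core geometric effect such reconstructions must correct for can be sketched in a few lines: the camera records not the true time of a scattering event but that time plus the light's travel time to the sensor. The following toy example (ours, not the authors' code) shows how this compresses arrival times, and hence inflates the apparent speed, for a pulse approaching the camera:

```python
# Minimal sketch of the camera-frame arrival-time geometry behind light-in-flight
# reconstruction (our illustration, not the authors' model).
import numpy as np

C = 299_792_458.0                      # speed of light, m/s
camera = np.array([0.0, 0.0, 0.0])

def observed_time(point, true_time):
    """Camera timestamp: scattering time plus travel time to the sensor."""
    return true_time + np.linalg.norm(point - camera) / C

# A pulse travelling along +x, passing 1 m to the side of the camera,
# starting 0.9 m back so that it approaches the camera along the path.
ts = np.linspace(0.0, 3e-9, 4)                          # true times, s
xs = -0.9 + C * ts                                      # positions along x, m
path = np.stack([xs, np.full_like(ts, 1.0), np.zeros_like(ts)], axis=1)

t_obs = np.array([observed_time(p, t) for p, t in zip(path, ts)])
seg = np.linalg.norm(np.diff(path, axis=0), axis=1)     # true distance per step
apparent_speed = seg / np.diff(t_obs)                   # what the camera infers
print(apparent_speed / C)  # > 1 while approaching: arrival times are compressed
```
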
Laser-based ranging (LiDAR), already used ubiquitously in robotics, industrial monitoring and geodesy, is a key sensor technology for future autonomous driving, and has been employed in nearly all successful implementations of autonomous vehicles to date. Coherent laser ranging allows long-range detection, operates eye-safe, is immune to crosstalk, and yields simultaneous velocity and distance information. Yet for actual deployment in vehicles, video-frame-rate requirements for object detection, classification and sensor fusion mandate megapixel-per-second measurement speeds. Such pixel rates cannot be attained with current coherent single laser-detector architectures at high-definition range imaging, making parallelization essential. A megapixel-class coherent LiDAR has not been demonstrated, impeded by the arduous requirement of integrating large banks of detectors and digitizers on the receiver side on chip. Here we report hardware-efficient coherent laser ranging at megapixel-per-second imaging rates. This is achieved using a novel concept for massively parallel coherent laser ranging that requires only a single laser and a single photoreceiver, yet simultaneously records more than 64 channels, each with distance and velocity measurements, attaining an unprecedented rate of 5 megapixels per second. Heterodyning two offset chirped soliton microcombs on a single coherent receiver yields an interferogram containing both distance and velocity information for all channels, thereby alleviating the need to separate, detect and digitize each channel individually. The reported LiDAR implementation is hardware-efficient, compatible with photonic integration, and demonstrates the significant advantages in acquisition speed, complexity and cost afforded by the convergence of optical telecommunication and metrology technologies.
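
Distance/velocity extraction in chirped coherent ranging follows the standard FMCW relations: the mean of the up- and down-chirp beat frequencies encodes range, and their difference encodes the Doppler shift. Below is a minimal sketch with assumed parameters (1 GHz chirp bandwidth, 10 us period, 1550 nm carrier), illustrating the arithmetic rather than the microcomb hardware itself:

```python
# Sketch of standard FMCW (triangular chirp) range/velocity extraction;
# illustrative numbers, not the microcomb system described above.
C = 299_792_458.0  # speed of light, m/s

def range_and_velocity(f_up, f_down, bandwidth, chirp_period, wavelength):
    """Range from the mean beat frequency, radial velocity from the Doppler split."""
    distance = C * chirp_period * (f_up + f_down) / (4 * bandwidth)
    velocity = wavelength * (f_down - f_up) / 4
    return distance, velocity

# Assumed beat frequencies for one channel.
d, v = range_and_velocity(f_up=20.5e6, f_down=46.3e6, bandwidth=1e9,
                          chirp_period=10e-6, wavelength=1550e-9)
print(f"range ~ {d:.1f} m, radial velocity ~ {v:.1f} m/s")  # ~50 m, ~10 m/s
```
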
Chang Liu, Xiaolin Wu (2021)
Nighttime photographers are often troubled by light pollution from unwanted artificial lights. Artificial light, scattered by aerosols in the atmosphere, can drown out starlight and degrade the quality of nighttime images by reducing contrast and dynamic range and causing haze. In this paper we develop a physically-based light pollution reduction (LPR) algorithm that can substantially alleviate these degradations of perceptual quality and restore the pristine state of the night sky. The key to the success of the proposed LPR algorithm is an inverse method that estimates the spatial radiance distribution and spectral signature of ground-level artificial lights. Extensive experiments are carried out to evaluate the efficacy and limitations of the LPR algorithm.
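
Physically-based approaches of this kind typically invert a scattering model of the form I = J*t + A*(1 - t), where J is the clean scene radiance, A the scattered artificial light and t the transmission. The following is a minimal sketch under that assumption, not the authors' LPR algorithm:

```python
# Sketch of inverting the standard atmospheric scattering model
# I = J * t + A * (1 - t); illustrative, not the authors' LPR algorithm.
import numpy as np

def invert_haze(observed: np.ndarray, airlight: np.ndarray,
                transmission: np.ndarray, t_min: float = 0.1) -> np.ndarray:
    """Recover scene radiance J from the observed image I."""
    t = np.maximum(transmission, t_min)  # avoid amplifying noise where t ~ 0
    return (observed - airlight * (1.0 - t)) / t

# Toy example: a single pixel polluted by scattered artificial light.
I = np.array([0.55, 0.48, 0.40])  # observed RGB
A = np.array([0.80, 0.60, 0.40])  # assumed radiance of scattered artificial light
t = np.array([0.50, 0.50, 0.50])  # assumed per-channel transmission
print(invert_haze(I, A, t))       # estimate of the unpolluted night sky
```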