Single-photon light detection and ranging (LiDAR), offering single-photon sensitivity and picosecond time resolution, has been widely adopted for active imaging applications. Long-range active imaging remains a great challenge, because the spatial resolution degrades significantly with imaging range due to the diffraction limit of the optics, and only weak echo signal photons return, mixed with strong background noise. Here we propose and demonstrate a photon-efficient LiDAR approach that can achieve sub-Rayleigh resolution imaging over long ranges. This approach exploits fine sub-pixel scanning and a deconvolution algorithm tailored to this long-range application. Using this approach, we experimentally demonstrate active three-dimensional (3D) single-photon imaging by recognizing different postures of a mannequin at a stand-off distance of 8.2 km, both in daylight and at night. The observed spatial (transverse) resolution is about 5.5 cm at 8.2 km, which is about twice the system's resolution and beats the optical system's Rayleigh criterion. The results are valuable for geosciences and target recognition over long ranges.
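The two ingredients named above lend themselves to a compact illustration. The Python sketch below interleaves 2x2 sub-pixel-shifted scans onto a finer grid and then applies Richardson-Lucy deconvolution with a known system PSF; the scan pattern, function names, and the choice of Richardson-Lucy are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def interleave_subpixel_scans(scans):
    """Merge a 2x2 set of sub-pixel-shifted count maps (shift order
    (0,0), (0,1), (1,0), (1,1) in half-pixel units) onto a grid with
    twice the sampling density."""
    h, w = scans[0].shape
    fine = np.zeros((2 * h, 2 * w))
    fine[0::2, 0::2] = scans[0]
    fine[0::2, 1::2] = scans[1]
    fine[1::2, 0::2] = scans[2]
    fine[1::2, 1::2] = scans[3]
    return fine

def richardson_lucy(counts, psf, n_iter=50):
    """Richardson-Lucy deconvolution, appropriate for Poissonian
    photon-count data; `psf` is the system response sampled on the
    fine (sub-pixel) grid."""
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full(counts.shape, counts.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = counts / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```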
Long-range active imaging has widespread applications in remote sensing and target recognition. Single-photon light detection and ranging (lidar) has been shown to offer high sensitivity and temporal resolution. On the application front, however, the operating range of practical single-photon lidar systems is limited to about tens of kilometers through the Earth's atmosphere, mainly because the weak echo signal is mixed with high background noise. Here, we present a compact coaxial single-photon lidar system capable of 3D imaging at up to 201.5 km. This is achieved by using high-efficiency optical devices for collection and detection, together with what we believe is a new noise-suppression technique that is efficient for long-range applications. We show that photon-efficient computational algorithms enable accurate 3D imaging over hundreds of kilometers with as few as 0.44 signal photons per pixel. The results represent a significant step toward practical, low-power lidar over extra-long ranges.
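The abstract does not detail the computational step; as a rough, generic sketch of photon-efficient depth estimation under such sparse-photon conditions, the following Python applies a matched filter (built from the instrument response) to each pixel's time-tag histogram and censors pixels with too few detections. This is an assumed baseline, not necessarily the authors' algorithm.

```python
import numpy as np

def estimate_depth(time_tags, irf, n_bins, bin_width_ps, min_counts=3):
    """Per-pixel depth from a handful of photon time tags.

    time_tags : 2D nested sequence of 1D arrays of arrival-time bin indices
    irf       : 1D instrument response function (matched-filter kernel)
    Returns a depth map in metres; censored pixels are set to NaN.
    """
    c = 3e8  # speed of light, m/s
    h, w = len(time_tags), len(time_tags[0])
    depth = np.full((h, w), np.nan)
    kernel = irf / irf.sum()
    for i in range(h):
        for j in range(w):
            tags = np.asarray(time_tags[i][j], dtype=int)
            if tags.size < min_counts:
                continue  # too few photons: treat as background only
            hist = np.bincount(tags, minlength=n_bins)[:n_bins]
            score = np.correlate(hist, kernel, mode="same")  # matched filter
            peak_bin = int(np.argmax(score))
            depth[i, j] = 0.5 * c * peak_bin * bin_width_ps * 1e-12
    return depth
```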
Mid-wave infrared (MWIR) cameras with large pixel counts are extremely expensive compared with their visible-light counterparts; thus, super-resolution imaging (SRI) for MWIR by increasing the effective number of imaging pixels has been a research hotspot in recent years. Over the last decade, with the extensive investigation of compressed sensing (CS) methods, focal plane array (FPA) based compressive imaging in MWIR has developed rapidly for SRI. This paper presents long-distance super-resolution FPA compressive imaging in MWIR with an improved calibration method and improved imaging quality. By using CS, we measure and compute the calibration matrix of the optical system efficiently and precisely, which improves the imaging contrast and signal-to-noise ratio (SNR) compared with previous work. We also achieve 4x4 super-resolution reconstruction of long-distance objects, which reaches the design limit of the system in our experiment.
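As an illustration of how a calibrated measurement matrix can drive a CS reconstruction, here is a minimal ISTA (iterative shrinkage-thresholding) solver in Python with an l1 sparsity prior; the solver choice and the sparsity-in-pixel-basis assumption are ours, standing in for whatever reconstruction the system actually uses.

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=200):
    """Iterative shrinkage-thresholding for y ~= A @ x with sparse x.

    A : (m, n) calibrated measurement matrix (m measurements, n
        super-resolved pixels, typically n > m)
    y : (m,) measured coded FPA intensities
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of A^T A
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of 0.5*||Ax - y||^2
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

In such a scheme, the coded FPA measurements would be stacked into y and the corresponding calibrated rows into A, so that x holds the 4x4-upsampled scene patch.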
We present a scheme for the nondestructive and ultra-sensitive imaging of Rydberg atoms within an ensemble of cold probe atoms. This is made possible by interaction-enhanced electromagnetically induced transparency at off-resonance, which opens an extremely narrow zero-absorption window with enhanced 100% transmission. By probing the transmission rate we obtain the distribution of Rydberg atoms with both ultra-high spatial resolution and fast response, ensuring precise real-time imaging. The increased resolution compared with previous work allows us to accurately determine atom positions at the nanometer scale by adjusting only the probe detuning. This new type of interaction-enhanced transmission imaging can be applied to other impurity systems with strong many-body interactions, and is promising for developing nanoscale super-resolution microscopy.
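The position readout via detuning can be illustrated with a toy calculation: assuming, for illustration only, that the enhanced-transmission window appears where the probe detuning compensates a van der Waals shift V(r) = C6/r^6 (the actual interaction form and sign conventions depend on the scheme), the detuning of the window maps directly to the Rydberg-probe distance.

```python
import numpy as np

HBAR = 1.054571817e-34  # J*s

def distance_from_detuning(delta, c6):
    """Toy inversion (assumed model): if the transmission window sits at
    probe detuning `delta` (rad/s) where it compensates V(r) = C6/r**6,
    then r = (C6 / (hbar*|delta|))**(1/6). `c6` in J*m^6; returns metres."""
    return (c6 / (HBAR * np.abs(delta))) ** (1.0 / 6.0)
```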
To increase the signal-to-noise ratio of measurements, most imaging detectors sacrifice resolution by enlarging pixel size within a confined sensor area. Although the pixel super-resolution (PSR) technique enables resolution enhancement in, for example, digital holographic imaging, it suffers from unsatisfactory reconstruction quality. In this work, we report a high-fidelity plug-and-play optimization method for PSR phase retrieval, termed PNP-PSR. It decomposes PSR reconstruction into independent sub-problems based on the generalized alternating projection framework. An alternating projection operator and an enhancing neural network are derived to handle measurement fidelity and statistical prior regularization, respectively. In this way, PNP-PSR combines the advantages of the individual operators, achieving both high efficiency and noise robustness. We compare PNP-PSR with existing PSR phase retrieval algorithms in a series of simulations and experiments, and PNP-PSR outperforms them by as much as 11 dB in PSNR. The enhanced imaging fidelity enables an order-of-magnitude improvement in cell-counting precision.
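The abstract describes PNP-PSR only at a high level; the Python sketch below is a hedged stand-in for such a plug-and-play alternating scheme: pixel binning models the large detector pixels, an angular-spectrum hop moves between object and sensor planes, the fidelity step rescales the field so that its binned intensity matches the measurement, and a median filter takes the place of the enhancing network. All of these concrete choices are assumptions, and a single measurement is used only to keep the example short.

```python
import numpy as np
from scipy.ndimage import median_filter

def propagate(field, wavelength, dx, z):
    """Angular-spectrum propagation of a complex field over distance z."""
    h, w = field.shape
    fy = np.fft.fftfreq(h, dx)[:, None]
    fx = np.fft.fftfreq(w, dx)[None, :]
    arg = np.maximum(1.0 / wavelength**2 - fx**2 - fy**2, 0.0)
    kz = 2.0 * np.pi * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def bin_pixels(img, f):
    """Sum f x f blocks: models integration over large detector pixels."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).sum(axis=(1, 3))

def pnp_psr(meas, wavelength, dx, z, f=4, n_iter=100):
    """Alternate a measurement-fidelity projection at the sensor plane
    with a denoising prior at the object plane. `meas` is a measured
    low-resolution intensity (H/f x W/f); a median filter stands in for
    the enhancing network."""
    h, w = meas.shape[0] * f, meas.shape[1] * f
    obj = np.ones((h, w), dtype=complex)                 # flat initial guess
    up = lambda a: np.kron(a, np.ones((f, f)))           # nearest upsampling
    for _ in range(n_iter):
        sensor = propagate(obj, wavelength, dx, z)
        binned = bin_pixels(np.abs(sensor) ** 2, f)
        scale = np.sqrt(meas / np.maximum(binned, 1e-12))
        sensor = sensor * up(scale)                      # fidelity projection
        obj = propagate(sensor, wavelength, dx, -z)      # back-propagate
        amp = median_filter(np.abs(obj), size=3)         # prior (denoiser) step
        pha = median_filter(np.angle(obj), size=3)
        obj = amp * np.exp(1j * pha)
    return obj
```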
Deep convolutional neural networks (CNNs) have drawn great attention in image super-resolution (SR). Recently, the visual attention mechanism, which exploits both feature importance and contextual cues, has been introduced to image SR and has proved effective in improving CNN-based SR performance. In this paper, we make a thorough investigation of the attention mechanisms in an SR model and show how simple and effective modifications of these ideas improve on the state of the art. We further propose a unified approach called multi-grained attention networks (MGAN), which fully exploits the advantages of multi-scale and attention mechanisms in SR tasks. In our method, the importance of each neuron is computed from its surrounding regions in a multi-grained fashion and then used to adaptively re-scale the feature responses. More importantly, the channel attention and spatial attention strategies of previous methods can essentially be considered two special cases of our method. We also introduce multi-scale dense connections to extract image features at multiple scales and to capture the features of different layers through dense skip connections. Ablation studies on benchmark datasets demonstrate the effectiveness of our method. In comparison with other state-of-the-art SR methods, our method shows superiority in terms of both accuracy and model size.
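As a rough illustration of computing per-neuron importance from surrounding regions at several granularities and using it to re-scale feature responses, here is a minimal PyTorch module; the module name, the pooling-based context, and the region sizes are illustrative assumptions rather than the MGAN architecture itself.

```python
import torch
import torch.nn as nn

class RegionAttention(nn.Module):
    """Illustrative multi-grained attention: each neuron's weight is
    computed from average statistics of its surrounding regions at
    several neighbourhood sizes (a hypothetical simplification)."""
    def __init__(self, channels, region_sizes=(3, 7, 11)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.AvgPool2d(k, stride=1, padding=k // 2) for k in region_sizes]
        )
        self.fuse = nn.Conv2d(channels * len(region_sizes), channels, kernel_size=1)

    def forward(self, x):
        ctx = torch.cat([p(x) for p in self.pools], dim=1)   # regional statistics
        weights = torch.sigmoid(self.fuse(ctx))              # per-neuron importance
        return x * weights                                    # adaptive re-scaling
```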