
Single-shot Compressed 3D Imaging by Exploiting Random Scattering and Astigmatism

Posted by: Dr. Qiong Gao
Publication date: 2021
Research language: English





Based on point spread function (PSF) engineering and the astigmatism introduced by a pair of cylindrical lenses, a novel compressed imaging mechanism is proposed to achieve single-shot incoherent 3D imaging. The speckle-like PSF of the imaging system is sensitive to axial shift, which makes it feasible to reconstruct a 3D image by solving an optimization problem with a sparsity constraint. Using experimentally calibrated PSFs, the proposed method is demonstrated on a synthetic 3D point object and a real 3D object, and the images in different axial slices are reconstructed faithfully. Moreover, 3D multispectral compressed imaging is explored with the same system, and the results obtained with a synthetic point object are satisfactory. Because of the inherent compatibility between compression in the spectral and axial dimensions, the proposed mechanism has the potential to serve as a unified framework for multi-dimensional compressed imaging.
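The reconstruction described above, solving a sparsity-constrained optimization with experimentally calibrated, depth-dependent PSFs, can be illustrated with a short numerical sketch. The snippet below is not the authors' code: it assumes a simple shift-invariant forward model per axial slice, FFT-based circular convolution, and an ISTA-style proximal gradient solver; all array shapes, step sizes, and regularization weights are illustrative assumptions.

```python
# Minimal sketch of single-shot 3D recovery by sparse deconvolution with
# depth-dependent PSFs (ISTA). All names, sizes, and parameters are
# illustrative assumptions, not the authors' implementation.
import numpy as np

def forward(volume, psfs):
    """Sum of 2D convolutions: each axial slice is blurred by its own
    calibrated speckle PSF and the results superimpose on the sensor."""
    img = np.zeros(volume.shape[1:])
    for z in range(volume.shape[0]):
        img += np.real(np.fft.ifft2(np.fft.fft2(volume[z]) * np.fft.fft2(psfs[z])))
    return img

def adjoint(image, psfs):
    """Adjoint of the forward model: correlate the 2D measurement with each PSF."""
    vol = np.zeros((psfs.shape[0],) + image.shape)
    F = np.fft.fft2(image)
    for z in range(psfs.shape[0]):
        vol[z] = np.real(np.fft.ifft2(F * np.conj(np.fft.fft2(psfs[z]))))
    return vol

def ista_3d(measurement, psfs, lam=1e-2, step=1e-1, n_iter=200):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 with x >= 0 by
    proximal gradient descent (soft-thresholding)."""
    x = np.zeros((psfs.shape[0],) + measurement.shape)
    for _ in range(n_iter):
        residual = forward(x, psfs) - measurement
        x = x - step * adjoint(residual, psfs)
        x = np.maximum(np.abs(x) - lam * step, 0.0) * np.sign(x)  # soft threshold
        x = np.maximum(x, 0.0)                                    # non-negativity
    return x

# Usage (hypothetical data): psfs has shape (n_depths, H, W), y has shape (H, W)
# volume_est = ista_3d(y, psfs)
```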


Read also

Fluorescence imaging is indispensable to biology and neuroscience. The need for large-scale imaging in freely behaving animals has further driven the development of miniaturized microscopes (miniscopes). However, conventional microscopes and miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth of field, and inability to resolve 3D distributed emitters. Here, we present a Computational Miniature Mesoscope (CM$^2$) that overcomes these bottlenecks and enables single-shot 3D imaging across an 8 $\times$ 7-mm$^2$ field of view and 2.5-mm depth of field, achieving 7-$\mu$m lateral resolution and better than 200-$\mu$m axial resolution. Notably, the CM$^2$ has a compact, lightweight design that integrates a microlens array for imaging and an LED array for excitation in a single platform. Its expanded imaging capability is enabled by computational imaging that augments the optics with algorithms. We experimentally validate the mesoscopic 3D imaging capability on volumetrically distributed fluorescent beads and fibers. We further quantify the effects of bulk scattering and background fluorescence in phantom experiments.
Multispectral imaging plays an important role in many applications, from astronomical imaging and earth observation to biomedical imaging. However, current technologies are complex, relying on multiple alignment-sensitive components and on spatial and spectral parameters predetermined by manufacturers. Here, we demonstrate a single-shot multispectral imaging technique that gives flexibility to end-users with a very simple optical setup, thanks to the spatial correlation and spectral decorrelation of speckle patterns. These seemingly random speckle patterns are point spread functions (PSFs) generated by light from point sources propagating through a strongly scattering medium. The spatial correlation of the PSFs allows image recovery with deconvolution techniques, while their spectral decorrelation allows them to play the role of tunable spectral filters in the deconvolution process. Our demonstrations, which exploit the optical physics of strongly scattering media together with computational imaging, present a highly cost-effective approach to multispectral imaging.
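As a rough illustration of how spectrally decorrelated speckle PSFs can act as tunable filters in a deconvolution, the sketch below applies per-wavelength Wiener deconvolution of a single camera frame with each calibrated PSF. It is a minimal sketch under simplifying assumptions (shift-invariant PSFs, a scalar regularization constant), not the authors' reconstruction pipeline.

```python
# Minimal sketch: recover per-wavelength images from one speckle frame by
# Wiener deconvolution, relying on the spectral decorrelation of the PSFs.
# Variable names and the regularization constant are illustrative assumptions.
import numpy as np

def wiener_deconvolve(measurement, psf, eps=1e-3):
    """Single-channel Wiener deconvolution in the Fourier domain."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=measurement.shape)
    Y = np.fft.fft2(measurement)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(X))

def multispectral_recover(measurement, psf_stack, eps=1e-3):
    """Because PSFs at different wavelengths are mutually decorrelated,
    deconvolving the same frame with the PSF calibrated at wavelength k
    mostly recovers the object component at that wavelength."""
    return np.stack([wiener_deconvolve(measurement, psf, eps) for psf in psf_stack])

# Usage (hypothetical): psf_stack has shape (n_wavelengths, H, W)
# channels = multispectral_recover(camera_frame, psf_stack)
```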
Non-invasive optical imaging techniques are essential diagnostic tools in many fields. Although various recent methods have been proposed to utilize and control light in multiple scattering media, non-invasive optical imaging through and inside scattering layers across a large field of view remains elusive due to the physical limits set by the optical memory effect, especially without wavefront shaping techniques. Here, we demonstrate an approach that enables non-invasive fluorescence imaging behind scattering layers with fields of view extending well beyond the optical memory effect. The method consists of demixing the speckle patterns emitted by a fluorescent object under variable unknown random illumination, using matrix factorization and a novel fingerprint-based reconstruction. Experimental validation shows the efficiency and robustness of the method with various fluorescent samples, covering a field of view up to three times the optical memory effect range. Our non-invasive imaging technique is simple, requires neither a spatial light modulator nor a guide star, and can be generalized to a wide range of incoherent contrast mechanisms and illumination schemes.
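A minimal sketch of the demixing step is given below, using off-the-shelf non-negative matrix factorization (scikit-learn's NMF) to separate a stack of frames recorded under varying illumination into per-emitter speckle fingerprints and their illumination weights. The fingerprint-based localization described in the paper is only hinted at in a comment; component counts and solver settings are assumptions.

```python
# Minimal sketch of demixing speckle frames acquired under varying unknown
# illumination with non-negative matrix factorization (NMF). It illustrates
# only the factorization step; shapes and parameters are assumptions.
import numpy as np
from sklearn.decomposition import NMF

def demix_speckles(frames, n_emitters):
    """frames: (n_frames, H, W) camera images, each a mixture of the speckle
    fingerprints of the fluorescent emitters with unknown weights.
    Returns (weights, fingerprints): weights is (n_frames, n_emitters) and
    fingerprints is (n_emitters, H, W)."""
    n_frames, H, W = frames.shape
    model = NMF(n_components=n_emitters, init="nndsvda", max_iter=500)
    weights = model.fit_transform(frames.reshape(n_frames, H * W))
    fingerprints = model.components_.reshape(n_emitters, H, W)
    return weights, fingerprints

# Usage (hypothetical): weights, fps = demix_speckles(frames, n_emitters=20)
# Each recovered fingerprint could then be localized, e.g. by correlating it
# against a reference speckle pattern, to place the emitter in the object plane.
```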
One-shot spectral imaging, which obtains spectral information from different points in space at one time, has always been difficult to achieve and is extremely important for both fundamental scientific research and practical applications. In this study, one-shot ultraspectral imaging realized by fitting thousands of micro-spectrometers on a chip is proposed and demonstrated. Exotic light modulation is achieved by using a reconfigurable metasurface supercell, which enables 155,216 image-adaptive micro-spectrometers while simultaneously guaranteeing the spectral-pixel density and the reconstructed spectral quality. By constructing a compressive-sensing algorithm, the device can reconstruct ultraspectral images ($\Delta\lambda/\lambda \sim 0.001$) covering a 300-nm-wide visible spectrum with an ultra-high center-wavelength accuracy of 0.04-nm standard deviation and a spectral resolution of 0.8 nm. This scheme can be extended to almost any commercial camera with different spectral bands to switch seamlessly between imaging and spectral imaging, and it opens up a new space for applications that combine spectral analysis with image recognition and intellisense.
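The per-pixel spectral reconstruction can be posed as a small sparsity-regularized inverse problem, as sketched below with scikit-learn's Lasso. The calibrated response matrix, its dimensions, and the regularization weight are hypothetical placeholders; the authors' actual compressive-sensing algorithm and sparsifying basis are not reproduced here.

```python
# Minimal sketch: recover the spectrum at one image pixel from the readouts of
# the surrounding micro-spectrometer supercell, posed as a sparsity-regularized
# inverse problem. All matrices, sizes, and weights are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def recover_spectrum(readouts, response_matrix, alpha=1e-3):
    """readouts: (n_filters,) intensities measured behind the metasurface units.
    response_matrix: (n_filters, n_wavelengths) calibrated transmission spectra.
    Returns a non-negative, sparse estimate of the incident spectrum."""
    solver = Lasso(alpha=alpha, positive=True, max_iter=5000)
    solver.fit(response_matrix, readouts)
    return solver.coef_

# Usage (hypothetical): spectrum = recover_spectrum(y, T)  # y: (64,), T: (64, 301)
```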
Under weak illumination, tracking and imaging a moving object is difficult. By spatially collecting the signal, single-pixel imaging schemes promise image reconstruction from low photon flux. However, because such schemes require a large number of samplings, clearly imaging moving objects remains an essential problem. Here we present the principle of a single-pixel tracking and imaging method. The velocity vector of the object is obtained from the temporal correlation of the bucket signals in a typical computational ghost imaging system, and the illumination beam is then steered accordingly. Taking the velocity into account, both the trajectory and a clear image of the object are obtained during its evolution. Since tracking is achieved with the bucket signals independently, the scheme can capture a moving object even when its displacement within each sampling interval is larger than the resolution of the optical system. Experimentally, our method works well with the average number of detected photons down to 1.88 photons/speckle.
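To make the motion-compensation idea concrete, the sketch below shows only the image-recovery step once a velocity estimate is available: each illumination pattern is shifted back along the estimated trajectory before the standard second-order ghost-imaging correlation, so the moving object is reconstructed without blur. The velocity-estimation step from temporal correlations of the bucket signals is not reproduced, and all names and values are illustrative assumptions.

```python
# Minimal sketch of motion-compensated computational ghost imaging given an
# estimated velocity. Names and the pixel-unit velocity are assumptions.
import numpy as np

def motion_compensated_gi(patterns, buckets, velocity):
    """patterns: (n, H, W) illumination speckle patterns; buckets: (n,) bucket
    detector values; velocity: (vy, vx) in pixels per sampling interval."""
    n = len(buckets)
    compensated = np.empty_like(patterns)
    for i in range(n):
        # Shift pattern i into the object's co-moving frame.
        dy, dx = -int(round(velocity[0] * i)), -int(round(velocity[1] * i))
        compensated[i] = np.roll(patterns[i], (dy, dx), axis=(0, 1))
    b = buckets - buckets.mean()
    # Second-order correlation <(b - <b>) * I(r)> recovers the object
    # in its co-moving frame.
    return np.tensordot(b, compensated, axes=(0, 0)) / n

# Usage (hypothetical): img = motion_compensated_gi(patterns, buckets, (0.4, -0.2))
```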
