
Miniscope3D: optimized single-shot miniature 3D fluorescence microscopy

Posted by Kyrollos Yanny
Publication date: 2020
Paper language: English





Miniature fluorescence microscopes are a standard tool in systems biology. However, widefield miniature microscopes capture only 2D information, and modifications that enable 3D capabilities increase the size and weight and have poor resolution outside a narrow depth range. Here, we achieve the 3D capability by replacing the tube lens of a conventional 2D Miniscope with an optimized multifocal phase mask at the objective's aperture stop. Placing the phase mask at the aperture stop significantly reduces the size of the device, and varying the focal lengths enables a uniform resolution across a wide depth range. The phase mask encodes the 3D fluorescence intensity into a single 2D measurement, and the 3D volume is recovered by solving a sparsity-constrained inverse problem. We provide methods for designing and fabricating the phase mask and an efficient forward model that accounts for the field-varying aberrations in miniature objectives. We demonstrate a prototype that is 17 mm tall and weighs 2.5 grams, achieving 2.76 $\mu$m lateral and 15 $\mu$m axial resolution across most of the $900 \times 700 \times 390\ \mu m^3$ volume at 40 volumes per second. The performance is validated experimentally on resolution targets, dynamic biological samples, and mouse brain tissue. Compared with existing miniature single-shot volume-capture implementations, our system is smaller and lighter and achieves a more than 2$\times$ better lateral and axial resolution throughout a 10$\times$ larger usable depth range. Our microscope design provides single-shot 3D imaging for applications where a compact platform matters, such as volumetric neural imaging in freely moving animals and 3D motion studies of dynamic samples in incubators and lab-on-a-chip devices.
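For concreteness, the reconstruction step described above can be sketched as a sparsity-constrained deconvolution. The following is a minimal FISTA solver under the simplifying assumption of a shift-invariant forward model, where each depth plane is convolved with its own calibrated PSF (the paper's actual forward model additionally handles field-varying aberrations); the inputs psfs and meas and all parameter values are illustrative placeholders, not the authors' implementation.

import numpy as np

def fista_reconstruct(meas, psfs, tau=1e-3, n_iter=100, step=None):
    """Recover a 3D volume v (Z x H x W) from a single 2D measurement.

    Solves min_v 0.5*||sum_z psf_z * v_z - meas||^2 + tau*||v||_1, v >= 0,
    where * denotes 2D convolution (done here via FFTs).
    """
    Z, H, W = psfs.shape
    # Precompute the OTF of each depth plane's PSF.
    otfs = np.fft.rfft2(np.fft.ifftshift(psfs, axes=(-2, -1)), axes=(-2, -1))

    def forward(v):
        # Sum of per-plane convolutions: the single 2D sensor image.
        return np.fft.irfft2((np.fft.rfft2(v) * otfs).sum(axis=0), s=(H, W))

    def adjoint(r):
        # Correlate the residual with each plane's PSF.
        return np.fft.irfft2(np.fft.rfft2(r)[None] * otfs.conj(), s=(H, W))

    if step is None:
        # Conservative step size: 1 over an upper bound on the Lipschitz constant.
        step = 1.0 / (np.abs(otfs).sum(axis=0).max() ** 2)

    v = np.zeros((Z, H, W))
    y, t = v.copy(), 1.0
    for _ in range(n_iter):
        grad = adjoint(forward(y) - meas)
        # Proximal step: soft-threshold, then project onto the non-negative orthant.
        v_new = np.maximum(y - step * grad - step * tau, 0)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = v_new + ((t - 1) / t_new) * (v_new - v)
        v, t = v_new, t_new
    return v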




Read also

Fluorescence imaging is indispensable to biology and neuroscience. The need for large-scale imaging in freely behaving animals has further driven the development of miniaturized microscopes (miniscopes). However, conventional microscopes and miniscopes are inherently constrained by their limited space-bandwidth product, shallow depth of field, and inability to resolve 3D-distributed emitters. Here, we present a Computational Miniature Mesoscope (CM$^2$) that overcomes these bottlenecks and enables single-shot 3D imaging across an $8 \times 7$-mm$^2$ field-of-view and 2.5-mm depth-of-field, achieving 7-$\mu$m lateral resolution and better than 200-$\mu$m axial resolution. Notably, the CM$^2$ has a compact, lightweight design that integrates a microlens array for imaging and an LED array for excitation in a single platform. Its expanded imaging capability is enabled by computational imaging that augments the optics with algorithms. We experimentally validate the mesoscopic 3D imaging capability on volumetrically distributed fluorescent beads and fibers. We further quantify the effects of bulk scattering and background fluorescence in phantom experiments.
Light field microscopy (LFM) uses a microlens array (MLA) near the sensor plane of a microscope to achieve single-shot 3D imaging of a sample without any moving parts. Unfortunately, the 3D capability of LFM comes with a significant loss of lateral resolution at the focal plane. Placing the MLA near the pupil plane of the microscope, instead of the image plane, can mitigate the artifacts and provide an efficient forward model, at the expense of field-of-view (FOV). Here, we demonstrate improved resolution across a large volume with Fourier DiffuserScope, which uses a diffuser in the pupil plane to encode 3D information, then computationally reconstructs the volume by solving a sparsity-constrained inverse problem. Our diffuser consists of randomly placed microlenses with varying focal lengths; the random positions provide a larger FOV compared to a conventional MLA, and the diverse focal lengths improve the axial depth range. To predict system performance based on diffuser parameters, we establish, for the first time, a theoretical framework and design guidelines, which are verified by numerical simulations. We then build an experimental system that achieves $< 3\ \mu$m lateral and $4\ \mu$m axial resolution over a $1000 \times 1000 \times 280\ \mu$m$^3$ volume. Our diffuser design outperforms the MLA used in LFM, providing more uniform resolution over a larger volume, both laterally and axially.
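To make the diffuser design above concrete, the sketch below generates the pupil-plane phase of randomly placed lenslets with diverse focal lengths, assigning each pixel the quadratic phase of its nearest lenslet (a Voronoi-style tessellation). Every parameter here (grid size, pixel pitch, lenslet count, focal-length range, wavelength) is an illustrative assumption, not the paper's design.

import numpy as np

def multifocal_diffuser_phase(n=512, pitch=2e-6, n_lenslets=100,
                              f_range=(5e-3, 15e-3), wavelength=510e-9, seed=0):
    """Return an n x n phase map (radians) of randomly placed multifocal lenslets."""
    rng = np.random.default_rng(seed)
    coords = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(coords, coords)
    phase = np.zeros((n, n))
    owner_dist = np.full((n, n), np.inf)  # squared distance to the owning lenslet
    centers = rng.uniform(coords.min(), coords.max(), size=(n_lenslets, 2))
    focals = rng.uniform(*f_range, size=n_lenslets)  # diverse focal lengths
    k = 2 * np.pi / wavelength
    for (cx, cy), f in zip(centers, focals):
        r2 = (x - cx) ** 2 + (y - cy) ** 2
        mask = r2 < owner_dist  # pixels closer to this lenslet than to any prior one
        phase[mask] = -k * r2[mask] / (2 * f)  # thin-lens quadratic phase
        owner_dist[mask] = r2[mask]
    return phase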
Fluorescence microscopy has enabled dramatic developments in modern biology by visualizing biological organisms with micrometer-scale resolution. However, due to the diffraction limit, sub-micron/nanometer features are difficult to resolve. While various super-resolution techniques have been developed to achieve nanometer-scale resolution, they often require either an expensive optical setup or specialized fluorophores. In recent years, deep learning has shown the potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images. For accurate results, conventional deep learning techniques require thousands of images as a training dataset. Obtaining large datasets from biological samples is often not feasible due to the photobleaching of fluorophores, phototoxicity, and dynamic processes occurring within the organism. Therefore, achieving deep-learning-based super-resolution using small datasets is challenging. We address this limitation with a new convolutional neural network-based approach that is successfully trained with small datasets and achieves super-resolution images. We captured 750 images in total from 15 different fields-of-view as the training dataset to demonstrate the technique. In each FOV, a single target image is generated using the super-resolution radial fluctuations method. As expected, this small dataset failed to produce a usable model using a traditional super-resolution architecture. However, using the new approach, a network can be trained to achieve super-resolution images from this small dataset. This deep learning model can be applied to other biomedical imaging modalities, such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
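The abstract does not specify the network, but the small-dataset setting can be illustrated with a compact residual CNN and a paired-image training loop in PyTorch. The architecture, loss, and hyperparameters below are generic assumptions for illustration, not the authors' model; loader is assumed to yield (diffraction-limited input, radial-fluctuations target) image pairs.

import torch
import torch.nn as nn

class SmallSRNet(nn.Module):
    """A compact residual CNN: few parameters, to suit small training sets."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual learning: the network predicts a correction to the input.
        return x + self.body(x)

def train(model, loader, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for lowres, target in loader:  # paired low-res input / high-res target
            opt.zero_grad()
            loss = loss_fn(model(lowres), target)
            loss.backward()
            opt.step()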
Fluorescence lifetime imaging microscopy (FLIM) systems are limited by their slow processing speed, low signal-to-noise ratio (SNR), and expensive and challenging hardware setups. In this work, we demonstrate applying a denoising convolutional network to improve FLIM SNR. The network is integrated with an instant FLIM system with fast data acquisition based on analog signal processing, high SNR using high-efficiency pulse modulation, and cost-effective implementation utilizing off-the-shelf radio-frequency components. Our instant FLIM system simultaneously provides intensity, lifetime, and phasor plots in vivo and ex vivo. By integrating image denoising using the trained deep learning model on the FLIM data, accurate FLIM phasor measurements are obtained. The enhanced phasor is then passed through the K-means clustering segmentation method, an unbiased and unsupervised machine learning technique, to separate different fluorophores accurately. Our experimental in vivo mouse kidney results indicate that introducing the deep learning image denoising model before segmentation effectively removes the noise in the phasor compared to existing methods and provides clearer segments. Hence, the proposed deep-learning-based workflow provides fast and accurate automatic segmentation of fluorescence images using instant FLIM. The denoising operation is effective for segmentation when the FLIM measurements are noisy, and the clustering can effectively enhance the detection of biological structures of interest in biomedical imaging applications.
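The phasor segmentation step lends itself to a short sketch: each pixel contributes a point (g, s) in phasor space, and K-means groups those points into fluorophore clusters. The helper below assumes the per-pixel phasor coordinates have already been computed (and denoised); the cluster count and the scikit-learn usage are illustrative assumptions, not the authors' pipeline.

import numpy as np
from sklearn.cluster import KMeans

def segment_phasor(g, s, n_fluorophores=3, seed=0):
    """Cluster pixels by their (g, s) phasor coordinates.

    Returns an integer label image, one label per fluorophore cluster.
    """
    features = np.stack([g.ravel(), s.ravel()], axis=1)  # N x 2 phasor points
    labels = KMeans(n_clusters=n_fluorophores, n_init=10,
                    random_state=seed).fit_predict(features)
    return labels.reshape(g.shape)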
Fluorescence microscopy is an essential tool for the analysis of 3D subcellular structures in tissue. An important step in the characterization of tissue involves nuclei segmentation. In this paper, a two-stage method for segmentation of nuclei using convolutional neural networks (CNNs) is described. In particular, since creating labeled volumes manually for training purposes is not practical due to the size and complexity of the 3D data sets, the paper describes a method for generating synthetic microscopy volumes based on a spatially constrained cycle-consistent adversarial network. The proposed method is tested on multiple real microscopy data sets and outperforms other commonly used segmentation techniques.