
Deep learning-based holographic polarization microscopy

Added by Aydogan Ozcan
Publication date: 2020
Language: English





Polarized light microscopy provides high contrast for birefringent specimens and is widely used as a diagnostic tool in pathology. However, polarization microscopy systems typically operate by analyzing images collected from two or more light paths in different states of polarization, which leads to relatively complex optical designs, high system costs, or the need for experienced technicians. Here, we present a deep learning-based holographic polarization microscope that is capable of obtaining quantitative birefringence retardance and orientation information of a specimen from a phase-recovered hologram, while only requiring the addition of one polarizer/analyzer pair to an existing holographic imaging system. Using a deep neural network, the reconstructed holographic images from a single state of polarization can be transformed into images equivalent to those captured using a single-shot computational polarized light microscope (SCPLM). Our analysis shows that a trained deep neural network can extract the birefringence information using both the sample-specific morphological features and the holographic amplitude and phase distribution. To demonstrate the efficacy of this method, we tested it by imaging various birefringent samples, including monosodium urate (MSU) and triamcinolone acetonide (TCA) crystals. Our method achieves results similar to SCPLM both qualitatively and quantitatively, and due to its simpler optical design and significantly larger field of view, it has the potential to expand access to polarization microscopy and its use for medical diagnosis in resource-limited settings.
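As a rough illustration of the image-to-image mapping described above, the sketch below (PyTorch) takes the reconstructed complex field from a single polarization state, fed as two channels (amplitude and phase), and outputs retardance and orientation maps. The U-Net-style architecture, channel counts, and names such as HoloPolNet are illustrative assumptions, not the network reported in the paper.

```python
# Minimal sketch: complex field (amplitude, phase) -> (retardance, orientation).
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.LeakyReLU(0.1),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.LeakyReLU(0.1),
        )
    def forward(self, x):
        return self.net(x)

class HoloPolNet(nn.Module):
    """Maps a 2-channel input (amplitude, phase) to 2 output channels
    (retardance, orientation) with one skip connection, U-Net style."""
    def __init__(self):
        super().__init__()
        self.enc1 = ConvBlock(2, 32)
        self.enc2 = ConvBlock(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=False)
        self.dec1 = ConvBlock(64 + 32, 32)
        self.head = nn.Conv2d(32, 2, 1)
    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)

# Usage: field = torch.stack([amplitude, phase], dim=1), shape (N, 2, H, W).
model = HoloPolNet()
out = model(torch.randn(1, 2, 256, 256))  # out[:, 0] retardance, out[:, 1] orientation
```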



Related research

Fluorescence microscopy has enabled dramatic developments in modern biology by visualizing biological organisms with micrometer-scale resolution. However, due to the diffraction limit, sub-micron/nanometer features are difficult to resolve. While various super-resolution techniques have been developed to achieve nanometer-scale resolution, they often require either an expensive optical setup or specialized fluorophores. In recent years, deep learning has shown the potential to reduce the technical barrier and obtain super-resolution from diffraction-limited images. For accurate results, conventional deep learning techniques require thousands of images as a training dataset. Obtaining large datasets from biological samples is often not feasible due to the photobleaching of fluorophores, phototoxicity, and dynamic processes occurring within the organism. Therefore, achieving deep learning-based super-resolution using small datasets is challenging. We address this limitation with a new convolutional neural network-based approach that is successfully trained with small datasets and achieves super-resolution images. We captured 750 images in total from 15 different fields of view (FOVs) as the training dataset to demonstrate the technique. In each FOV, a single target image is generated using the super-resolution radial fluctuation (SRRF) method. As expected, this small dataset failed to produce a usable model using a traditional super-resolution architecture. However, using the new approach, a network can be trained to achieve super-resolution images from this small dataset. This deep learning model can be applied to other biomedical imaging modalities, such as MRI and X-ray imaging, where obtaining large training datasets is challenging.
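A minimal sketch of how such a small paired dataset (multiple diffraction-limited frames per FOV sharing a single SRRF target) might be wrapped for training with simple geometric augmentation; the class name and transforms are assumptions, not the authors' pipeline.

```python
# Each raw frame is paired with its FOV's single SRRF target; random
# flips/rotations stretch the effective size of the small dataset ~8x.
import random
import torch
from torch.utils.data import Dataset

class SmallFOVDataset(Dataset):
    def __init__(self, raw_frames, srrf_targets, fov_ids):
        # raw_frames: list of (H, W) tensors; fov_ids: FOV index per frame;
        # srrf_targets: dict mapping FOV index -> (H, W) SRRF target tensor.
        self.raw_frames = raw_frames
        self.targets = srrf_targets
        self.fov_ids = fov_ids
    def __len__(self):
        return len(self.raw_frames)
    def __getitem__(self, i):
        x = self.raw_frames[i]
        y = self.targets[self.fov_ids[i]]  # all frames in a FOV share one target
        k = random.randrange(4)            # random 90-degree rotation
        x, y = torch.rot90(x, k), torch.rot90(y, k)
        if random.random() < 0.5:          # random horizontal flip
            x, y = torch.flip(x, dims=[-1]), torch.flip(y, dims=[-1])
        return x.unsqueeze(0), y.unsqueeze(0)  # add channel dimension
```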
Data modeling and reduction for in situ analysis is important. Feature-driven methods for in situ data analysis and reduction are a priority for future exascale machines, as there are currently very few such methods. We investigate a deep learning-based workflow that targets in situ data processing using autoencoders. We propose a Residual Autoencoder integrated with a Residual-in-Residual Dense Block (RRDB) to obtain better performance. Our proposed framework compressed our test data from 2.1 MB to 66 KB per 3D volume timestep, a compression ratio of roughly 32x.
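A minimal sketch of the autoencoder idea, assuming 3D volume inputs: the encoder produces a compact latent code, which is what would be stored in situ, and the decoder reconstructs the volume offline. The plain residual blocks below stand in for the RRDB design and are an illustrative assumption.

```python
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(c, c, 3, padding=1), nn.ReLU(),
            nn.Conv3d(c, c, 3, padding=1),
        )
    def forward(self, x):
        return x + self.conv(x)  # residual connection

class VolumeAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            ResBlock3d(16),
            nn.Conv3d(16, 4, 4, stride=2, padding=1),  # compact latent code
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(4, 16, 4, stride=2, padding=1), nn.ReLU(),
            ResBlock3d(16),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        z = self.encoder(x)   # z holds 16x fewer values than the input volume
        return self.decoder(z)

vol = torch.randn(1, 1, 64, 64, 64)
recon = VolumeAE()(vol)
assert recon.shape == vol.shape
```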
Michael Atlan (2008)
We report experimental results on heterodyne holographic microscopy of subwavelength-sized gold particles. The apparatus uses continuous green laser illumination of the metal beads in a total internal reflection configuration for dark-field operation. Detection of the scattered light at the illumination wavelength on a charge-coupled device array detector enables 3D localization of Brownian particles in water.
In this study, we propose a tailored DL framework for patient-specific performance that leverages the behavior of a model intentionally overfitted to a patient-specific training dataset augmented from the prior information available in an adaptive radiotherapy (ART) workflow, an approach we term Intentional Deep Overfit Learning (IDOL). Implementing the IDOL framework in any radiotherapy task consists of two training stages: 1) training a generalized model with a diverse training dataset of N patients, just as in the conventional DL approach, and 2) intentionally overfitting this general model to a small training dataset specific to the patient of interest (N+1), generated through perturbations and augmentations of the available task- and patient-specific prior information, to establish a personalized IDOL model. The IDOL framework itself is task-agnostic and is thus widely applicable to many components of the ART workflow, three of which we use as a proof of concept here: the auto-contouring task on re-planning CTs for traditional ART, the MRI super-resolution (SR) task for MRI-guided ART, and the synthetic CT (sCT) reconstruction task for MRI-only ART. In the re-planning CT auto-contouring task, the accuracy measured by the Dice similarity coefficient improves from 0.847 with the general model to 0.935 with the IDOL model. In the case of MRI SR, the mean absolute error (MAE) is improved by 40% using the IDOL framework over the conventional model. Finally, in the sCT reconstruction task, the MAE is reduced from 68 to 22 HU by utilizing the IDOL framework.
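A minimal sketch of the two-stage IDOL recipe under stated assumptions: the same supervised loop is run twice, first on a diverse multi-patient dataset and then, without early stopping, on a small augmented dataset from the patient of interest. The stand-in model and dummy tensors below are placeholders only.

```python
import copy
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def train(model, loader, epochs, lr):
    """Standard supervised loop used for both IDOL stages."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.L1Loss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Stand-in image-to-image model for whichever ART task is being personalized.
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))

# Stage 1: conventional training on a diverse N-patient dataset (dummy data here).
general_data = TensorDataset(torch.randn(64, 1, 32, 32), torch.randn(64, 1, 32, 32))
general_model = train(model, DataLoader(general_data, batch_size=8), epochs=2, lr=1e-4)

# Stage 2: deliberately overfit a copy to perturbed/augmented patient-(N+1)
# prior data; no early stopping, since overfitting to this patient is the goal.
patient_data = TensorDataset(torch.randn(16, 1, 32, 32), torch.randn(16, 1, 32, 32))
idol_model = train(copy.deepcopy(general_model),
                   DataLoader(patient_data, batch_size=4), epochs=4, lr=1e-5)
```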
Yi Luo, Yichen Wu, Liqiao Li (2021)
Various volatile aerosols have been associated with adverse health effects; however, characterization of these aerosols is challenging due to their dynamic nature. Here we present a method that directly measures the volatility of particulate matter (PM) using computational microscopy and deep learning. This method was applied to aerosols generated by electronic cigarettes (e-cigs), which vaporize a liquid mixture (e-liquid) that mainly consists of propylene glycol (PG), vegetable glycerin (VG), nicotine, and flavoring compounds. E-cig-generated aerosols were recorded by a field-portable computational microscope, using an impaction-based air sampler. A lensless digital holographic microscope inside this mobile device continuously records the inline holograms of the collected particles. A deep learning-based algorithm is used to automatically reconstruct the microscopic images of e-cig-generated particles from their holograms and rapidly quantify their volatility. To evaluate the effects of e-liquid composition on aerosol dynamics, we measured the volatility of the particles generated by flavorless, nicotine-free e-liquids with various PG/VG volumetric ratios, revealing a negative correlation between the particles' volatility and the volumetric ratio of VG in the e-liquid. For a given PG/VG composition, the addition of nicotine dominated the evaporation dynamics of the e-cig aerosol, and the aforementioned negative correlation was no longer observed. We also revealed that flavoring additives in e-liquids significantly decrease the volatility of e-cig aerosol. The presented holographic volatility measurement technique and the associated mobile device might provide new insights into the volatility of e-cig-generated particles and can be applied to characterize various volatile PM.
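One plausible way to quantify volatility from such a reconstructed time series is to track each particle's projected area over time and fit its normalized decay rate, as sketched below; this proxy metric and the function name are assumptions, not the paper's exact definition.

```python
# Volatility proxy: normalized area decay rate fitted over the recording.
import numpy as np

def volatility(areas_um2, times_s):
    """Fit a line to normalized area vs. time; return the decay rate (1/s)."""
    a = np.asarray(areas_um2, dtype=float)
    t = np.asarray(times_s, dtype=float)
    a_norm = a / a[0]                    # normalize to the initial area
    slope, _ = np.polyfit(t, a_norm, 1)  # linear fit: a_norm ~ slope*t + b
    return -slope                        # larger value = faster evaporation

# Example: a particle shrinking from 100 to 60 um^2 over 40 s -> ~0.01 1/s.
print(volatility([100, 90, 78, 68, 60], [0, 10, 20, 30, 40]))
```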