
Recognizing three-dimensional phase images with deep learning

Posted by Weiru Fan
Publication date: 2021
Paper language: English





Optical phase contains key information for biomedical and astronomical imaging. However, it is often obscured by layers of heterogeneous, scattering media, which makes optical phase imaging at different depths extremely challenging. Limited by the memory effect, current methods for phase imaging through strongly scattering media cannot retrieve phases at different depths. To address this challenge, we developed a speckle three-dimensional reconstruction network (STRN) that recognizes phase objects behind scattering media and circumvents the limitations of the memory effect. From a single-shot, reference-free, and scanning-free speckle pattern input, STRN distinguishes depth-resolved quantitative phase information with high fidelity. Our results promise broad applications in biomedical tomography and endoscopy.
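The abstract does not describe the STRN architecture, so the following is only a minimal sketch under assumed choices: a small PyTorch encoder-decoder that maps a single speckle intensity image to a stack of depth-resolved phase maps. The number of depths, layer widths, and the tanh phase scaling are all illustrative assumptions, not the authors' design.

```python
# Minimal sketch (assumed architecture; the actual STRN is not described in
# the abstract). Maps one speckle image to n_depths depth-resolved phase maps.
import math
import torch
import torch.nn as nn

class SpeckleToPhase(nn.Module):
    def __init__(self, n_depths=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, n_depths, 4, stride=2, padding=1),
        )

    def forward(self, speckle):  # speckle: (B, 1, H, W)
        feats = self.encoder(speckle)
        # Scale tanh output to (-pi, pi) so each channel is a phase map
        return math.pi * torch.tanh(self.decoder(feats))

model = SpeckleToPhase(n_depths=4)
phase_stack = model(torch.rand(1, 1, 256, 256))  # -> (1, 4, 256, 256)
```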


Read also

Detection of phase variations across optically transparent samples is often a difficult task. We propose and demonstrate a compact, lightweight, and low-cost quantitative phase contrast imager. Light diffracted from a pinhole is incident on a thick object; the modulated light is collected by an image sensor, and the intensity pattern is recorded. Two optical configurations, lens-based and lensless, are compared. A modified phase-retrieval algorithm is implemented to extract the phase information of the sample at different axial planes from a single camera shot.
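The modified phase-retrieval algorithm itself is not spelled out in the abstract. As background, a generic Gerchberg-Saxton-style loop with angular-spectrum propagation between the sample and sensor planes might look like the sketch below; the wavelength, pixel pitch, propagation distance, and pure-phase object constraint are placeholder assumptions.

```python
# Generic single-shot phase-retrieval sketch (Gerchberg-Saxton style with
# angular-spectrum propagation). The paper's modified algorithm is not
# detailed in the abstract; all parameters below are illustrative only.
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex field a distance z via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0))  # evanescent terms dropped
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

def retrieve_phase(measured_intensity, wavelength=633e-9, pitch=5e-6,
                   z=0.05, n_iter=200):
    amp = np.sqrt(measured_intensity)
    field = amp.astype(complex)  # flat initial phase at the sensor
    for _ in range(n_iter):
        obj = angular_spectrum(field, wavelength, pitch, -z)  # back to sample
        obj = np.exp(1j * np.angle(obj))        # constraint: pure phase object
        field = angular_spectrum(obj, wavelength, pitch, z)   # to the sensor
        field = amp * np.exp(1j * np.angle(field))  # enforce measured amplitude
    return np.angle(obj)

phase = retrieve_phase(np.ones((256, 256)))
```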
Fei Shan, Yaozong Gao, Jun Wang (2020)
CT imaging is crucial for diagnosing, assessing, and staging COVID-19 infection. Follow-up scans every 3-5 days are often recommended to track disease progression. It has been reported that bilateral and peripheral ground-glass opacification (GGO), with or without consolidation, are the predominant CT findings in COVID-19 patients. However, due to a lack of computerized quantification tools, only qualitative impressions and rough descriptions of infected areas are currently used in radiological reports. In this paper, a deep learning (DL)-based segmentation system is developed to automatically quantify infection regions of interest (ROIs) and their volumetric ratios w.r.t. the lung. The performance of the system was evaluated by comparing the automatically segmented infection regions with manually delineated ones on 300 chest CT scans of 300 COVID-19 patients. For fast manual delineation of training samples and possible manual intervention on automatic results, a human-in-the-loop (HITL) strategy was adopted to assist radiologists with infection region segmentation, which dramatically reduced the total segmentation time to 4 minutes after 3 iterations of model updating. The average Dice similarity coefficient showed 91.6% agreement between automatic and manual infection segmentations, and the mean estimation error of the percentage of infection (POI) was 0.3% for the whole lung. Finally, possible applications, including but not limited to the analysis of follow-up CT scans and of infection distributions in the lobes and segments correlated with clinical findings, were discussed.
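Both reported metrics are standard quantities. For reference, a minimal NumPy implementation of the Dice similarity coefficient and the percentage of infection (POI) on binary masks could read as follows; the array names and toy masks are illustrative, not the paper's data.

```python
# Dice similarity coefficient and percentage of infection (POI) for binary
# masks, the two metrics reported in the abstract. Masks are boolean arrays.
import numpy as np

def dice(pred, truth):
    """Dice = 2|A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def poi(infection_mask, lung_mask):
    """Percentage of infection: infected voxels over lung voxels."""
    return 100.0 * infection_mask.sum() / lung_mask.sum()

# Toy example on 2D masks
pred = np.zeros((64, 64), dtype=bool); pred[10:30, 10:30] = True
truth = np.zeros((64, 64), dtype=bool); truth[12:32, 12:32] = True
lung = np.ones((64, 64), dtype=bool)
print(f"Dice: {dice(pred, truth):.3f}, POI: {poi(truth, lung):.1f}%")
```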
Wavefront sensing and reconstruction are widely used for adaptive optics, aberration correction, and high-resolution optical phase imaging. Traditionally, interference and/or microlens arrays are used to convert the optical phase into intensity variation. Direct imaging of a distorted wavefront usually results in complicated phase retrieval with low contrast and low sensitivity. Here, a novel approach has been developed and experimentally demonstrated based on phase-sensitive information encoded into second harmonic signals, which are intrinsically sensitive to wavefront modulations. By designing and implementing a deep neural network, we demonstrate second harmonic imaging enhanced by deep learning decipher (SHIELD) for efficient and resilient phase retrieval. Inheriting the advantages of two-photon microscopy, SHIELD demonstrates single-shot, reference-free, and video-rate phase imaging with sensitivity better than λ/100 and high robustness against noise, facilitating numerous applications from biological imaging to wavefront sensing.
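The SHIELD network and training procedure are not detailed in the abstract. Assuming a supervised setup with paired second-harmonic images and ground-truth phase maps, one training step of a generic image-to-phase regressor could be sketched as below; the placeholder network and MSE loss are assumptions, not the authors' pipeline.

```python
# Minimal supervised training step for image-to-phase regression (assumption:
# a SHIELD-style setup pairs second-harmonic images with ground-truth phase).
import math
import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder network; swap in a real architecture
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

sh_image = torch.rand(8, 1, 128, 128)       # batch of SH intensity images
true_phase = torch.rand(8, 1, 128, 128) * 2 * math.pi - math.pi

optimizer.zero_grad()
pred_phase = model(sh_image)
loss = loss_fn(pred_phase, true_phase)      # pixel-wise phase regression loss
loss.backward()
optimizer.step()
```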
This work focuses on the generation of a far-field super-resolved pure-azimuthal focal field based on the fast Fourier transform. A self-designed differential filter is first pioneered to robustly reconfigure a doughnut-shaped azimuthal focal field into a bright one with a sub-wavelength lateral scale (0.392λ), which offers a 27.3% reduction ratio relative to that of tightly focused azimuthal polarization modulated by a spiral phase plate. By further uniting the versatile differential filter with a spatially shifted beam approach, in addition to allowing for an even sharper focal spot, whose size is in turn reduced to 0.228λ and 0.286λ in the transverse and axial directions, the parasitic sidelobes are also lowered to an inessential level (< 20%), thereby enabling an excellent three-dimensional deep-subwavelength focal field (λ³/128). The relevant phase profiles are further exhibited to unravel the annihilation of the field singularity and the locally linear (i.e., azimuthal) polarization. Our scheme opens a promising route toward efficiently steering and tailoring the redistribution of the focal field.
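The self-designed differential filter is specific to the paper and not reproduced here. As background on the underlying method, the far-field focal intensity of an azimuthally polarized pupil carrying a charge-1 spiral phase can be computed with a 2D FFT, as in this sketch; the grid size and aperture are arbitrary choices.

```python
# Far-field focal intensity of an azimuthally polarized pupil with a charge-1
# spiral phase, computed by FFT (two-component paraxial sketch; the paper's
# self-designed differential filter is not reproduced here).
import numpy as np

n = 512
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
R, PHI = np.hypot(X, Y), np.arctan2(Y, X)
aperture = (R <= 1.0).astype(float)
spiral = np.exp(1j * PHI)                     # charge-1 spiral phase plate

# Azimuthal polarization components in the pupil plane
Ex = -np.sin(PHI) * aperture * spiral
Ey = np.cos(PHI) * aperture * spiral

# Far field (focal plane) via 2D FFT of each polarization component
fx = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(Ex)))
fy = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(Ey)))
intensity = np.abs(fx)**2 + np.abs(fy)**2     # bright on-axis spot
```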
Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of brightfield images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the brightfield images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using brightfield images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
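The exact cGAN is not specified in this abstract; a pix2pix-style conditional GAN is one common realization of such virtual staining. A minimal training step with placeholder generator and discriminator networks might look like the following sketch; the L1 weight of 100 follows the original pix2pix convention, not necessarily this paper.

```python
# Minimal pix2pix-style cGAN training step for virtual staining (sketch only;
# placeholder networks, not the paper's architecture).
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())  # brightfield -> stain
D = nn.Sequential(nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(32, 1, 3, stride=2, padding=1))   # (input, stain) -> real/fake
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

brightfield = torch.rand(4, 1, 128, 128)           # dummy input batch
real_stain = torch.rand(4, 3, 128, 128) * 2 - 1    # dummy fluorescence target

# Discriminator step: real pairs vs. generated pairs
fake_stain = G(brightfield).detach()
d_real = D(torch.cat([brightfield, real_stain], dim=1))
d_fake = D(torch.cat([brightfield, fake_stain], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool D, plus an L1 reconstruction term as in pix2pix
fake_stain = G(brightfield)
d_fake = D(torch.cat([brightfield, fake_stain], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100 * F.l1_loss(fake_stain, real_stain)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```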