
Extracting quantitative biological information from brightfield cell images using deep learning

Added by Giovanni Volpe
Publication date: 2020
Language: English





Quantitative analysis of cell structures is essential for biomedical and pharmaceutical research. The standard imaging approach relies on fluorescence microscopy, where cell structures of interest are labeled by chemical staining techniques. However, these techniques are often invasive and sometimes even toxic to the cells, in addition to being time-consuming, labor-intensive, and expensive. Here, we introduce an alternative deep-learning-powered approach based on the analysis of brightfield images by a conditional generative adversarial neural network (cGAN). We show that this approach can extract information from the brightfield images to generate virtually-stained images, which can be used in subsequent downstream quantitative analyses of cell structures. Specifically, we train a cGAN to virtually stain lipid droplets, cytoplasm, and nuclei using brightfield images of human stem-cell-derived fat cells (adipocytes), which are of particular interest for nanomedicine and vaccine development. Subsequently, we use these virtually-stained images to extract quantitative measures about these cell structures. Generating virtually-stained fluorescence images is less invasive, less expensive, and more reproducible than standard chemical staining; furthermore, it frees up the fluorescence microscopy channels for other analytical probes, thus increasing the amount of information that can be extracted from each cell.
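The downstream quantitative analysis mentioned above can be illustrated with a minimal sketch: given a virtually-stained lipid-droplet image as a NumPy array, threshold it and count connected components to obtain droplet number and areas. The function name, default threshold, and pixel-area value are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def droplet_stats(virtual_stain, threshold=0.5, pixel_area_um2=0.1):
    """Count lipid droplets and measure their areas in a
    virtually-stained intensity image (values in [0, 1]).
    Threshold and pixel area are illustrative, not from the paper."""
    mask = virtual_stain >= threshold
    labels = np.zeros(mask.shape, dtype=int)
    n_droplets = 0
    # 4-connected flood fill from every unlabelled foreground pixel
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        n_droplets += 1
        stack = [seed]
        while stack:
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = n_droplets
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    areas = [(labels == i).sum() * pixel_area_um2
             for i in range(1, n_droplets + 1)]
    return n_droplets, areas
```

In practice the threshold would be calibrated against chemically stained control images rather than fixed a priori.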





We present a machine-learning approach to classifying the phases of surface wave dispersion curves. Standard FTAN analysis of surface waves observed on an array of receivers is converted to an image, in which each pixel is classified as fundamental mode, first overtone, or noise. We use a convolutional neural network (U-net) architecture with a supervised learning objective and incorporate transfer learning. The training is initially performed with synthetic data to learn coarse structure, followed by fine-tuning of the network using approximately 10% of the real data based on human classification. The results show that the machine classification is nearly identical to the human-picked phases. Expanding the method to process multiple images at once did not improve the performance. The developed technique will facilitate automated processing of large dispersion curve datasets.
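The reported agreement between machine and human picks can be quantified per pixel. A minimal NumPy sketch, assuming each FTAN pixel carries one of three integer class codes (the codes and function name are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical class codes for each FTAN image pixel.
FUNDAMENTAL, OVERTONE, NOISE = 0, 1, 2

def pick_agreement(machine, human, signal_classes=(FUNDAMENTAL, OVERTONE)):
    """Fraction of human-picked dispersion pixels for which the
    network assigned the same mode; noise pixels are ignored."""
    machine, human = np.asarray(machine), np.asarray(human)
    picked = np.isin(human, signal_classes)
    return float((machine[picked] == human[picked]).mean())
```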
Fubao Zhu (2020)
Objectives: Precise segmentation of total extraocular muscles (EOM) and optic nerve (ON) is essential to assess anatomical development and progression of thyroid-associated ophthalmopathy (TAO). We aim to develop a semantic segmentation method based on deep learning to extract the total EOM and ON from orbital CT images in patients with suspected TAO. Materials and Methods: A total of 7,879 images from 97 subjects who underwent orbital CT scans for suspected TAO were included in this study. Eighty-eight patients were randomly assigned to the training/validation dataset, and the rest to the test dataset. Contours of the total EOM and ON in all the patients were manually delineated by experienced radiologists as the ground truth. A three-dimensional (3D) end-to-end fully convolutional neural network called semantic V-net (SV-net) was developed for our segmentation task. Intersection over Union (IoU) was measured to evaluate the accuracy of the segmentation results, and Pearson correlation analysis was used to evaluate the volumes measured from our segmentation results against those from the ground truth. Results: Our model in the test dataset achieved an overall IoU of 0.8207; the IoU was 0.7599 for the superior rectus muscle, 0.8183 for the lateral rectus muscle, 0.8481 for the medial rectus muscle, 0.8436 for the inferior rectus muscle and 0.8337 for the optic nerve. The volumes measured from our segmentation results agreed well with those from the ground truth (all R>0.98, P<0.0001). Conclusion: The qualitative and quantitative evaluations demonstrate excellent performance of our method in automatically extracting the total EOM and ON and measuring their volumes in orbital CT images. The method holds great promise for clinical application in assessing these anatomical structures for the diagnosis and prognosis of TAO.
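The IoU figures quoted above follow the standard definition for binary masks: intersection divided by union. A minimal NumPy sketch (function name ours):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over Union between two binary segmentation masks.
    Returns 1.0 for two empty masks by convention."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0
    return float(np.logical_and(pred, truth).sum() / union)
```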
Yu Deng, Ling Wang, Chen Zhao (2021)
Automatic CT segmentation of proximal femur is crucial for the diagnosis and risk stratification of orthopedic diseases; however, current methods for femur CT segmentation mainly rely on manual interactive segmentation, which is time-consuming and has limitations in both accuracy and reproducibility. In this study, we proposed an approach based on deep learning for the automatic extraction of the periosteal and endosteal contours of proximal femur in order to differentiate cortical and trabecular bone compartments. A three-dimensional (3D) end-to-end fully convolutional neural network, which can better combine the information between neighboring slices and obtain more accurate segmentation results, was developed for our segmentation task. One hundred subjects aged 50 to 87 years with 24,399 slices of proximal femur CT images were enrolled in this study. The separation of cortical and trabecular bone derived from the QCT software MIAF-Femur was used as the segmentation reference. We randomly divided the whole dataset into a training set with 85 subjects for 10-fold cross-validation and a test set with 15 subjects for evaluating the performance of the models. Two models with the same network structure were trained, achieving a Dice similarity coefficient (DSC) of 97.87% and 96.49% for the periosteal and endosteal contours, respectively. To verify the performance of our model for femoral segmentation, we measured the volume of different parts of the femur and compared it with the ground truth; the relative errors between the predicted results and the ground truth were all less than 5%. The approach demonstrates strong potential for clinical use, including hip fracture risk prediction and finite element analysis.
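Both evaluation measures used above, the Dice similarity coefficient and the relative volume error against ground truth, have simple closed forms. A minimal NumPy sketch (function names and the voxel-volume parameter are illustrative):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

def relative_volume_error(pred, truth, voxel_mm3=1.0):
    """Relative error of predicted volume vs. ground-truth volume,
    with volumes obtained by counting foreground voxels."""
    v_pred = np.asarray(pred, dtype=bool).sum() * voxel_mm3
    v_true = np.asarray(truth, dtype=bool).sum() * voxel_mm3
    return float(abs(v_pred - v_true) / v_true)
```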
Quantitative measures of uptake in caudate, putamen, and globus pallidus in dopamine transporter (DaT) brain SPECT have potential as biomarkers for the severity of Parkinson disease. Reliable quantification of uptake requires accurate segmentation of these regions. However, segmentation is challenging in DaT SPECT due to partial-volume effects, system noise, physiological variability, and the small size of these regions. To address these challenges, we propose an estimation-based approach to segmentation. This approach estimates the posterior mean of the fractional volume occupied by caudate, putamen, and globus pallidus within each voxel of a 3D SPECT image. The estimate is obtained by minimizing a cost function based on the binary cross-entropy loss between the true and estimated fractional volumes over a population of SPECT images, where the distribution of the true fractional volumes is obtained from magnetic resonance images from clinical populations. The proposed method accounts for both the sources of partial-volume effects in SPECT, namely the limited system resolution and tissue-fraction effects. The method was implemented using an encoder-decoder network and evaluated using realistic clinically guided SPECT simulation studies, where the ground-truth fractional volumes were known. The method significantly outperformed all other considered segmentation methods and yielded accurate segmentation with dice similarity coefficients of ~ 0.80 for all regions. The method was relatively insensitive to changes in voxel size. Further, the method was relatively robust up to +/- 10 degrees of patient head tilt along transaxial, sagittal, and coronal planes. Overall, the results demonstrate the efficacy of the proposed method to yield accurate fully automated segmentation of caudate, putamen, and globus pallidus in 3D DaT-SPECT images.
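The cost function described above is built on the binary cross-entropy between true and estimated per-voxel fractional volumes. A minimal NumPy sketch of that loss term (the clipping constant and function name are our assumptions, not the paper's implementation):

```python
import numpy as np

def fractional_bce(true_frac, est_frac, eps=1e-7):
    """Binary cross-entropy between true and estimated fractional
    volumes in [0, 1], averaged over all voxels/regions. Estimates
    are clipped away from 0 and 1 for numerical stability."""
    t = np.asarray(true_frac, dtype=float)
    e = np.clip(np.asarray(est_frac, dtype=float), eps, 1.0 - eps)
    return float(-(t * np.log(e) + (1.0 - t) * np.log(1.0 - e)).mean())
```

The loss is minimized when the estimated fractional volumes match the true ones, which is why it is a natural fit for estimating the posterior mean of a quantity bounded between 0 and 1.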
Yan Wu (2021)
Magnetic resonance imaging (MRI) offers superior soft tissue contrast and is widely used in biomedicine. However, conventional MRI is not quantitative, which presents a bottleneck in image analysis and digital healthcare. Typically, additional scans are required to disentangle the effects of multiple MR parameters and extract quantitative tissue properties. Here we investigate a data-driven strategy, Q^2 MRI (Qualitative and Quantitative MRI), to derive quantitative parametric maps from standard MR images without additional data acquisition. By taking advantage of the interdependency between various MRI parametric maps buried in training data, the proposed deep learning strategy enables accurate prediction of tissue relaxation properties as well as other biophysical and biochemical characteristics from a single image or a few images with conventional T_1/T_2 weighting. Superior performance has been achieved in quantitative MR imaging of the knee and liver. Q^2 MRI promises to provide a powerful tool for a variety of biomedical applications and facilitate the next generation of digital medicine.
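Q^2 MRI learns the mapping from weighted images to parametric maps from data, but the conventional multi-scan alternative it seeks to avoid can be illustrated in closed form: under a mono-exponential spin-echo model S(TE) = S0 * exp(-TE / T2), two acquisitions at different echo times determine T2 directly. A minimal sketch (function name and numbers illustrative):

```python
import numpy as np

def t2_from_two_echoes(s1, s2, te1, te2):
    """Closed-form T2 estimate from two spin-echo signals acquired
    at echo times te1 < te2, assuming mono-exponential decay
    S(TE) = S0 * exp(-TE / T2)."""
    return float((te2 - te1) / np.log(s1 / s2))
```

This is the kind of parameter interdependency that the deep learning strategy exploits implicitly, without requiring the extra acquisitions.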
