
Federated Whole Prostate Segmentation in MRI with Personalized Neural Architectures

Published by Holger Roth
Publication date: 2021
Paper language: English


Building robust deep learning-based models requires diverse training data, ideally from several sources. However, these datasets cannot be combined easily because of patient privacy concerns or regulatory hurdles, especially when medical data is involved. Federated learning (FL) is a way to train machine learning models without the need for centralized datasets. Each FL client trains on its local data while sharing only model parameters with a global server that aggregates the parameters from all clients. At the same time, each client's data can exhibit differences and inconsistencies due to local variation in the patient population, imaging equipment, and acquisition protocols. Hence, the federated models should be able to adapt to the local particularities of a client's data. In this work, we combine FL with an AutoML technique based on local neural architecture search by training a supernet. Furthermore, we propose an adaptation scheme to allow for personalized model architectures at each FL client's site. The proposed method is evaluated on four different 3D prostate MRI datasets and shown to improve the local models' performance after adaptation by selecting an optimal path through the AutoML supernet.
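The server-side aggregation step described above — clients train locally and share only parameters — can be sketched as a FedAvg-style weighted average. This is a minimal numpy illustration with hypothetical client weights and dataset sizes, not the authors' implementation:

```python
import numpy as np

def federated_average(client_params, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size
    (FedAvg-style). Each entry in client_params is a list of arrays,
    one array per model layer."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    # Weighted sum of each parameter tensor across clients.
    return [
        sum(w * layer for w, layer in zip(weights, layers))
        for layers in zip(*client_params)
    ]

# Two hypothetical clients, each holding one parameter tensor.
client_a = [np.array([1.0, 2.0])]
client_b = [np.array([3.0, 4.0])]
global_params = federated_average([client_a, client_b], client_sizes=[10, 30])
print(global_params[0])  # weighted toward client_b: [2.5 3.5]
```

In practice each client would run several local epochs before sending its parameters, and the server would broadcast the averaged model back for the next round.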


Read also

The segmentation of the prostate whole gland and transition zone in Diffusion Weighted MRI (DWI) is the first step in designing computer-aided detection algorithms for prostate cancer. However, variations in MRI acquisition parameters and scanner manufacturing result in different appearances of prostate tissue in the images. Convolutional neural networks (CNNs), which have proven successful in various medical image analysis tasks including segmentation, are typically sensitive to variations in imaging parameters. This sensitivity leads to poor segmentation performance of CNNs trained on a source cohort and tested on a target cohort from a different scanner, and hence limits the applicability of CNNs for cross-cohort training and testing. Contouring the prostate whole gland and transition zone in DWI images is time-consuming and expensive. Thus, it is important to enable CNNs pretrained on images of a source domain to segment images of a target domain with minimal requirement for manual segmentation of images from the target domain. In this work, we propose a transfer learning method based on a modified U-net architecture and loss function for segmentation of the prostate whole gland and transition zone in DWIs, using a CNN pretrained on a source dataset and tested on the target dataset. We explore the effect of the size of the subset of the target dataset used for fine-tuning the pre-trained CNN on the overall segmentation accuracy. Our results show that with fine-tuning data from as few as 30 patients from the target domain, the proposed transfer learning-based algorithm can reach a Dice score coefficient of 0.80 for both prostate whole gland and transition zone segmentation. Using fine-tuning data from 115 patients in the target domain, Dice score coefficients of 0.85 and 0.84 are achieved for segmentation of the whole gland and transition zone, respectively, in the target domain.
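The Dice score coefficient used above to report segmentation accuracy measures the overlap between a predicted and a reference binary mask. A minimal numpy illustration on toy masks (not tied to the paper's code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice score between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: each has 3 foreground pixels, 2 of them overlapping.
pred = np.zeros((4, 4)); pred[0, :3] = 1
target = np.zeros((4, 4)); target[0, 1:4] = 1
print(round(dice_coefficient(pred, target), 3))  # 2*2/(3+3) ≈ 0.667
```

For 3D volumes the same formula applies voxel-wise; the small `eps` guards against division by zero when both masks are empty.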
Quande Liu, Qi Dou, Lequan Yu (2020)
Automated prostate segmentation in MRI is highly demanded for computer-assisted diagnosis. Recently, a variety of deep learning methods have achieved remarkable progress in this task, usually relying on large amounts of training data. Due to the nature of scarcity for medical images, it is important to effectively aggregate data from multiple sites for robust model training, to alleviate the insufficiency of single-site samples. However, the prostate MRIs from different sites present heterogeneity due to the differences in scanners and imaging protocols, raising challenges for effective ways of aggregating multi-site data for network training. In this paper, we propose a novel multi-site network (MS-Net) for improving prostate segmentation by learning robust representations, leveraging multiple sources of data. To compensate for the inter-site heterogeneity of different MRI datasets, we develop Domain-Specific Batch Normalization layers in the network backbone, enabling the network to estimate statistics and perform feature normalization for each site separately. Considering the difficulty of capturing the shared knowledge from multiple datasets, a novel learning paradigm, i.e., Multi-site-guided Knowledge Transfer, is proposed to enhance the kernels to extract more generic representations from multi-site data. Extensive experiments on three heterogeneous prostate MRI datasets demonstrate that our MS-Net improves the performance across all datasets consistently, and outperforms state-of-the-art methods for multi-site learning.
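The core idea of Domain-Specific Batch Normalization — shared convolution kernels but separate normalization statistics per site — can be sketched framework-free. This is a rough illustration with hypothetical site names; the actual MS-Net layers also carry learnable affine parameters and sit inside the network backbone:

```python
import numpy as np

class DomainSpecificNorm:
    """Keep separate running normalization statistics per site, so each
    site's features are normalized against its own distribution
    (illustrative sketch of domain-specific batch normalization)."""
    def __init__(self, num_features, sites):
        self.stats = {s: {"mean": np.zeros(num_features),
                          "var": np.ones(num_features)} for s in sites}

    def __call__(self, x, site, momentum=0.1, eps=1e-5):
        st = self.stats[site]
        # Update only this site's running statistics from the current batch.
        st["mean"] = (1 - momentum) * st["mean"] + momentum * x.mean(axis=0)
        st["var"] = (1 - momentum) * st["var"] + momentum * x.var(axis=0)
        # Normalize with the site-specific statistics.
        return (x - st["mean"]) / np.sqrt(st["var"] + eps)

norm = DomainSpecificNorm(num_features=2, sites=["siteA", "siteB"])
batch_a = np.array([[0.0, 10.0], [2.0, 14.0]])
out = norm(batch_a, site="siteA")  # siteB's statistics remain untouched
```

Routing each batch through its own site's statistics is what lets the shared kernels stay site-agnostic while the normalization absorbs scanner- and protocol-specific intensity shifts.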
Accurate and robust whole heart substructure segmentation is crucial in developing clinical applications, such as computer-aided diagnosis and computer-aided surgery. However, segmentation of different heart substructures is challenging because of inadequate edge or boundary information, the complexity of the background and texture, and the diversity in substructure sizes and shapes. This article proposes a framework for multi-class whole heart segmentation employing a non-rigid registration-based probabilistic atlas incorporating the Bayesian framework. We also propose a non-rigid registration pipeline utilizing a multi-resolution strategy for obtaining the highest attainable mutual information between the moving and fixed images. We further incorporate non-rigid registration into the expectation-maximization algorithm and implement different deep convolutional neural network-based encoder-decoder networks for ablation studies. All the extensive experiments are conducted utilizing the publicly available dataset for whole heart segmentation containing 20 MRI and 20 CT cardiac images. The proposed approach exhibits an encouraging achievement, yielding a mean volume overlapping error of 14.5% for CT scans, exceeding the state-of-the-art results by a margin of 1.3% on the same metric. As the proposed approach better delineates the different substructures of the heart, it can serve as a medical diagnostic aid, helping experts reach quicker and more accurate results.
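The mutual information criterion driving the registration above can be estimated from the joint intensity histogram of the fixed and moving images. A minimal numpy sketch, with an illustrative bin count and random test images (not the paper's pipeline):

```python
import numpy as np

def mutual_information(fixed, moving, bins=32):
    """Estimate mutual information between two images from their joint
    intensity histogram: I(X;Y) = sum p(x,y) * log(p(x,y) / (p(x)p(y)))."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of fixed image
    py = pxy.sum(axis=0, keepdims=True)   # marginal of moving image
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
# An image is maximally informative about itself; unrelated noise is not.
assert mutual_information(img, img) > mutual_information(img, rng.random((64, 64)))
```

A registration optimizer would repeatedly warp the moving image and keep the transform that maximizes this quantity, typically working from coarse to fine resolution as the abstract describes.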
Sequential whole-body 18F-Fluorodeoxyglucose (FDG) positron emission tomography (PET) scans are regarded as the imaging modality of choice for the assessment of treatment response in lymphomas because they detect treatment response even when anatomical imaging shows no changes. Any computerized analysis of lymphomas in whole-body PET requires automatic segmentation of the studies so that sites of disease can be quantitatively monitored over time. State-of-the-art PET image segmentation methods are based on convolutional neural networks (CNNs), given their ability to leverage annotated datasets to derive high-level features about the disease process. Such methods, however, focus on PET images from a single time-point and discard information from other scans, or are targeted towards specific organs and cannot cater for the multiple structures in whole-body PET images. In this study, we propose a spatio-temporal dual-stream neural network (ST-DSNN) to segment sequential whole-body PET scans. Our ST-DSNN learns and accumulates image features from the PET images acquired over time. The accumulated image features are used to enhance the organs/structures that are consistent over time, allowing easier identification of sites of active lymphoma. Our results show that our method outperforms the state-of-the-art PET image segmentation methods.
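The intuition behind accumulating features over time — temporally consistent structures are reinforced, transient ones suppressed — can be shown with plain averaging. This is a deliberately simplified stand-in with hypothetical feature maps, not the ST-DSNN architecture itself:

```python
import numpy as np

def accumulate_temporal_features(feature_maps):
    """Average feature maps from sequential scans: responses that recur
    at every time point keep their strength, one-off responses shrink
    (simplified stand-in for a learned accumulation mechanism)."""
    stacked = np.stack(feature_maps, axis=0)  # (time, H, W)
    return stacked.mean(axis=0)

# A structure present at both time points keeps full strength (1.0);
# one present only at the first time point is halved (0.5).
t1 = np.array([[1.0, 1.0], [0.0, 0.0]])
t2 = np.array([[1.0, 0.0], [0.0, 0.0]])
print(accumulate_temporal_features([t1, t2]))
```

In the paper's dual-stream setup the accumulation is learned rather than a fixed average, but the effect on temporally stable anatomy is the same in spirit.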
We propose HookNet, a semantic segmentation model for histopathology whole-slide images, which combines context and details via multiple branches of encoder-decoder convolutional neural networks. Concentric patches at multiple resolutions with different fields of view are used to feed different branches of HookNet, and intermediate representations are combined via a hooking mechanism. We describe a framework to design and train HookNet for achieving high-resolution semantic segmentation and introduce constraints to guarantee pixel-wise alignment in feature maps during hooking. We show the advantages of using HookNet in two histopathology image segmentation tasks where tissue type prediction accuracy strongly depends on contextual information, namely (1) multi-class tissue segmentation in breast cancer and (2) segmentation of tertiary lymphoid structures and germinal centers in lung cancer. We show the superiority of HookNet when compared with single-resolution U-Net models working at different resolutions as well as with a recently published multi-resolution model for histopathology image segmentation.
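The pixel-wise alignment required during hooking can be illustrated by center-cropping the context branch's feature map to the target branch's spatial size before concatenation. Shapes and channel counts here are hypothetical, a sketch of the mechanism rather than HookNet's exact implementation:

```python
import numpy as np

def center_crop(feature_map, target_hw):
    """Center-crop a (C, H, W) feature map to a target spatial size so
    context features line up pixel-wise with a higher-resolution branch."""
    _, h, w = feature_map.shape
    th, tw = target_hw
    top = (h - th) // 2
    left = (w - tw) // 2
    return feature_map[:, top:top + th, left:left + tw]

context = np.zeros((8, 16, 16))      # hypothetical context-branch features
target = np.zeros((8, 8, 8))         # hypothetical target-branch features
cropped = center_crop(context, target.shape[1:])
hooked = np.concatenate([cropped, target], axis=0)  # channel-wise "hook"
print(hooked.shape)  # (16, 8, 8)
```

Because the context patch covers a wider field of view at coarser resolution, only its central region corresponds spatially to the high-resolution target patch, which is why the crop is taken from the center.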