
Registration of pre-surgical MRI and whole-mount histopathology images in prostate cancer patients with radical prostatectomy via RAPSODI

Posted by Mirabela Rusu
Publication date: 2019
Research field: Electronic Engineering
Paper language: English





Magnetic resonance imaging (MRI) has great potential to improve prostate cancer diagnosis. It can spare men with a normal exam from undergoing invasive biopsy while making biopsies more accurate in men with lesions suspicious for cancer. Yet, the subtle differences between cancer and confounding conditions render the interpretation of MRI challenging. The tissue collected from patients who undergo pre-surgical MRI and radical prostatectomy provides a unique opportunity to correlate histopathology images of the entire prostate with MRI in order to accurately map the extent of prostate cancer onto MRI. Here, we introduce RAPSODI, a framework for the registration of radiology and pathology images. RAPSODI relies on a three-step procedure that first reconstructs the resected tissue in 3D using the serial whole-mount histopathology slices, then registers corresponding histopathology and MRI slices, and finally maps the cancer outlines from the histopathology slices onto MRI. We tested RAPSODI in a phantom study where we simulated various conditions, e.g., tissue specimen rotation upon mounting on glass slides, tissue shrinkage during fixation, or imperfect slice-to-slice correspondences between histology and MRI. Our experiments showed that RAPSODI can reliably correct for rotations within $\pm 15^{\circ}$ and shrinkage up to 10%. We also evaluated RAPSODI in 89 patients from two institutions who underwent radical prostatectomy, yielding 543 histopathology slices that were registered to corresponding T2-weighted MRI slices. We found a Dice coefficient of 0.98 $\pm$ 0.01 for the prostate, a prostate boundary Hausdorff distance of 1.71 $\pm$ 0.48 mm, a urethra deviation of 2.91 $\pm$ 1.25 mm, and a landmark deviation of 2.88 $\pm$ 0.70 mm between registered histopathology images and MRI. Our robust framework successfully mapped the extent of disease from histopathology slices onto MRI.
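As a rough illustration of the evaluation metrics reported above, here is a minimal sketch (not the authors' RAPSODI code) of the Dice coefficient and boundary Hausdorff distance between a registered histopathology prostate mask and the corresponding MRI prostate mask; the array shapes, pixel spacing, and function names are assumptions made for this example.

```python
# Hedged sketch: Dice overlap and boundary Hausdorff distance between two binary masks.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Overlap between two binary prostate masks (1.0 = perfect agreement)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def boundary_hausdorff_mm(mask_a: np.ndarray, mask_b: np.ndarray,
                          spacing_mm: tuple) -> float:
    """Symmetric Hausdorff distance between mask boundaries, in millimeters."""
    def boundary_points(mask):
        # A pixel lies on the boundary if it is foreground but not in the eroded mask.
        boundary = mask & ~binary_erosion(mask)
        return np.argwhere(boundary) * np.asarray(spacing_mm)
    pts_a, pts_b = boundary_points(mask_a), boundary_points(mask_b)
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# Toy example: compare a (hypothetical) registered histopathology mask with an MRI mask.
hist_mask = np.zeros((256, 256), dtype=bool); hist_mask[60:190, 70:200] = True
mri_mask  = np.zeros((256, 256), dtype=bool); mri_mask[62:192, 72:198] = True
print(dice_coefficient(hist_mask, mri_mask))
print(boundary_hausdorff_mm(hist_mask, mri_mask, spacing_mm=(0.5, 0.5)))
```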




Read also

The interpretation of prostate MRI suffers from low agreement across radiologists due to the subtle differences between cancer and normal tissue. Image registration addresses this issue by accurately mapping the ground-truth cancer labels from surgical histopathology images onto MRI. Cancer labels achieved by image registration can be used to improve radiologists' interpretation of MRI by training deep learning models for early detection of prostate cancer. A major limitation of current automated registration approaches is that they require manual prostate segmentations, a time-consuming task that is prone to errors. This paper presents a weakly supervised approach for affine and deformable registration of MRI and histopathology images without requiring prostate segmentations. We used manual prostate segmentations and mono-modal synthetic image pairs to train our registration networks to align prostate boundaries and local prostate features. Although prostate segmentations were used during the training of the network, such segmentations were not needed when registering unseen images at inference time. We trained and validated our registration network with 135 and 10 patients from an internal cohort, respectively. We tested the performance of our method using 16 patients from the internal cohort and 22 patients from an external cohort. The results show that our weakly supervised method achieved significantly higher registration accuracy than a state-of-the-art method run without prostate segmentations. Our deep learning framework will ease the registration of MRI and histopathology images by obviating the need for prostate segmentations.
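A hedged sketch of the weak-supervision idea described above, not the paper's code: prostate masks enter only the training loss (alongside an image-similarity term) and never the registration network's input, so no segmentation is needed at inference. `reg_net`, the tensor layouts, and the unweighted loss sum are hypothetical placeholders.

```python
# Sketch (assumed implementation): masks supervise training but are not network inputs.
import torch
import torch.nn.functional as F

def soft_dice_loss(warped_mask, fixed_mask, eps=1e-6):
    """Boundary-alignment term computed from masks that exist only at training time."""
    inter = (warped_mask * fixed_mask).sum()
    return 1.0 - (2.0 * inter + eps) / (warped_mask.sum() + fixed_mask.sum() + eps)

def training_step(reg_net, moving_img, fixed_img, moving_mask, fixed_mask):
    # The network sees only the image pair and predicts a 2x3 affine matrix per pair.
    theta = reg_net(torch.cat([moving_img, fixed_img], dim=1))        # (B, 2, 3)
    grid = F.affine_grid(theta, fixed_img.shape, align_corners=False)
    warped_img = F.grid_sample(moving_img, grid, align_corners=False)
    warped_mask = F.grid_sample(moving_mask, grid, align_corners=False)
    # Weak supervision: image similarity plus mask overlap; masks are not inputs.
    return F.mse_loss(warped_img, fixed_img) + soft_dice_loss(warped_mask, fixed_mask)
```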
Magnetic resonance imaging (MRI) is an increasingly important tool for the diagnosis and treatment of prostate cancer. However, interpretation of MRI suffers from high inter-observer variability across radiologists, thereby contributing to missed clinically significant cancers, overdiagnosed low-risk cancers, and frequent false positives. Interpretation of MRI could be greatly improved by providing radiologists with an answer key that clearly shows cancer locations on MRI. Registration of histopathology images from patients who had radical prostatectomy to pre-operative MRI allows such mapping of ground-truth cancer labels onto MRI. However, traditional MRI-histopathology registration approaches are computationally expensive and require careful choices of the cost function and registration hyperparameters. This paper presents ProsRegNet, a deep learning-based pipeline to accelerate and simplify MRI-histopathology image registration in prostate cancer. Our pipeline consists of image preprocessing, estimation of affine and deformable transformations by deep neural networks, and mapping of cancer labels from histopathology images onto MRI using the estimated transformations. We trained our neural network using MR and histopathology images of 99 patients from our internal cohort (Cohort 1) and evaluated its performance using 53 patients from three different cohorts (an additional 12 from Cohort 1 and 41 from two public cohorts). Results show that our deep learning pipeline achieved more accurate registration results and is at least 20 times faster than a state-of-the-art registration algorithm. This important advance will provide radiologists with highly accurate prostate MRI answer keys, thereby facilitating improvements in the detection of prostate cancer on MRI. Our code is freely available at https://github.com/pimed//ProsRegNet.
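An illustrative sketch of the final pipeline step described above, carrying a cancer label from histopathology space onto the MRI grid with estimated transformations; this is not the released ProsRegNet code, and the function name, tensor layouts, and the use of PyTorch's `affine_grid`/`grid_sample` are assumptions for illustration.

```python
# Hedged sketch: apply an estimated affine plus a dense displacement field to a label mask.
import torch
import torch.nn.functional as F

def warp_label(label, theta, displacement):
    """label: (B,1,H,W) binary mask; theta: (B,2,3) affine; displacement: (B,H,W,2)."""
    base_grid = F.affine_grid(theta, label.shape, align_corners=False)  # affine part
    grid = base_grid + displacement                                     # deformable part
    # Nearest-neighbour sampling keeps the warped cancer label strictly binary.
    return F.grid_sample(label, grid, mode="nearest", align_corners=False)

# Tiny usage example with an identity affine and a zero displacement field.
label = torch.zeros(1, 1, 64, 64); label[:, :, 20:40, 25:45] = 1.0
theta = torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]])
disp = torch.zeros(1, 64, 64, 2)
mri_space_label = warp_label(label, theta, disp)
```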
Joint analysis of multiple biomarker images and tissue morphology is important for disease diagnosis, treatment planning, and drug development. It requires cross-staining comparison among whole slide images (WSIs) of immunohistochemical and hematoxylin and eosin (H&E) microscopic slides. However, automatic and fast cross-staining alignment of enormous gigapixel WSIs at single-cell precision is challenging. In addition to morphological deformations introduced during slide preparation, there are large variations in cell appearance and tissue morphology across different stainings. In this paper, we propose a two-step automatic feature-based cross-staining WSI alignment to assist localization of even tiny metastatic foci in the assessment of lymph nodes. Image pairs were aligned allowing for translation, rotation, and scaling. The registration was performed automatically by first detecting landmarks in both images using the scale-invariant feature transform (SIFT), followed by the fast sample consensus (FSC) protocol for finding point correspondences, and finally aligning the images. The registration results were evaluated using both visual and quantitative criteria, the latter based on the Jaccard index. The average Jaccard similarity index of the results produced by the proposed system is 0.942 when compared with manual registration.
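A minimal sketch of this two-step feature-based alignment using OpenCV: SIFT landmarks plus a robust estimator to fit a translation/rotation/scale transform, then the Jaccard index of tissue masks for quantitative evaluation. RANSAC stands in here for the paper's fast sample consensus (FSC) step, and all names and parameters are illustrative.

```python
# Hedged sketch: SIFT landmarks + robust similarity-transform fit + Jaccard evaluation.
import cv2
import numpy as np

def align_similarity(moving_gray, fixed_gray):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(moving_gray, None)
    kp2, des2 = sift.detectAndCompute(fixed_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Partial affine = translation + rotation + uniform scale, with robust outlier rejection.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = fixed_gray.shape
    return cv2.warpAffine(moving_gray, M, (w, h))

def jaccard_index(mask_a, mask_b):
    """Intersection over union of two binary tissue masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union
```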
Building robust deep learning-based models requires diverse training data, ideally from several sources. However, these datasets cannot be combined easily because of patient privacy concerns or regulatory hurdles, especially if medical data is involved. Federated learning (FL) is a way to train machine learning models without the need for centralized datasets. Each FL client trains on its local data while only sharing model parameters with a global server that aggregates the parameters from all clients. At the same time, each client's data can exhibit differences and inconsistencies due to local variation in the patient population, imaging equipment, and acquisition protocols. Hence, the federated models should be able to adapt to the local particularities of a client's data. In this work, we combine FL with an AutoML technique based on local neural architecture search by training a supernet. Furthermore, we propose an adaptation scheme to allow for personalized model architectures at each FL client's site. The proposed method is evaluated on four different datasets from 3D prostate MRI and shown to improve the local models' performance after adaptation by selecting an optimal path through the AutoML supernet.
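A minimal FedAvg-style aggregation sketch for the federated backbone described above; the paper's supernet training and per-site architecture adaptation are not shown, and the weighting by client dataset size is an assumption made for this example.

```python
# Hedged sketch: server-side averaging of client parameters; raw data never leaves a site.
import copy
import torch

def federated_average(client_state_dicts, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation)."""
    total = float(sum(client_sizes))
    global_state = copy.deepcopy(client_state_dicts[0])
    for key in global_state:
        global_state[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(client_state_dicts, client_sizes)
        )
    return global_state

# Usage idea: after each round, the server aggregates and broadcasts the result back.
# global_state = federated_average([c.model.state_dict() for c in clients],
#                                  [len(c.dataset) for c in clients])
```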
Prostate cancer is the most prevalent cancer among men in Western countries, with 1.1 million new diagnoses every year. The gold standard for the diagnosis of prostate cancer is a pathologist's evaluation of prostate tissue. To potentially assist pathologists, deep-learning-based cancer detection systems have been developed. Many of the state-of-the-art models are patch-based convolutional neural networks, as the use of entire scanned slides is hampered by memory limitations on accelerator cards. Patch-based systems typically require detailed, pixel-level annotations for effective training. However, such annotations are seldom readily available, in contrast to the clinical reports of pathologists, which contain slide-level labels. As such, developing algorithms that do not require manual pixel-wise annotations but can learn using only the clinical report would be a significant advancement for the field. In this paper, we propose to use a streaming implementation of convolutional layers to train a modern CNN (ResNet-34) with 21 million parameters end-to-end on 4712 prostate biopsies. The method enables the direct use of entire biopsy images at high resolution by reducing the GPU memory requirements by 2.4 TB. We show that modern CNNs, trained using our streaming approach, can extract meaningful features from high-resolution images without additional heuristics, reaching similar performance as state-of-the-art patch-based and multiple-instance learning methods. By circumventing the need for manual annotations, this approach can serve as a blueprint for other tasks in histopathological diagnosis. The source code to reproduce the streaming models is available at https://github.com/DIAGNijmegen/pathology-streaming-pipeline.
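The released streaming implementation (linked above) tiles the convolutions themselves; as a much simpler, related illustration of why end-to-end training on very large inputs needs activation-memory tricks, here is a gradient-checkpointing sketch with a standard ResNet-34. This is explicitly not the paper's streaming method, and the input size is a placeholder far below real biopsy resolution.

```python
# Hedged sketch: gradient checkpointing trades recomputation for activation memory.
import torch
from torch.utils.checkpoint import checkpoint
from torchvision.models import resnet34

model = resnet34(num_classes=2)

def forward_checkpointed(x):
    # Stem runs normally; the four residual stages are checkpointed, i.e. their
    # intermediate activations are recomputed during backward instead of stored.
    x = model.maxpool(model.relu(model.bn1(model.conv1(x))))
    for stage in (model.layer1, model.layer2, model.layer3, model.layer4):
        x = checkpoint(stage, x, use_reentrant=False)
    x = model.avgpool(x).flatten(1)
    return model.fc(x)

# A large (but far from gigapixel) crop; real biopsies are much bigger, which is why
# the paper's streaming convolutions go well beyond this simple memory-saving trick.
logits = forward_checkpointed(torch.randn(1, 3, 1024, 1024))
```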