
Histology to 3D In Vivo MR Registration for Volumetric Evaluation of MRgFUS Treatment Assessment Biomarkers

Published by Blake Zimmerman
Publication date: 2020
Research language: English





Advances in imaging and early cancer detection have increased interest in magnetic resonance (MR) guided focused ultrasound (MRgFUS) technologies for cancer treatment. MRgFUS ablation treatments could reduce surgical risks, preserve organ tissue/function, and improve patient quality of life. However, surgical resection and histological analysis remain the gold standard to assess cancer treatment response. For non-invasive ablation therapies such as MRgFUS, the treatment response must be determined through MR imaging biomarkers. However, current MR biomarkers are inconclusive and have not been rigorously evaluated against histology via accurate registration. Existing registration methods rely on anatomical features to directly register in vivo MR and histology. For MRgFUS applications in anatomies such as liver, kidney, or breast, anatomical features independent from treatment features are often insufficient to perform direct registration. We present a novel MR to histology registration workflow that utilizes intermediate imaging and does not rely on these independent features. The presented workflow yields an overall registration accuracy of 1.00 +/- 0.13 mm. The developed registration pipeline is used to evaluate a common MRgFUS treatment assessment biomarker against histology. Evaluating MR biomarkers against histology using this registration pipeline will facilitate validating novel MRgFUS biomarkers to improve treatment assessment without surgical intervention.
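The workflow's central idea, registering through intermediate images rather than directly from in vivo MR to histology, amounts to composing the stage-wise transforms. The sketch below illustrates this with 2D homogeneous affine matrices; the stage names and matrix values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def compose(*transforms):
    """Compose homogeneous transforms; the rightmost argument is applied first."""
    out = np.eye(3)
    for t in transforms:
        out = out @ t
    return out

# Hypothetical stage transforms (in vivo MR -> intermediate ex vivo image -> histology):
T_mr_to_exvivo = np.array([[1.0, 0.0, 2.0],
                           [0.0, 1.0, -1.0],
                           [0.0, 0.0, 1.0]])    # translation by (2, -1)
T_exvivo_to_hist = np.array([[0.0, -1.0, 0.0],
                             [1.0,  0.0, 0.0],
                             [0.0,  0.0, 1.0]])  # 90-degree rotation

# End-to-end MR -> histology mapping via the intermediate stage:
T_total = compose(T_exvivo_to_hist, T_mr_to_exvivo)
p = np.array([1.0, 1.0, 1.0])    # a point in MR space (homogeneous coords)
print(T_total @ p)               # the same point in histology space
```

In the real pipeline each stage would be a nonrigid 3D registration rather than an affine matrix, but the composition structure is the same: errors accumulate across stages, which is why the per-stage registrations must each be accurate for the overall 1.00 +/- 0.13 mm figure to hold.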


Read also

Noninvasive MR-guided focused ultrasound (MRgFUS) treatments are promising alternatives to the surgical removal of malignant tumors. A significant challenge is assessing the viability of treated tissue during and immediately after MRgFUS procedures. Current clinical assessment uses the nonperfused volume (NPV) biomarker immediately after treatment from contrast-enhanced MRI. The NPV has variable accuracy, and the use of contrast agent prevents continuing MRgFUS treatment if tumor coverage is inadequate. This work presents a novel, noncontrast, learned multiparametric MR biomarker that can be used during treatment for intratreatment assessment, validated in a VX2 rabbit tumor model. A deep convolutional neural network was trained on noncontrast multiparametric MR images using the NPV biomarker from follow-up MR imaging (3-5 days after MRgFUS treatment) as the accurate label of nonviable tissue. A novel volume-conserving registration algorithm yielded a voxel-wise correlation between treatment and follow-up NPV, providing a rigorous validation of the biomarker. The learned noncontrast multiparametric MR biomarker predicted the follow-up NPV with an average DICE coefficient of 0.71, substantially outperforming the current clinical standard (DICE coefficient = 0.53). Noncontrast multiparametric MR imaging integrated with a deep convolutional neural network provides a more accurate prediction of MRgFUS treatment outcome than current contrast-based techniques.
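The DICE coefficient used above to compare the learned biomarker (0.71) against the clinical standard (0.53) is a standard voxel-wise overlap measure between two binary masks. A minimal version, with illustrative toy masks:

```python
import numpy as np

def dice(pred, ref):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree perfectly

# Toy predicted and reference (follow-up NPV) masks:
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
ref  = np.array([[1, 0, 0],
                 [0, 1, 1]])
print(dice(pred, ref))   # 2*2 / (3+3) = 0.666...
```

Note that a voxel-wise comparison like this is only meaningful after the treatment and follow-up volumes have been registered, which is why the abstract's volume-conserving registration step is essential to the validation.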
Joint registration of a stack of 2D histological sections to recover 3D structure (3D histology reconstruction) finds application in areas such as atlas building and validation of in vivo imaging. Straightforward pairwise registration of neighbouring sections yields smooth reconstructions but has well-known problems such as the banana effect (straightening of curved structures) and z-shift (drift). While these problems can be alleviated with an external, linearly aligned reference (e.g., Magnetic Resonance images), registration is often inaccurate due to contrast differences and the strong nonlinear distortion of the tissue, including artefacts such as folds and tears. In this paper, we present a probabilistic model of spatial deformation that yields reconstructions for multiple histological stains that are jointly smooth, robust to outliers, and follow the reference shape. The model relies on a spanning tree of latent transforms connecting all the sections and slices, and assumes that the registration between any pair of images can be seen as a noisy version of the composition of (possibly inverted) latent transforms connecting the two images. Bayesian inference is used to compute the most likely latent transforms given a set of pairwise registrations between image pairs within and across modalities. Results on synthetic deformations of multiple MR modalities show that our method can accurately and robustly register multiple contrasts even in the presence of outliers. The 3D histology reconstruction of two stains (Nissl and parvalbumin) from the Allen human brain atlas shows its benefits on real data with severe distortions. We also provide the correspondence to MNI space, bridging the gap between two of the most used atlases in histology and MRI. Data is available at https://openneuro.org/datasets/ds003590 and code at https://github.com/acasamitjana/3dhirest.
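The spanning-tree idea above can be illustrated in the simplest case, a chain of sections: if latent transform `T[i]` maps section `i` to section `i+1`, the model predicts the registration between any two sections as the composition of the latent transforms along the path, inverting them when traversing the tree in the opposite direction. The chain topology and the 2D affine values below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def pairwise_transform(T, i, j):
    """Predicted transform from section i to section j along a chain of latents."""
    if i == j:
        return np.eye(3)
    if i < j:
        out = np.eye(3)
        for k in range(i, j):          # compose forward latents i, i+1, ..., j-1
            out = T[k] @ out
        return out
    # Going "down" the tree: invert the forward composition.
    return np.linalg.inv(pairwise_transform(T, j, i))

# Latent transforms between three consecutive sections (pure translations here):
T = [np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]),
     np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])]

print(pairwise_transform(T, 0, 2))   # net translation (1, 2)
print(pairwise_transform(T, 2, 0))   # its inverse: translation (-1, -2)
```

In the paper's model, observed pairwise registrations are treated as noisy versions of these path compositions, and Bayesian inference recovers the latent transforms that best explain all observations jointly.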
In the last decade, convolutional neural networks (ConvNets) have dominated and achieved state-of-the-art performance in a variety of medical imaging applications. However, the performance of ConvNets is still limited by a lack of understanding of long-range spatial relations in an image. The recently proposed Vision Transformer (ViT) for image classification uses a purely self-attention-based model that learns long-range spatial relations to focus on the relevant parts of an image. Nevertheless, ViT emphasizes low-resolution features because of its consecutive downsamplings, resulting in a lack of detailed localization information, which makes it unsuitable for image registration. Recently, several ViT-based image segmentation methods have been combined with ConvNets to improve the recovery of detailed localization information. Inspired by them, we present ViT-V-Net, which bridges ViT and ConvNet to provide volumetric medical image registration. The experimental results presented here demonstrate that the proposed architecture achieves superior performance to several top-performing registration methods.
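The "long-range spatial relations" claim rests on the self-attention mechanism at the core of ViT: every token attends to every other token regardless of distance, unlike a convolution's local receptive field. A minimal scaled dot-product self-attention in NumPy (shapes and identity projections are illustrative; this is not the ViT-V-Net implementation):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # all-pairs similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V                              # each output mixes ALL tokens

rng = np.random.default_rng(0)
n_tokens, d = 6, 4                 # e.g., 6 image patches embedded in 4 dims
X = rng.standard_normal((n_tokens, d))
Wq = Wk = Wv = np.eye(d)           # identity projections, for the sketch only
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)                   # (6, 4)
```

Because the attention weights couple every patch to every other patch in a single layer, the receptive field is global from the start; the cost is that patch embedding discards fine spatial detail, which is the localization problem the hybrid ViT-ConvNet design addresses.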
Compressed sensing (CS) has been introduced to accelerate data acquisition in MR imaging. However, CS-MRI methods suffer from detail loss at large acceleration factors and from complicated parameter selection. To address the limitations of existing CS-MRI methods, a model-driven MR reconstruction is proposed that trains a deep network, named CP-net, which is derived from the Chambolle-Pock algorithm to reconstruct in vivo MR images of human brains from highly undersampled complex k-space data acquired on different types of MR scanners. The proposed deep network can learn the proximal operators and parameters of the Chambolle-Pock algorithm. All of the experiments show that the proposed CP-net achieves more accurate MR reconstruction results, outperforming state-of-the-art methods across various quantitative metrics.
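The Chambolle-Pock algorithm alternates proximal steps on the primal and dual variables; CP-net's contribution is to replace the hand-crafted proximal operators with learned ones. For intuition, the classic hand-crafted example is the proximal operator of the l1 norm (soft-thresholding), which enforces sparsity in conventional CS-MRI. This is an illustration of what a proximal operator is, not the operator CP-net learns:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam*||.||_1: sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

x = np.array([-2.0, -0.3, 0.0, 0.5, 1.5])
print(prox_l1(x, 0.5))   # shrinks every entry toward 0 by 0.5, clipping at 0
```

Where conventional CS-MRI hand-tunes the threshold `lam` (and the sparsifying transform), CP-net learns the operator and its parameters end-to-end from data, which is what removes the complicated parameter selection the abstract criticizes.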
In brain tumor surgery, the quality and safety of the procedure can be impacted by intra-operative tissue deformation, called brain shift. Brain shift can move the surgical targets and other vital structures such as blood vessels, thus invalidating the pre-surgical plan. Intra-operative ultrasound (iUS) is a convenient and cost-effective imaging tool to track brain shift and tumor resection. Accurate image registration techniques that update pre-surgical MRI based on iUS are crucial but challenging. The MICCAI Challenge 2018 for Correction of Brain shift with Intra-Operative UltraSound (CuRIOUS2018) provided a public platform to benchmark MRI-iUS registration algorithms on newly released clinical datasets. In this work, we present the data, setup, evaluation, and results of CuRIOUS 2018, which received 6 fully automated algorithms from leading academic and industrial research groups. All algorithms were first trained with the public RESECT database, and then ranked based on a test dataset of 10 additional cases with identical data curation and annotation protocols as the RESECT database. The article compares the results of all participating teams and discusses the insights gained from the challenge, as well as future work.
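Registration challenges of this kind are typically ranked by target registration error (TRE) over expert-annotated landmarks: each MRI landmark is mapped through the candidate transform and compared to its corresponding iUS landmark. A minimal mean-TRE computation (the landmark coordinates below are illustrative, not challenge data):

```python
import numpy as np

def mean_tre(transformed_landmarks, target_landmarks):
    """Mean Euclidean distance (e.g., in mm) between corresponding landmarks."""
    d = np.linalg.norm(transformed_landmarks - target_landmarks, axis=1)
    return d.mean()

# Illustrative 3D landmarks after applying a candidate registration:
moved  = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
target = np.array([[3.0, 4.0, 0.0], [1.0, 1.0, 1.0]])
print(mean_tre(moved, target))   # (5.0 + 0.0) / 2 = 2.5
```

A landmark-based metric like this is what makes the identical annotation protocols across the RESECT training set and the 10 test cases important: the ranking is only fair if landmarks are defined the same way in both.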