
Motion Compensated Whole-Heart Coronary Magnetic Resonance Angiography using Focused Navigation (fNAV)

Added by Christopher Roy
Publication date: 2020
Language: English





Background: Respiratory self-navigated (RSN) whole-heart CMRA is a technique that estimates and corrects for respiratory motion. However, RSN has been limited to a 1D rigid correction, which is often insufficient for patients with complex respiratory patterns. The goal of this work is therefore to improve the robustness and quality of 3D radial CMRA by incorporating both 3D motion information and nonrigid intra-acquisition correction of the data into a framework called focused navigation (fNAV). Methods: We applied fNAV to 500 data sets from a numerical simulation, 22 healthy volunteers, and 549 cardiac patients. We compared fNAV to RSN and respiratory-resolved XD-GRASP reconstructions of the same data and recorded reconstruction times. Motion accuracy was measured as the correlation between fNAV and ground truth for simulations, and between fNAV and image registration for in vivo data. Vessel sharpness was measured using Soap-Bubble. Finally, image quality analysis was performed by a blinded expert reviewer who chose the best image for each data set. Results: The reconstruction time for fNAV images was significantly longer than for RSN (6.1 +/- 2.1 minutes vs 1.4 +/- 0.3 minutes, p < 0.025) but significantly shorter than for XD-GRASP (25.6 +/- 7.1 minutes, p < 0.025). There was high correlation between the fNAV and reference displacement estimates across all data sets (0.73 +/- 0.29). For all data, fNAV led to significantly sharper vessels than all other reconstructions (p < 0.01). Finally, a blinded reviewer chose fNAV as the best image in 239 out of 571 cases (p = 10^-5). Conclusion: fNAV is a promising technique for improving free-breathing 3D radial whole-heart CMRA. This novel approach to respiratory self-navigation can derive 3D nonrigid motion estimates from an acquired 1D signal, yielding statistically significant improvements in image sharpness relative to both 1D translational correction and XD-GRASP reconstructions.
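The motion-accuracy metric reported above, the correlation between fNAV and reference displacement estimates, amounts to a Pearson correlation between two 1D displacement traces. A minimal sketch in Python, using hypothetical simulated traces rather than the authors' actual data or code:

```python
import numpy as np

def displacement_correlation(fnav_disp, ref_disp):
    """Pearson correlation between two 1D displacement traces (mm)."""
    return float(np.corrcoef(fnav_disp, ref_disp)[0, 1])

# Hypothetical traces: a smooth reference respiratory curve and a
# noisy estimate of it (illustrative values, not from the paper).
t = np.linspace(0.0, 10.0, 200)
ref = 5.0 * np.sin(2.0 * np.pi * 0.25 * t)   # ~4 s breathing period, 5 mm amplitude
est = ref + np.random.default_rng(0).normal(0.0, 0.5, t.size)
r = displacement_correlation(est, ref)       # close to 1 for this low-noise example
```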


Vessel stenosis is a major risk factor in cardiovascular disease (CVD). To analyze the degree of vessel stenosis and support treatment management, extraction of the coronary artery area from Computed Tomographic Angiography (CTA) is regarded as a key procedure. However, manual segmentation by cardiologists is a time-consuming task and presents significant inter-observer variation. Although various computer-aided approaches have been developed to support segmentation of coronary arteries in CTA, the results remain unreliable due to the complex attenuation appearance of plaques, which are the cause of the stenosis. To overcome the difficulties caused by attenuation ambiguity, in this paper a 3D multi-channel U-Net architecture is proposed for fully automatic 3D coronary artery reconstruction from CTA. Beyond using the original CTA image, the main idea of the proposed approach is to incorporate a vesselness map into the input of the U-Net, which serves as reinforcing information to highlight the tubular structure of the coronary arteries. The experimental results show that the proposed approach achieves a Dice Similarity Coefficient (DSC) of 0.8, compared to around 0.6 attained by previous CNN approaches.
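The multi-channel idea, feeding the vesselness map alongside the original CTA volume as an extra input channel, reduces to a channel-stacking preprocessing step. The function name and min-max normalization below are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def make_multichannel_input(cta_volume, vesselness_map):
    """Stack a CTA volume and its vesselness map as two input
    channels (channel-first layout, as typical for 3D U-Nets)."""
    assert cta_volume.shape == vesselness_map.shape

    def norm(v):
        # Min-max normalize so both channels share a comparable range.
        v = v.astype(float)
        return (v - v.min()) / (v.max() - v.min() + 1e-8)

    return np.stack([norm(cta_volume), norm(vesselness_map)], axis=0)

# Toy volumes standing in for a CTA scan (HU) and a vesselness map.
cta = np.random.default_rng(1).integers(-1000, 2000, (16, 32, 32)).astype(float)
vess = np.random.default_rng(2).random((16, 32, 32))
x = make_multichannel_input(cta, vess)   # shape (2, 16, 32, 32)
```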
Sara Ranjbar (2020)
Whole-brain extraction, also known as skull stripping, is a process in neuroimaging in which non-brain tissue such as the skull, eyeballs, and skin is removed from neuroimages. Skull stripping is a preliminary step in presurgical planning, cortical reconstruction, and automatic tumor segmentation. Despite a plethora of skull stripping approaches in the literature, few are sufficiently accurate for processing pathology-presenting MRIs, especially MRIs with brain tumors. In this work we propose a deep learning approach for skull stripping common MRI sequences in oncology, such as T1-weighted with gadolinium contrast (T1Gd) and T2-weighted fluid attenuated inversion recovery (FLAIR), in patients with brain tumors. We automatically created gray matter, white matter, and CSF probability masks using the SPM12 software and merged the masks into one final whole-brain mask for model training. Dice agreement, sensitivity, and specificity of the model (referred to herein as DeepBrain) were tested against manual brain masks. To assess data efficiency, we retrained our models using progressively fewer training examples and calculated average Dice scores on the test set for the models trained in each round. Further, we tested our model against MRIs of healthy brains from the LBP40A dataset. Overall, DeepBrain yielded an average Dice score of 94.5%, sensitivity of 96.4%, and specificity of 98.5% on brain tumor data. For healthy brains, model performance improved to a Dice score of 96.2%, sensitivity of 96.6%, and specificity of 99.2%. The data efficiency experiment showed that, for this specific task, comparable levels of accuracy could have been achieved with as few as 50 training samples. In conclusion, this study demonstrated that a deep learning model trained on minimally processed, automatically generated labels can generate more accurate brain masks on MRIs of brain tumor patients within seconds.
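The metrics reported above (Dice agreement, sensitivity, specificity) are all computed from the voxel-wise overlap of a predicted mask with a reference mask. A minimal sketch with a toy 2D example (the masks and sizes are illustrative, not the study's data):

```python
import numpy as np

def mask_metrics(pred, truth):
    """Dice, sensitivity, and specificity for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.count_nonzero(pred & truth)    # correctly labeled brain voxels
    fp = np.count_nonzero(pred & ~truth)   # non-brain labeled as brain
    fn = np.count_nonzero(~pred & truth)   # brain voxels missed
    tn = np.count_nonzero(~pred & ~truth)  # correctly labeled background
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity

# Toy 10x10 example: the "brain" is a 6x6 square; the prediction
# covers 30 of its 36 voxels and nothing outside it.
truth = np.zeros((10, 10), bool); truth[2:8, 2:8] = True
pred = np.zeros((10, 10), bool);  pred[2:8, 2:7] = True
dice, sens, spec = mask_metrics(pred, truth)   # dice ≈ 0.909, spec = 1.0
```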
Purpose: To develop an MRI acquisition and reconstruction framework for volumetric cine visualisation of the fetal heart and great vessels in the presence of maternal and fetal motion. Methods: Four-dimensional depiction was achieved using a highly-accelerated multi-planar real-time balanced steady state free precession acquisition combined with retrospective image-domain techniques for motion correction, cardiac synchronisation and outlier rejection. The framework was evaluated and optimised using a numerical phantom, and evaluated in a study of 20 mid- to late-gestational age human fetal subjects. Reconstructed cine volumes were evaluated by experienced cardiologists and compared with matched ultrasound. A preliminary assessment of flow-sensitive reconstruction using the velocity information encoded in the phase of dynamic images is included. Results: Reconstructed cine volumes could be visualised in any 2D plane without the need for highly-specific scan plane prescription prior to acquisition or for maternal breath hold to minimise motion. Reconstruction was fully automated aside from user-specified masks of the fetal heart and chest. The framework proved robust when applied to fetal data, and simulations confirmed that spatial and temporal features could be reliably recovered. Expert evaluation suggested the reconstructed volumes can be used for comprehensive assessment of the fetal heart, either as an adjunct to ultrasound or in combination with other MRI techniques. Conclusion: The proposed methods show promise as a framework for motion-compensated 4D assessment of the fetal heart and great vessels.
Relaxometry studies in preterm and at-term newborns have provided insight into brain microstructure, thus opening new avenues for studying normal brain development and supporting diagnosis in equivocal neurological situations. However, such quantitative techniques require long acquisition times and therefore cannot be straightforwardly translated to in utero brain developmental studies. In clinical fetal brain magnetic resonance imaging routine, 2D low-resolution T2-weighted fast spin echo sequences are used to minimize the effects of unpredictable fetal motion during acquisition. As super-resolution techniques make it possible to reconstruct a 3D high-resolution volume of the fetal brain from clinical low-resolution images, their combination with quantitative acquisition schemes could provide fast and accurate T2 measurements. In this context, the present work demonstrates the feasibility of using super-resolution reconstruction from conventional T2-weighted fast spin echo sequences for 3D isotropic T2 mapping. A quantitative magnetic resonance phantom was imaged using a clinical T2-weighted fast spin echo sequence at variable echo time to allow for super-resolution reconstruction at every echo time and subsequent T2 mapping of samples whose relaxometric properties are close to those of fetal brain tissue. We demonstrate that this approach is highly repeatable, accurate and robust when using six echo times (total acquisition time under 9 minutes) as compared to gold-standard single-echo spin echo sequences (several hours for a single 2D slice).
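T2 mapping from images acquired at several echo times reduces, voxel by voxel, to fitting the mono-exponential decay S(TE) = S0 * exp(-TE / T2). A minimal log-linear fit sketch; the six echo times and the noiseless simulated sample below are illustrative assumptions, not the study's protocol:

```python
import numpy as np

def fit_t2(echo_times_ms, signals):
    """Mono-exponential T2 fit, S(TE) = S0 * exp(-TE / T2),
    via linear regression on the log-signal."""
    te = np.asarray(echo_times_ms, float)
    log_s = np.log(np.asarray(signals, float))
    slope, intercept = np.polyfit(te, log_s, 1)   # log S = -TE/T2 + log S0
    return -1.0 / slope, np.exp(intercept)        # (T2 in ms, S0)

# Simulated noiseless sample with T2 = 120 ms at six echo times.
te = np.array([20.0, 50.0, 80.0, 110.0, 140.0, 170.0])
sig = 1000.0 * np.exp(-te / 120.0)
t2, s0 = fit_t2(te, sig)   # recovers T2 ≈ 120 ms, S0 ≈ 1000
```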
Eye movements, blinking and other motion during the acquisition of optical coherence tomography (OCT) can lead to artifacts when the data are processed into OCT angiography (OCTA) images. Affected scans appear as high-intensity (white) or missing (black) regions, resulting in lost information. The aim of this research is to fill these gaps using a deep generative model for OCT-to-OCTA image translation relying on a single intact OCT scan. To this end, a U-Net is trained to extract the angiographic information from OCT patches. At inference, a detection algorithm finds outlier OCTA scans based on their surroundings, which are then replaced by the output of the trained network. We show that generative models can augment the missing scans. The augmented volumes could then be used for 3D segmentation or to increase the diagnostic value.
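The detection step, flagging white or black artifact B-scans relative to their surroundings, can be approximated with a simple intensity-based outlier test. The z-score statistic and threshold below are assumptions for illustration, not the paper's actual algorithm:

```python
import numpy as np

def find_outlier_scans(volume, z_thresh=3.0):
    """Flag B-scans whose mean intensity deviates strongly from the
    volume-wide distribution (saturated "white" or empty "black" scans)."""
    means = volume.reshape(volume.shape[0], -1).mean(axis=1)
    z = (means - means.mean()) / (means.std() + 1e-8)
    return np.flatnonzero(np.abs(z) > z_thresh)

# Toy OCTA volume of 50 B-scans with two simulated motion artifacts.
rng = np.random.default_rng(3)
vol = rng.normal(0.5, 0.02, (50, 64, 64))
vol[17] = 1.0   # saturated "white" scan, e.g. from a blink
vol[33] = 0.0   # missing "black" scan
bad = find_outlier_scans(vol)
```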