
Deriving ventilation imaging from 4DCT by deep convolutional neural network

Published by: Yuncheng Zhong
Publication date: 2018
Language: English





Purpose: Functional imaging is emerging as an important tool for lung cancer treatment planning and evaluation. Compared with traditional methods such as nuclear medicine ventilation/perfusion (V/Q), positron emission tomography (PET), single photon emission computed tomography (SPECT), or magnetic resonance imaging (MRI), which use contrast agents to form 2D or 3D functional images, ventilation imaging obtained from 4DCT lung images is convenient and cost-effective because of its availability during radiation treatment planning. Current methods of obtaining ventilation images from 4DCT lung images involve deformable image registration (DIR) and a density (HU) change-based algorithm (DIR/HU); the resulting ventilation images are therefore sensitive to the choice of DIR algorithm. Methods: We propose a deep convolutional neural network (CNN)-based method that derives ventilation images from 4DCT directly, without explicit DIR, thereby improving the consistency and accuracy of the ventilation images. A total of 82 sets of 4DCT and ventilation images from patients with lung cancer were studied using this method. Results: The predicted images were comparable to the label images of the test data. The similarity index and correlation coefficient averaged over ten-fold cross-validation were 0.883 ± 0.034 and 0.878 ± 0.028, respectively. Conclusions: The results demonstrate that a deep CNN can generate ventilation imaging from 4DCT without explicit deformable image registration, reducing the associated uncertainty.
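
As a rough illustration of the approach described in the abstract, the sketch below (PyTorch) wires a small 3D encoder-decoder CNN that maps two 4DCT respiratory phases to a single-channel ventilation map, then computes a Pearson correlation against a reference image, one of the metrics reported above. The two-phase input, layer sizes, and toy tensors are assumptions for illustration; the abstract does not spell out the exact architecture.

```python
# Minimal sketch, assuming a two-phase (peak exhale / peak inhale) input and
# a simple 3D encoder-decoder; not the authors' published network.
import torch
import torch.nn as nn

class VentilationCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.Conv3d(16, 1, kernel_size=3, padding=1),  # one ventilation channel
        )

    def forward(self, x):  # x: (batch, 2 phases, D, H, W)
        return self.decoder(self.encoder(x))

model = VentilationCNN()
phases = torch.randn(1, 2, 32, 64, 64)   # toy exhale/inhale CT volumes
ventilation = model(phases)              # (1, 1, 32, 64, 64)

# Voxel-wise Pearson correlation against a (here random) reference image,
# matching the correlation metric quoted in the abstract.
ref = torch.randn_like(ventilation)
v, r = ventilation.flatten(), ref.flatten()
corr = torch.corrcoef(torch.stack([v, r]))[0, 1]
print(float(corr))
```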




Read also

Compressed sensing magnetic resonance imaging (CS-MRI) is a theoretical framework that can accurately reconstruct images from undersampled k-space data at a much lower sampling rate than the one set by the classical Nyquist-Shannon sampling theorem. CS-MRI can therefore substantially accelerate acquisition and relieve the psychological burden on patients while maintaining high imaging quality. Traditional CS-MRI reconstruction problems are solved by iterative numerical solvers, which usually suffer from expensive computational cost and the lack of accurate handcrafted priors. In this paper, inspired by the fast inference and excellent end-to-end performance of deep learning (DL), we propose a novel cascaded convolutional neural network called MD-Recon-Net to facilitate fast and accurate MRI reconstruction. In particular, unlike existing DL-based methods, which operate on single-domain data or on both domains in a fixed order, the proposed MD-Recon-Net contains two parallel and interactive branches that operate simultaneously on k-space and spatial-domain data, exploring the latent relationship between k-space and the spatial domain. Simulated experimental results show that the proposed method not only achieves visual quality competitive with several state-of-the-art methods, but also outperforms other DL-based methods in terms of model scale and computational cost.
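
A minimal sketch (PyTorch) of the dual-domain idea just described: one branch refines k-space data, the other refines the image, and the two branches exchange information through the (inverse) Fourier transform. The single block, layer widths, and toy data are illustrative assumptions, not the published MD-Recon-Net configuration.

```python
# Hypothetical dual-domain block: parallel k-space and image-domain branches
# interacting via FFT/iFFT; an assumption-level sketch, not the paper's code.
import torch
import torch.nn as nn

class DualDomainBlock(nn.Module):
    def __init__(self, ch=2):  # 2 channels: real and imaginary parts
        super().__init__()
        self.k_branch = nn.Conv2d(ch, ch, 3, padding=1)
        self.i_branch = nn.Conv2d(ch, ch, 3, padding=1)

    @staticmethod
    def to_image(k):  # complex-as-channels k-space -> image domain
        kc = torch.complex(k[:, 0], k[:, 1])
        img = torch.fft.ifft2(kc)
        return torch.stack([img.real, img.imag], dim=1)

    @staticmethod
    def to_kspace(img):
        ic = torch.complex(img[:, 0], img[:, 1])
        k = torch.fft.fft2(ic)
        return torch.stack([k.real, k.imag], dim=1)

    def forward(self, k, img):
        k_new = self.k_branch(k) + self.to_kspace(img)   # cross-domain interaction
        img_new = self.i_branch(img) + self.to_image(k)
        return k_new, img_new

block = DualDomainBlock()
k0 = torch.randn(1, 2, 64, 64)         # toy undersampled k-space
img0 = DualDomainBlock.to_image(k0)    # zero-filled reconstruction
k1, img1 = block(k0, img0)
```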
Since the advent of deep convolutional neural networks (DNNs), computer vision has seen extremely rapid progress that has led to huge advances in medical imaging. This article does not aim to cover all aspects of the field but focuses on a particular topic, image-to-image translation. Although the topic may not sound familiar, it turns out that many seemingly unrelated applications can be understood as instances of image-to-image translation. Such applications include (1) noise reduction, (2) super-resolution, (3) image synthesis, and (4) reconstruction. The same underlying principles and algorithms work for various tasks. Our aim is to introduce some of the key ideas on this topic from a uniform point of view. We introduce core ideas and jargon specific to image processing with DNNs. An intuitive grasp of the core ideas and a knowledge of the technical terms will greatly help the reader understand existing and future applications. Most recent applications that build on image-to-image translation are based on one of two fundamental architectures, called pix2pix and CycleGAN, depending on whether the available training data are paired or unpaired. We provide computer codes that implement these two architectures with various enhancements; our codes are available online under the very permissive MIT license, and we provide a hands-on tutorial for training a denoising model based on them. We hope that this article, together with the codes, will provide both an overview and the details of the key algorithms, and that it will serve as a basis for the development of new applications.
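
The paired/unpaired distinction above determines the loss structure. The sketch below (PyTorch) contrasts the two: with paired data, a pix2pix-style L1 term supervises the generator directly; with unpaired data, a CycleGAN-style cycle-consistency term stands in (adversarial terms omitted for brevity). The toy generators G and F are placeholders, not the authors' released code.

```python
# Sketch of the two loss structures, under the assumption of toy
# one-layer "generators"; not the tutorial's actual implementation.
import torch
import torch.nn as nn

G = nn.Conv2d(1, 1, 3, padding=1)   # toy generator, domain A -> B
F = nn.Conv2d(1, 1, 3, padding=1)   # toy inverse generator, B -> A
l1 = nn.L1Loss()

a = torch.randn(4, 1, 32, 32)       # images from domain A
b = torch.randn(4, 1, 32, 32)       # images from domain B

# Paired (pix2pix-style): direct supervision, assuming (a, b) correspond.
paired_loss = l1(G(a), b)

# Unpaired (CycleGAN-style): no correspondence, so require F(G(a)) ~ a
# and G(F(b)) ~ b instead.
cycle_loss = l1(F(G(a)), a) + l1(G(F(b)), b)
```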
The new era of artificial intelligence demands large-scale, ultrafast hardware for machine learning. Optical artificial neural networks process classical and quantum information at the speed of light and are compatible with silicon technology, but they lack scalability and require expensive manufacturing of many computational layers. New paradigms, such as reservoir computing and the extreme learning machine, suggest that disordered and biological materials may realize artificial neural networks with thousands of computational nodes trained only at the input and at the readout. Here we employ biological complex systems, i.e., living three-dimensional tumour brain models, and demonstrate a random neural network (RNN) trained to detect tumour morphodynamics via image transmission. The RNN, with the tumour spheroid acting as a three-dimensional deep computational reservoir, performs programmed optical functions and detects cancer morphodynamics from laser-induced hyperthermia that is inaccessible to optical imaging. Moreover, the RNN quantifies the effect of chemotherapy in inhibiting tumour growth. We realize a non-invasive smart probe for cytotoxicity assays that is at least one order of magnitude more sensitive than conventional imaging. Our random, hybrid photonic/living system is a novel artificial machine for computing and for the real-time investigation of tumour dynamics.
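
A minimal sketch (NumPy) of the computational principle behind such a reservoir: a fixed random nonlinear projection, standing in here for light propagating through the living spheroid, followed by a linear readout fitted by ridge regression; only the readout is trained. All sizes and data are synthetic placeholders.

```python
# Reservoir-computing sketch under synthetic assumptions: random fixed
# projection + trained linear readout, not the paper's optical setup.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_samples = 100, 1000, 200

W_res = rng.standard_normal((n_res, n_in))   # fixed reservoir, never trained
X = rng.standard_normal((n_samples, n_in))   # toy "transmitted images"
y = (X[:, 0] > 0).astype(float)              # toy binary label

H = np.tanh(X @ W_res.T)                     # nonlinear reservoir states
lam = 1e-2                                   # ridge regularizer
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_res), H.T @ y)

pred = (H @ W_out > 0.5).astype(float)
print("train accuracy:", (pred == y).mean())
```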
Fetal cortical plate segmentation is essential in quantitative analysis of fetal brain maturation and cortical folding. Manual segmentation of the cortical plate, or manual refinement of automatic segmentations, is tedious and time-consuming. Automatic segmentation of the cortical plate, on the other hand, is challenged by the relatively low resolution of reconstructed fetal brain MRI scans compared with the thin structure of the cortical plate, by partial voluming, and by the wide range of variations in the morphology of the cortical plate as the brain matures during gestation. To reduce the burden of manual refinement of segmentations, we have developed a new and powerful deep learning segmentation method. Our method exploits new deep attentive modules with mixed kernel convolutions within a fully convolutional neural network architecture that utilizes deep supervision and residual connections. We evaluated our method quantitatively based on several performance measures and expert evaluations. Results show that our method outperforms several state-of-the-art deep models for segmentation, as well as a state-of-the-art multi-atlas segmentation technique. We achieved an average Dice similarity coefficient of 0.87, an average Hausdorff distance of 0.96 mm, and an average symmetric surface difference of 0.28 mm on reconstructed fetal brain MRI scans of fetuses in the gestational age range of 16 to 39 weeks. With a computation time of less than 1 minute per fetal brain, our method can facilitate and accelerate large-scale studies on normal and altered fetal brain cortical maturation and folding.
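
As a rough illustration of the ingredients named above, the sketch below (PyTorch) shows a hypothetical mixed-kernel convolutional unit (parallel 3x3x3 and 5x5x5 convolutions fused inside a residual connection) together with a Dice similarity coefficient, the headline metric. The module design is an assumption for illustration, not the paper's published architecture.

```python
# Hypothetical mixed-kernel residual unit and a Dice metric; an
# assumption-level sketch, not the authors' implementation.
import torch
import torch.nn as nn

class MixedKernelBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.k3 = nn.Conv3d(ch, ch, 3, padding=1)   # 3x3x3 path
        self.k5 = nn.Conv3d(ch, ch, 5, padding=2)   # 5x5x5 path
        self.fuse = nn.Conv3d(2 * ch, ch, 1)        # 1x1x1 fusion
        self.act = nn.ReLU()

    def forward(self, x):
        mixed = torch.cat([self.k3(x), self.k5(x)], dim=1)
        return self.act(x + self.fuse(mixed))       # residual connection

def dice(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

x = torch.randn(1, 8, 16, 32, 32)                   # toy feature volume
y = MixedKernelBlock(8)(x)
print(y.shape, float(dice((y > 0).float(), (x > 0).float())))
```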
Purpose: Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative MRI image analysis in daily clinical practice. In this study, we present a deep learning-based approach for MRI INU correction. Method: We developed a residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In the cycle-GAN, an inverse transformation between the INU-uncorrected and corrected MRI images constrains the model by forcing the calculation of both an INU-corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was applied in the generator of the cycle-GAN to enhance the end-to-end transformation from raw MRI to INU-corrected MRI. A cohort of 30 abdominal patients with T1-weighted MR INU images and their corrections by a clinically established and commonly used method, N4ITK, was used as paired data to evaluate the proposed res-cycle GAN-based INU correction algorithm. Quantitative comparisons were made between the proposed method and other approaches. Result: Our res-cycle GAN-based method achieved higher accuracy and better tissue uniformity than the other algorithms. Moreover, once the model is trained, our approach can automatically generate corrected MR images in a few minutes, eliminating the need for manual parameter setting. Conclusion: In this study, a deep learning-based automatic INU correction method for MRI, res-cycle GAN, has been investigated. The results show that learning-based methods can achieve promising accuracy while greatly speeding up correction by avoiding the unintuitive parameter tuning process of N4ITK.
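
A minimal sketch (PyTorch) of the generator idea described above: residual blocks inside a fully convolutional network, with a global skip so the network learns the INU correction as a residual on top of the raw image. Depth, widths, and the 2D slice input are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a residual-block generator, assuming 2D slices and small
# widths for illustration; not the published res-cycle GAN.
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)                   # residual connection

class ResGenerator(nn.Module):
    def __init__(self, ch=32, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(1, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, x):  # raw MRI slice in, INU-corrected slice out
        return x + self.tail(self.blocks(self.head(x)))

raw = torch.randn(1, 1, 64, 64)                   # toy uncorrected T1 slice
corrected = ResGenerator()(raw)
```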