
Exploring linearity of deep neural network trained QSM: QSMnet+

Published by Woojin Jung
Publication date: 2019
Research field: Electronic engineering
Language: English





Recently, a deep neural network-powered quantitative susceptibility mapping (QSM) method, QSMnet, successfully performed the ill-conditioned dipole inversion in QSM and generated high-quality susceptibility maps. In this paper, the network, which was trained on healthy volunteer data, is evaluated on hemorrhagic lesions that have substantially higher susceptibility than healthy tissues in order to test the linearity of QSMnet with respect to susceptibility. The results show that QSMnet underestimates susceptibility in hemorrhagic lesions, revealing degraded linearity of the network in the untrained susceptibility range. To overcome this limitation, a data augmentation method is proposed to generalize the network to a wider range of susceptibility. The newly trained network, referred to as QSMnet+, is assessed on computer-simulated lesions with an extended susceptibility range (-1.4 ppm to +1.4 ppm) and in twelve hemorrhagic patients. The simulation results demonstrate improved linearity of QSMnet+ over QSMnet (root mean square error of QSMnet+: 0.04 ppm vs. QSMnet: 0.36 ppm). When applied to patient data, QSMnet+ maps show less noticeable artifacts than conventional QSM maps. Moreover, the susceptibility values of QSMnet+ in hemorrhagic lesions are better matched to those of the conventional QSM method than those of QSMnet when analyzed using linear regression (QSMnet+: slope = 1.05, intercept = -0.03, R² = 0.93; QSMnet: slope = 0.68, intercept = 0.06, R² = 0.86), confirming the improved linearity of QSMnet+. This study demonstrates the importance of the training data range in deep neural network-powered parametric mapping and suggests the data augmentation approach for generalizing the network. The new network is applicable to susceptibility quantification over a wide range of values.
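The augmentation idea can be sketched simply: scale the training susceptibility maps and regenerate the matching field maps through the k-space dipole forward model, so the network sees susceptibility values beyond the healthy-tissue range. The snippet below is a minimal illustration of that idea, not the authors' code; the scaling range, grid size, and toy lesion value are assumptions for demonstration.

```python
import numpy as np

def dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0), b0_dir=(0.0, 0.0, 1.0)):
    """Unit dipole kernel D(k) = 1/3 - (k . B0)^2 / |k|^2 on the FFT grid."""
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2.flat[0] = np.inf                       # avoid 0/0 at the k = 0 term
    D = 1.0 / 3.0 - (kx * b0_dir[0] + ky * b0_dir[1] + kz * b0_dir[2])**2 / k2
    D.flat[0] = 0.0                           # common convention for D(k = 0)
    return D

def augment_susceptibility(chi, scale_range=(0.5, 3.0), rng=np.random.default_rng()):
    """Scale a susceptibility map (ppm) and regenerate its field map."""
    chi_aug = rng.uniform(*scale_range) * chi
    D = dipole_kernel(chi_aug.shape)
    field_aug = np.real(np.fft.ifftn(D * np.fft.fftn(chi_aug)))
    return chi_aug, field_aug

# Toy example: a cubic "lesion" of 1.2 ppm inside a 64^3 volume
chi = np.zeros((64, 64, 64))
chi[24:40, 24:40, 24:40] = 1.2
chi_aug, field_aug = augment_susceptibility(chi)
```

Scaling both the susceptibility map and its regenerated field preserves the physical consistency of each training pair while widening the label distribution the network is exposed to.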




Read also

Deep neural networks have demonstrated promising potential for the field of medical image reconstruction. In this work, an algorithm for an MRI reconstruction method referred to as quantitative susceptibility mapping (QSM) has been developed using a deep neural network in order to perform dipole deconvolution, which restores the magnetic susceptibility source from an MRI field map. Previous approaches to QSM require multiple-orientation data (e.g. Calculation of Susceptibility through Multiple Orientation Sampling, or COSMOS) or regularization terms (e.g. Truncated K-space Division, or TKD; Morphology Enabled Dipole Inversion, or MEDI) to solve the ill-conditioned deconvolution problem. Unfortunately, they either require long multiple-orientation scans or suffer from artifacts. To overcome these shortcomings, a deep neural network, QSMnet, is constructed to generate a high-quality susceptibility map from single-orientation data. The network has a modified U-net structure and is trained using gold-standard COSMOS QSM maps. 25 datasets from 5 subjects (5 orientations each) were used for patch-wise training after doubling the data using augmentation. Two additional 5-orientation datasets were used for validation and testing (one dataset each). The QSMnet maps of the test dataset were compared with those from TKD and MEDI for image quality and for consistency across multiple head orientations. Quantitative and qualitative image quality comparisons demonstrate that the QSMnet results have superior image quality to those of TKD or MEDI and comparable image quality to those of COSMOS. Additionally, QSMnet maps show substantially better consistency across the multiple orientations than those from TKD or MEDI. As a preliminary application, the network was tested on two patients. The QSMnet maps showed similar lesion contrasts to those from MEDI, demonstrating potential for future applications.
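For context, the TKD baseline mentioned above regularizes the ill-conditioned dipole inversion simply by truncating small values of the dipole kernel before dividing in k-space. A minimal sketch follows; the threshold value and the B0-along-z assumption are typical choices, not taken from the paper. QSMnet learns the inversion from COSMOS-labeled patches instead of relying on such a hand-tuned truncation.

```python
import numpy as np

def tkd_inversion(field, threshold=0.19, voxel_size=(1.0, 1.0, 1.0)):
    """Truncated k-space division: chi(k) = B(k) / D(k), with |D| clipped from below."""
    axes = [np.fft.fftfreq(n, d=v) for n, v in zip(field.shape, voxel_size)]
    kx, ky, kz = np.meshgrid(*axes, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2.flat[0] = np.inf                       # avoid 0/0 at the k = 0 term
    D = 1.0 / 3.0 - kz**2 / k2                # dipole kernel, B0 along z
    sgn = np.where(D >= 0, 1.0, -1.0)         # treat D = 0 as +threshold
    D_trunc = np.where(np.abs(D) < threshold, sgn * threshold, D)
    return np.real(np.fft.ifftn(np.fft.fftn(field) / D_trunc))
```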
J. Rivet, A. Taliercio, C. Fang (2020)
Digital hologram rendering can be performed by a convolutional neural network trained with image pairs calculated by numerical wave propagation from sparse generating images. 512-by-512 pixel digital Gabor magnitude holograms are successfully estimated from experimental interferograms by a standard U-Net trained with 50,000 synthetic image pairs over 70 epochs.
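Training pairs of this kind can be synthesized with standard numerical wave propagation. The sketch below generates an in-line (Gabor) magnitude hologram from a sparse object using the angular spectrum method; the wavelength, pixel pitch, propagation distance, and toy object are illustrative assumptions, not values from the paper.

```python
import numpy as np

def angular_spectrum(field, wavelength, pitch, z):
    """Propagate a complex 2-D field over distance z (angular spectrum method)."""
    ny, nx = field.shape
    fx, fy = np.meshgrid(np.fft.fftfreq(nx, d=pitch), np.fft.fftfreq(ny, d=pitch))
    arg = np.maximum(1.0 - (wavelength * fx)**2 - (wavelength * fy)**2, 0.0)
    H = np.exp(1j * 2.0 * np.pi / wavelength * np.sqrt(arg) * z)
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Sparse generating image -> 512-by-512 Gabor magnitude hologram
obj = np.ones((512, 512), dtype=complex)
obj[200:220, 250:260] = 0.2                   # a small absorbing feature
hologram = np.abs(angular_spectrum(obj, 532e-9, 6.45e-6, 0.05))**2
```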
We customize an end-to-end image compression framework for retinal OCT images based on deep convolutional neural networks (CNNs). The customized compression scheme consists of three parts: data preprocessing, compression CNNs, and reconstruction CNNs. The data preprocessing module reduces the speckle noise of the OCT images and segments out the region of interest. We added customized skip connections between the compression CNNs and the reconstruction CNNs to preserve detail information and trained the two networks together with the semantically segmented image patches from the data preprocessing module. To make the two networks sensitive to both low-frequency and high-frequency information, we adopted an objective function with two parts: a PatchGAN discriminator to judge the high-frequency information and a differentiable MS-SSIM penalty to evaluate the low-frequency information. The proposed framework was trained and evaluated on a publicly available OCT dataset. The evaluation showed above 99% similarity in terms of multi-scale structural similarity (MS-SSIM) at a compression ratio as high as 40. Furthermore, the reconstructed images at compression ratio 80 from the proposed framework have even better quality than those at compression ratio 20 from JPEG by visual comparison. The test results outperform JPEG in terms of both MS-SSIM and visual quality, and the gap widens as the compression ratio increases. Our preliminary results indicate the large potential of deep neural networks for customized medical image compression.
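The two-part objective can be sketched compactly. The example below is a hedged sketch, assuming PyTorch and the pytorch_msssim package for the differentiable MS-SSIM term; the weighting alpha and the discriminator interface are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn
from pytorch_msssim import ms_ssim   # assumed external dependency

bce = nn.BCEWithLogitsLoss()

def generator_loss(disc_logits_fake, reconstruction, target, alpha=0.84):
    """MS-SSIM fidelity (low frequencies) + PatchGAN adversarial term (high frequencies)."""
    adv = bce(disc_logits_fake, torch.ones_like(disc_logits_fake))   # try to fool D
    fidelity = 1.0 - ms_ssim(reconstruction, target, data_range=1.0)
    return alpha * fidelity + (1.0 - alpha) * adv

def discriminator_loss(disc_logits_real, disc_logits_fake):
    """Standard PatchGAN discriminator objective on per-patch logits."""
    real = bce(disc_logits_real, torch.ones_like(disc_logits_real))
    fake = bce(disc_logits_fake, torch.zeros_like(disc_logits_fake))
    return 0.5 * (real + fake)
```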
Deep neural networks have emerged as effective tools for computational imaging, including quantitative phase microscopy of transparent samples. To reconstruct phase from intensity, current approaches rely on supervised learning with training examples; consequently, their performance is sensitive to how well the training and imaging settings match. Here we propose a new approach to phase microscopy that uses an untrained deep neural network for measurement formation, encapsulating the image prior and the imaging physics. Our approach does not require any training data and simultaneously reconstructs the sought phase and the pupil-plane aberrations by fitting the weights of the network to the captured images. To demonstrate the approach experimentally, we reconstruct quantitative phase from through-focus images blindly (i.e., with no explicit knowledge of the aberrations).
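A minimal sketch of the untrained-network idea: a randomly initialized CNN outputs the phase, a differentiable physics model maps it to predicted intensities, and the network weights are fitted to the captured images alone. The network, the toy defocus operator, and the image sizes below are stand-ins for illustration, not the authors' model.

```python
import torch
import torch.nn as nn

net = nn.Sequential(                              # stand-in phase-generator CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def forward_model(phase, defocus=4.0):
    """Toy defocus operator: phase object -> predicted intensity image."""
    field = torch.complex(torch.cos(phase), torch.sin(phase))        # exp(i*phase)
    f = torch.fft.fftfreq(phase.shape[-1])
    fx, fy = torch.meshgrid(f, f, indexing="ij")
    H = torch.exp(torch.complex(torch.zeros_like(fx), defocus * (fx**2 + fy**2)))
    return torch.abs(torch.fft.ifft2(torch.fft.fft2(field) * H))**2

z_in = torch.randn(1, 1, 64, 64)                  # fixed random network input
measured = torch.rand(1, 1, 64, 64)               # captured intensity (toy stand-in)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):                              # fit weights to the measurement only
    opt.zero_grad()
    phase = net(z_in)
    loss = torch.mean((forward_model(phase) - measured)**2)
    loss.backward()
    opt.step()
```

Because only the measurement and the physics model constrain the fit, no training dataset is needed; the network structure itself acts as the image prior.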
This paper proposes particle volume reconstruction directly from an in-line hologram using a deep neural network. Digital holographic volume reconstruction conventionally uses multiple diffraction calculations to obtain sectional reconstructed images from an in-line hologram, followed by detection of the lateral and axial positions and the sizes of particles using focus metrics. However, the axial resolution is limited by the numerical aperture of the optical system, and the process is time-consuming. The method proposed here can simultaneously detect the lateral and axial positions and the particle sizes via a deep neural network (DNN). We numerically investigated the performance of the DNN in terms of the errors in the detected positions and sizes. The calculation time is faster than that of conventional diffraction-based approaches.
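For comparison, the conventional pipeline described above amounts to back-propagating the hologram to a stack of z-slices and scoring each slice with a focus metric, roughly as sketched below; the wavelength, pixel pitch, z-range, and the variance focus metric are illustrative choices. The proposed DNN replaces this per-slice search with a single forward pass.

```python
import numpy as np

def fresnel_propagate(field, wavelength, pitch, z):
    """Fresnel transfer-function propagation of a 2-D field over distance z."""
    ny, nx = field.shape
    fx, fy = np.meshgrid(np.fft.fftfreq(nx, d=pitch), np.fft.fftfreq(ny, d=pitch))
    H = np.exp(-1j * np.pi * wavelength * z * (fx**2 + fy**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def focus_curve(hologram, wavelength=532e-9, pitch=6.45e-6,
                zs=np.linspace(0.01, 0.10, 50)):
    """Variance focus metric per back-propagated slice; peaks mark axial positions."""
    return [np.var(np.abs(fresnel_propagate(hologram, wavelength, pitch, -z)))
            for z in zs]
```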