
eRAKI: Fast Robust Artificial neural networks for K-space Interpolation (RAKI) with Coil Combination and Joint Reconstruction

Posted by: Henry Yu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





RAKI can perform database-free MRI reconstruction by training models using only the auto-calibration signal (ACS) from each specific scan. Because it trains a separate model for each individual coil, learning and inference with RAKI can be computationally prohibitive, particularly for large 3D datasets. In this abstract, we accelerate RAKI by more than 200-fold by directly learning a coil-combined target, and we further improve reconstruction performance using joint reconstruction across multiple echoes together with an elliptical-CAIPI sampling approach. We then deploy these improvements in quantitative imaging and rapidly obtain T2 and T2* parameter maps from a fast EPTI scan.
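
The core change from per-coil RAKI to a single coil-combined target can be illustrated with a small scan-specific k-space interpolation network. The sketch below is a minimal, hypothetical PyTorch rendition; the coil count, kernel sizes, and the R=2 undersampling pattern are assumptions, not the authors' implementation, and the joint multi-echo and elliptical-CAIPI components are omitted.

```python
# Minimal sketch: a scan-specific CNN trained only on ACS data that maps
# multi-coil undersampled k-space to a single coil-combined target, so one
# model replaces the per-coil RAKI models. All sizes are illustrative.
import torch
import torch.nn as nn

n_coils = 32                 # assumed coil count
in_ch = 2 * n_coils          # real/imag parts of each coil stacked as channels

model = nn.Sequential(
    # dilation along the undersampled axis so the kernel only touches acquired lines (R=2 assumed)
    nn.Conv2d(in_ch, 64, kernel_size=(2, 5), dilation=(2, 1)),
    nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=1),
    nn.ReLU(),
    nn.Conv2d(32, 2, kernel_size=(2, 3), dilation=(2, 1)),  # real/imag of the coil-combined output
)

def train_on_acs(acs_in, acs_target, steps=500, lr=1e-3):
    """acs_in:     (1, 2*n_coils, ky, kx) acquired ACS neighborhoods
       acs_target: (1, 2, ky', kx')       coil-combined ACS, cropped to the network's valid output size"""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(acs_in), acs_target)
        loss.backward()
        opt.step()
    return model
```

Because the model predicts the coil-combined k-space directly, inference produces the final combined image after a single inverse FFT, rather than one prediction per coil followed by coil combination.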




Read also

Purpose: Spatio-temporal encoding (SPEN) experiments can deliver single-scan MR images without folding complications and with robustness to chemical shift and susceptibility artifacts. It is here shown that further resolution improvements can arise by relying on multiple receivers to interpolate the sampled data along the low-bandwidth dimension. The ensuing multiple-sensor interpolation is akin to recently introduced SPEN interleaving procedures, albeit without requiring multiple shots. Methods: By casting SPEN's spatial rasterization in k-space, it becomes evident that local k-data interpolations enabled by multiple receivers are akin to real-space interleaving of SPEN images. The practical implementation of such a resolution-enhancing procedure becomes similar to those normally used in SMASH or SENSE, yet relaxes these methods' fold-over constraints. Results: Experiments validating the theoretical expectations were carried out on phantoms and human volunteers on a 3T scanner. The experiments showed the expected resolution enhancement at no cost in sequence complexity. With the addition of multibanding and stimulated-echo procedures, 48-slice full-brain coverage could be recorded free from distortions at sub-mm resolution in 3 s. Conclusion: Super-resolved SPEN with SENSE (SUSPENSE) achieves the goals of multi-shot SPEN interleaving within a single scan, delivering single-shot sub-mm in-plane resolution on scanners equipped with suitable multiple sensors.
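
As a generic illustration of the multi-receiver k-space interpolation this approach builds on, a GRAPPA-style least-squares fit of interpolation weights from calibration data might look like the sketch below; this only shows the principle, not the SUSPENSE reconstruction itself, and all shapes are hypothetical.

```python
# Generic k-space interpolation with coil-weighted neighbors: fit linear weights on
# calibration data, then predict missing k-space samples from acquired neighbors.
import numpy as np

def fit_weights(calib_src, calib_tgt):
    """calib_src: (n_samples, n_coils * kernel_size) acquired neighbor samples (complex)
       calib_tgt: (n_samples, n_coils)               samples to be synthesized (complex)"""
    w, *_ = np.linalg.lstsq(calib_src, calib_tgt, rcond=None)
    return w

def interpolate_missing(src, w):
    """Apply the fitted weights to the neighbor samples of missing k-space points."""
    return src @ w
```
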
Reconstruction of PET images is an ill-posed inverse problem that often requires iterative algorithms to achieve image quality good enough for reliable clinical use, at a substantial computational cost. In this paper, we treat PET reconstruction as a dense prediction problem in which large-scale contextual information is essential, and propose a novel architecture of multi-scale fully convolutional neural networks (msfCNN) for fast PET image reconstruction. The proposed msfCNN gains large receptive fields with both memory and computational efficiency by using a downscaling-upscaling structure and dilated convolutions. Instead of pooling and deconvolution, we use the periodic shuffling operation from sub-pixel convolution and its inverse to scale feature maps without losing resolution. Residual connections were added to improve training. We trained the proposed msfCNN model with simulated data and applied it to clinical PET data acquired on a Siemens mMR scanner. Results from real oncological and neurodegenerative cases show that the proposed msfCNN-based reconstruction outperforms iterative approaches in computational time while achieving comparable image quality for quantification. The proposed msfCNN model can be applied to other dense prediction tasks, and fast msfCNN-based PET reconstruction could facilitate the use of molecular imaging in interventional/surgical procedures, where cancer surgery in particular could benefit.
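
The periodic-shuffling idea described above, scaling feature maps without discarding information instead of pooling and deconvolution, can be sketched with PyTorch's pixel-shuffle operations; the channel counts and the dilated convolution below are illustrative, not the msfCNN architecture itself.

```python
# Downscale with a lossless periodic rearrangement, apply a dilated convolution for a
# larger receptive field, then invert the rearrangement back to full resolution.
import torch
import torch.nn as nn

class ShuffleDownUp(nn.Module):
    def __init__(self, channels=16, scale=2):
        super().__init__()
        self.down = nn.PixelUnshuffle(scale)   # (C, H, W) -> (C*s^2, H/s, W/s), no information lost
        self.body = nn.Conv2d(channels * scale ** 2, channels * scale ** 2,
                              kernel_size=3, padding=2, dilation=2)
        self.up = nn.PixelShuffle(scale)       # inverse rearrangement back to (C, H, W)

    def forward(self, x):
        return self.up(torch.relu(self.body(self.down(x))))

x = torch.randn(1, 16, 64, 64)
print(ShuffleDownUp()(x).shape)                # torch.Size([1, 16, 64, 64])
```
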
Sparse voxel-based 3D convolutional neural networks (CNNs) are widely used for various 3D vision tasks. They create sparse non-empty voxels from the 3D input and perform 3D convolution operations on those voxels only. We propose a simple yet effective padding scheme, interpolation-aware padding, which pads a few empty voxels adjacent to the non-empty ones and involves them in the 3D CNN computation, so that all neighboring voxels exist when computing point-wise features via trilinear interpolation. For fine-grained 3D vision tasks where point-wise features are essential, such as semantic segmentation and 3D detection, our network achieves higher prediction accuracy than existing networks that use nearest-neighbor interpolation or normalized trilinear interpolation with the zero-padding or octree-padding scheme. Through extensive comparisons on various 3D segmentation and detection tasks, we demonstrate the superiority of 3D sparse CNNs with our padding scheme in conjunction with feature interpolation.
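
To make the role of the padded voxels concrete, the point-wise trilinear interpolation that the padding scheme supports averages the eight surrounding voxel features. Below is a minimal dense-grid sketch with a hypothetical helper (a plain array stands in for the sparse voxel structure).

```python
# Trilinear interpolation of a point feature from its 8 neighboring voxel features.
# In a sparse CNN, any of these 8 voxels may be empty, which is exactly the situation
# the interpolation-aware padding scheme avoids.
import numpy as np

def trilinear_sample(grid, p):
    """grid: (D, H, W, C) voxel features; p: continuous (z, y, x) position in voxel units."""
    z0, y0, x0 = np.floor(p).astype(int)
    dz, dy, dx = p - np.floor(p)
    out = np.zeros(grid.shape[-1])
    for iz, wz in ((z0, 1 - dz), (z0 + 1, dz)):
        for iy, wy in ((y0, 1 - dy), (y0 + 1, dy)):
            for ix, wx in ((x0, 1 - dx), (x0 + 1, dx)):
                out += wz * wy * wx * grid[iz, iy, ix]
    return out

feat = trilinear_sample(np.random.rand(8, 8, 8, 4), np.array([2.3, 4.7, 1.5]))
```
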
We introduce a general method for optimizing real-space renormalization-group transformations to study the critical properties of a classical system. The scheme is based on minimizing the Kullback-Leibler divergence between the distribution of the system and the normalized normalizing factor of the transformation parametrized by a restricted Boltzmann machine. We compute the thermal critical exponent of the two-dimensional Ising model using the trained optimal projector and obtain a very accurate thermal critical exponent $y_t=1.0001(11)$ after the first step of the transformation.
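
For reference, the thermal exponent quoted above is conventionally extracted from the leading eigenvalue $\lambda_t$ of the renormalization-group transformation linearized at its fixed point, with rescaling factor $b$ (a standard relation, not specific to the RBM-based scheme):

$$ y_t = \frac{\ln \lambda_t}{\ln b}, $$

which for the two-dimensional Ising model has the exact value $y_t = 1$, consistent with the reported $1.0001(11)$.
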
We investigate pruning and quantization for deep neural networks. Our goal is to achieve extremely high sparsity for quantized networks, enabling implementation on low-cost, low-power accelerator hardware. Since practical scenarios include many dense prediction tasks, we choose stereo depth estimation as our target. We propose a two-stage pruning and quantization pipeline and introduce a Taylor Score alongside a new fine-tuning mode to achieve extreme sparsity without sacrificing performance. Our evaluation not only shows that pruning and quantization should be investigated jointly, but also that almost 99% of memory demand can be cut while hardware costs can be reduced by up to 99.9%. In addition, to compare with other works, we demonstrate that our pruning stage alone beats the state of the art when applied to ResNet on CIFAR10 and ImageNet.
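
A first-order Taylor importance criterion of the kind referenced above can be sketched as follows; this is the generic |w · ∂L/∂w| score used in Taylor-expansion pruning, and the paper's exact Taylor Score, fine-tuning mode, and quantization stage may differ.

```python
# Score every weight by the magnitude of its first-order Taylor term (weight * gradient),
# then zero out the globally least important fraction. Toy model and data for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.Conv2d(8, 1, 3, padding=1))
x, y = torch.randn(2, 3, 32, 32), torch.randn(2, 1, 32, 32)

loss = nn.functional.mse_loss(model(x), y)
loss.backward()

scores = torch.cat([(p * p.grad).abs().flatten() for p in model.parameters() if p.grad is not None])
threshold = torch.quantile(scores, 0.99)   # illustrative ~99% sparsity target
with torch.no_grad():
    for p in model.parameters():
        if p.grad is not None:
            p.mul_(((p * p.grad).abs() > threshold).float())
```
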
