
ENLIVE: An Efficient Nonlinear Method for Calibrationless and Robust Parallel Imaging

Published by Hans Christian Martin Holme
Publication date: 2017
Research field: Physics
Paper language: English

Robustness against data inconsistencies, imaging artifacts and acquisition speed are crucial factors limiting the possible range of applications for magnetic resonance imaging (MRI). Therefore, we report a novel calibrationless parallel imaging technique which simultaneously estimates coil profiles and image content in a relaxed forward model. Our method is robust against a wide class of data inconsistencies, minimizes imaging artifacts and is comparably fast, combining important advantages of many conceptually different state-of-the-art parallel imaging approaches. Depending on the experimental setting, data can be undersampled well below the Nyquist limit. Even high acceleration factors yield excellent imaging results while remaining robust to noise and to phase singularities in the image domain, as we show on several datasets. Moreover, our method successfully reconstructs acquisitions with an insufficient field-of-view. We further compare our approach to ESPIRiT and SAKE using spin-echo and gradient-echo MRI data from the human head and knee. In addition, we show its applicability to non-Cartesian imaging on radial FLASH cardiac MRI data. Using theoretical considerations, we show that ENLIVE can be related to a low-rank formulation of blind multi-channel deconvolution, explaining why it inherently promotes low-rank solutions.
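
As a rough illustration of the relaxed forward model described above, the following Python sketch evaluates a SENSE-type model in which K image/coil-profile pairs jointly produce the multi-channel k-space data. The function name, array shapes, and sampling mask are illustrative assumptions only; this is not the ENLIVE implementation, which solves the corresponding regularized inverse problem rather than just the forward map.

```python
import numpy as np

def enlive_style_forward(images, coils, mask):
    """Relaxed forward model: K image/coil-profile pairs contribute
    jointly to the multi-channel k-space data.

    images: (K, Ny, Nx) complex image components m_k
    coils:  (K, Nc, Ny, Nx) complex coil profiles c_{j,k}
    mask:   (Ny, Nx) binary undersampling pattern P
    returns (Nc, Ny, Nx) undersampled k-space data y_j
    """
    # coil image of channel j: sum_k m_k * c_{j,k}
    coil_images = np.einsum('kyx,kcyx->cyx', images, coils)
    # Fourier encoding followed by undersampling
    return np.fft.fft2(coil_images, norm='ortho') * mask

# toy usage with random data (shapes are illustrative only)
K, Nc, Ny, Nx = 2, 8, 64, 64
rng = np.random.default_rng(0)
m = rng.standard_normal((K, Ny, Nx)) + 1j * rng.standard_normal((K, Ny, Nx))
c = rng.standard_normal((K, Nc, Ny, Nx)) + 1j * rng.standard_normal((K, Nc, Ny, Nx))
mask = (rng.random((Ny, Nx)) < 0.3).astype(float)   # roughly 3-fold undersampling
y = enlive_style_forward(m, c, mask)
print(y.shape)   # (8, 64, 64)
```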

Read also

121 - Martin Uecker 2017
Modern reconstruction methods for magnetic resonance imaging (MRI) exploit the spatially varying sensitivity profiles of receive-coil arrays as an additional source of information. This makes it possible to reduce the number of time-consuming Fourier-encoding steps by undersampling. The receive sensitivities are a priori unknown and are influenced by the geometry and electric properties of the (moving) subject. For optimal results, they need to be estimated jointly with the image from the same undersampled measurement data. Formulated as an inverse problem, this leads to a bilinear reconstruction problem related to multi-channel blind deconvolution. In this work, we discuss some recently developed approaches for the solution of this problem.
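
The bilinear structure can be made concrete with a deliberately simple alternating gradient-descent sketch for the single-set model y_j = P F(m · c_j). The step size, initialization, and lack of any regularization are assumptions for illustration; they do not correspond to the regularized Newton-type solvers discussed in the work above.

```python
import numpy as np

def F(x):
    return np.fft.fft2(x, norm='ortho')     # orthonormal FFT, so its adjoint is ifft2

def FH(x):
    return np.fft.ifft2(x, norm='ortho')

def joint_gradient_descent(y, mask, iters=200, step=0.1):
    """Gradient descent on sum_j || mask * F(m * c_j) - y_j ||^2,
    updating image m and coil profiles c_j jointly.
    Purely illustrative: no regularization, fixed step size."""
    Nc, Ny, Nx = y.shape
    m = np.ones((Ny, Nx), dtype=complex)            # image estimate
    c = np.ones((Nc, Ny, Nx), dtype=complex) / Nc   # coil profile estimates
    for _ in range(iters):
        res = mask * F(m[None] * c) - y             # per-coil data residual
        back = FH(mask * res)                       # adjoint of sampling + FFT
        m -= step * np.sum(np.conj(c) * back, axis=0)   # gradient w.r.t. m
        c -= step * np.conj(m)[None] * back             # gradient w.r.t. c_j
    return m, c
```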
Software engineers often have to estimate the performance of a software system before having full knowledge of the system parameters, such as workload and operational profile. These uncertain parameters inevitably affect the accuracy of quality evaluations and the ability to judge whether the system can continue to fulfil its performance requirements if the parameters turn out differently than expected. Previous work has addressed this problem by modelling the potential values of uncertain parameters as probability distribution functions and estimating the robustness of the system using Monte Carlo-based methods. These approaches require a large number of samples, which results in high computational cost and long waiting times. To address the computational inefficiency of existing approaches, we employ Polynomial Chaos Expansion (PCE) as a rigorous method for uncertainty propagation and further extend its use to robust performance estimation. The aim is to assess whether the software system is robust, i.e., whether it can withstand possible changes in parameter values and continue to meet its performance requirements. PCE is a very efficient technique and requires significantly fewer computations to accurately estimate the distribution of performance indices. Through three very different case studies, drawn from different phases of software development and from heterogeneous application domains, we show that PCE can accurately (>97%) estimate the robustness of various performance indices and saves up to 225 hours of performance evaluation time compared to Monte Carlo simulation.
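
As a toy illustration of how a polynomial chaos expansion yields the moments of a performance index cheaply, the sketch below expands a made-up response-time model of a single standard-normal uncertain parameter in probabilists' Hermite polynomials. The model, expansion degree, and sample count are assumptions; real applications involve several uncertain parameters and the performance models of the case studies above.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial

def pce_moments(model, degree=4, n_samples=200, seed=0):
    """Minimal 1-D PCE sketch: fit the model output onto probabilists'
    Hermite polynomials He_0..He_degree by least squares, then read the
    mean and variance directly off the coefficients."""
    rng = np.random.default_rng(seed)
    xi = rng.standard_normal(n_samples)          # uncertain parameter samples
    y = model(xi)                                # performance index samples
    V = He.hermevander(xi, degree)               # Hermite design matrix
    coeffs, *_ = np.linalg.lstsq(V, y, rcond=None)
    mean = coeffs[0]
    var = sum(coeffs[n] ** 2 * factorial(n) for n in range(1, degree + 1))
    return mean, var

# toy example: response time grows nonlinearly with an uncertain workload factor
mean, var = pce_moments(lambda x: 100 + 15 * x + 3 * x ** 2)
print(f"mean={mean:.1f} ms, std={np.sqrt(var):.1f} ms")  # analytic: mean 103, std = sqrt(243) ≈ 15.6
```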
Wave-CAIPI MR imaging is a 3D imaging technique which can uniformize the g-factor maps and significantly reduce the g-factor penalty at high acceleration factors. However, calculating the average g-factor penalty when optimizing the parameters of Wave-CAIPI is time-consuming. In this paper, we propose a novel fast method for calculating the average g-factor in Wave-CAIPI imaging: the g-factor at an arbitrary (e.g. the central) position is calculated separately and then used to approximate the average g-factor via a Taylor linear approximation. Verification experiments demonstrate that the average g-factors calculated by the proposed method are consistent with both the previous time-consuming theoretical calculation method and the conventional pseudo multiple replica method. Comparison experiments show that the proposed method is on average about 1000 times faster than the previous theoretical calculation method and about 1700 times faster than the conventional pseudo multiple replica method.
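
For context, the sketch below computes the standard SENSE g-factor for a single group of aliased voxels, i.e. the kind of per-position value that the paper's Taylor-based approximation extends to an average. The Wave-CAIPI point-spread behaviour itself is not modelled here, and the coil count, acceleration factor, and random sensitivities are illustrative assumptions.

```python
import numpy as np

def voxel_g_factor(S, Psi=None):
    """SENSE g-factor for one group of aliased voxels.
    S:   (Nc, R) coil sensitivities of the R voxels that alias together
    Psi: (Nc, Nc) noise covariance (identity if None)
    Returns the g-factor of each of the R voxels."""
    Nc, R = S.shape
    Psi_inv = np.eye(Nc) if Psi is None else np.linalg.inv(Psi)
    A = S.conj().T @ Psi_inv @ S          # (R, R) encoding correlation matrix
    A_inv = np.linalg.inv(A)
    return np.sqrt(np.real(np.diag(A_inv) * np.diag(A)))

# toy usage: 8 coils, acceleration R = 3, random complex sensitivities
rng = np.random.default_rng(1)
S = rng.standard_normal((8, 3)) + 1j * rng.standard_normal((8, 3))
print(voxel_g_factor(S))
```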
Optimal data partitioning in parallel and distributed implementations of clustering algorithms is a necessary computation, as it ensures independent task completion, fair distribution, fewer affected points, and better and faster merging. Although partitioning with a Kd-tree is conventionally used in academia, it suffers from performance degradation and bias (unequal distribution) as the dimensionality of the data increases, and hence is not suitable for practical use in industry, where dimensionality can be of the order of hundreds to thousands. To address these issues we propose two new partitioning techniques based on existing mathematical models and study their feasibility, performance (bias and partitioning speed), and possible variants for choosing initial seeds. The first method uses an n-dimensional hashed grid, mapping the points in space to a set of cubes that hash the points. The second method uses a tree of Voronoi planes, where each plane corresponds to a partition. We found that the grid-based approach was computationally impractical, while the tree of Voronoi planes (using scalable K-Means++ initial seeds) drastically outperformed the Kd-tree method as dimensionality increased.
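
A minimal sketch of the hashed-grid idea, assuming uniform cubic cells and a plain Python dictionary as the hash table; the seeding variants and the Voronoi-plane tree from the paper are not reproduced. Running it in ten dimensions already hints at why the number of grid cells becomes impractical as dimensionality grows.

```python
import numpy as np
from collections import defaultdict

def grid_partition(points, cell_size):
    """Hashed-grid partitioning: map each point to the integer coordinates
    of the axis-aligned cube (cell) containing it and collect point
    indices per cell via a hash table."""
    cells = defaultdict(list)
    for i, p in enumerate(points):
        key = tuple(np.floor(p / cell_size).astype(int))
        cells[key].append(i)
    return cells

# toy usage: 1000 points in 10 dimensions, cell edge length 0.5
pts = np.random.default_rng(2).random((1000, 10))
parts = grid_partition(pts, cell_size=0.5)
print(len(parts), "non-empty cells")   # most cells hold very few points in high dimensions
```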
197 - Tao Wang, Wenjun Xia, Zexin Lu 2021
In the presence of metallic implants, the image quality of computed tomography (CT) is heavily degraded. With the rapid development of deep learning, several network models have been proposed for metal artifact reduction (MAR). Since dual-domain MAR methods can leverage hybrid information from both the sinogram and image domains, they have significantly improved performance compared to single-domain methods. However, current dual-domain methods usually operate on the two domains in a specific order, which implicitly imposes a priority prior on MAR and may ignore the latent information interaction between the domains. To address this problem, we propose a novel interactive dual-domain parallel network for CT MAR, dubbed IDOL-Net. Different from existing dual-domain methods, the proposed IDOL-Net is composed of two modules. A disentanglement module generates a high-quality prior sinogram and image as complementary inputs. The follow-up refinement module consists of two parallel and interactive branches that operate simultaneously on the image and sinogram domains, fully exploiting the latent information interaction between them. Simulated and clinical results demonstrate that the proposed IDOL-Net outperforms several state-of-the-art models both qualitatively and quantitatively.
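
To make the idea of parallel, interacting branches concrete, here is a heavily simplified PyTorch sketch in which an image branch and a sinogram branch exchange features by concatenation instead of running in a fixed sequential order. The layer sizes, the exchange rule, the identical grid sizes, and the missing disentanglement/prior stage are all assumptions for illustration, not the published IDOL-Net architecture.

```python
import torch
import torch.nn as nn

class ParallelInteractiveBranches(nn.Module):
    """Two parallel branches (image domain and sinogram domain) that
    exchange features once via concatenation; a toy sketch only."""
    def __init__(self, ch=32):
        super().__init__()
        self.img_in = nn.Conv2d(1, ch, 3, padding=1)
        self.sino_in = nn.Conv2d(1, ch, 3, padding=1)
        # after the exchange, each branch sees its own features plus the other's
        self.img_mix = nn.Conv2d(2 * ch, 1, 3, padding=1)
        self.sino_mix = nn.Conv2d(2 * ch, 1, 3, padding=1)

    def forward(self, image, sinogram):
        fi = torch.relu(self.img_in(image))
        fs = torch.relu(self.sino_in(sinogram))
        # interaction between domains instead of a fixed processing order
        image_out = self.img_mix(torch.cat([fi, fs], dim=1))
        sino_out = self.sino_mix(torch.cat([fs, fi], dim=1))
        return image_out, sino_out

# toy usage with matching spatial sizes (real image and sinogram grids differ)
net = ParallelInteractiveBranches()
out_img, out_sino = net(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(out_img.shape, out_sino.shape)
```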