
Model-based Personalized Synthetic MR Imaging

Posted by: Ranjan Maitra
Publication date: 2021
Research field: Mathematical Statistics
Language: English





Synthetic Magnetic Resonance (MR) imaging predicts images at new design parameter settings from a few observed MR scans. Model-based methods, which use both the physical and statistical properties underlying the MR signal and its acquisition, can predict images at any setting from as few as three scans, allowing them to be used in individualized patient- and anatomy-specific contexts. However, the estimation problem in model-based synthetic MR imaging is ill-posed, so regularization, in the form of correlated Gaussian Markov Random Fields, is imposed on the voxel-wise spin-lattice relaxation time, spin-spin relaxation time and proton density underlying the MR image. We develop theoretically sound but computationally practical matrix-free estimation methods for synthetic MR imaging. Our evaluations demonstrate the excellent ability of our methods to synthesize MR images in a clinical framework, as well as their estimation and prediction accuracy and consistency. An added strength of our model-based approach, also developed and illustrated here, is the accurate estimation of standard errors of regional means in the synthesized images.
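The prediction step of such a model-based approach can be illustrated with a short sketch. This is a minimal illustration, assuming the standard spin-echo signal equation relating image intensity to the design parameters (repetition time TR, echo time TE) and the voxel-wise tissue parameters; the GMRF regularization and matrix-free estimation the paper actually develops are not shown, and all numeric values below are hypothetical.

```python
# A minimal sketch, assuming the standard spin-echo signal equation;
# the paper's GMRF regularization and matrix-free estimation are not shown.
import numpy as np

def spin_echo_signal(rho, t1, t2, tr, te):
    """Noise-free spin-echo intensity at design setting (TR, TE).

    rho: proton density; t1: spin-lattice relaxation time;
    t2: spin-spin relaxation time (voxel-wise arrays, times in ms).
    """
    return rho * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Hypothetical voxel-wise parameter estimates obtained from a few scans;
# an image at any new (TR, TE) is predicted by plugging them back in.
rho_hat = np.full((4, 4), 0.9)
t1_hat = np.full((4, 4), 600.0)   # ms
t2_hat = np.full((4, 4), 80.0)    # ms
synthetic_image = spin_echo_signal(rho_hat, t1_hat, t2_hat, tr=2000.0, te=30.0)
print(synthetic_image[0, 0])
```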




Read also

Functional Magnetic Resonance Imaging (fMRI) maps cerebral activation in response to stimuli, but this activation is often difficult to detect, especially in low-signal contexts and single-subject studies. Accurate activation detection can be guided by the fact that very few voxels are, in reality, truly activated and that activated voxels are spatially localized, but it is challenging to incorporate both these facts. We provide a computationally feasible and methodologically sound model-based approach, implemented in the R package MixfMRI, that bounds the a priori expected proportion of activated voxels while also incorporating spatial context. Results on simulation experiments for different levels of activation detection difficulty are uniformly encouraging. The value of the methodology in low-signal and single-subject fMRI studies is illustrated on a sports imagination experiment. Concurrently, we also extend the potential use of fMRI as a clinical tool to, for example, detect awareness and improve treatment in individual patients in a persistent vegetative state, such as traumatic brain injury survivors.
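As a rough sketch of the bounded-proportion idea only (this is not the MixfMRI implementation, and the spatial-context component is omitted), one can fit a two-component uniform-beta mixture to voxel-wise p-values and clamp the activated mixing proportion at an a priori bound; all distributions and values below are illustrative assumptions.

```python
# A rough sketch only: EM for f(p) = (1 - w) * Uniform(0, 1) + w * Beta(a, 1)
# on voxel p-values, with the activated proportion w capped at w_max.
# This is not the MixfMRI implementation; spatial context is omitted.
import numpy as np

def em_bounded_mixture(p, w_max=0.05, n_iter=200):
    w, a = w_max / 2.0, 0.5              # hypothetical starting values
    logp = np.log(p)
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is activated
        act = w * a * p ** (a - 1.0)
        r = act / (act + (1.0 - w))
        # M-step: clamp the mixing weight to the a priori bound
        w = min(r.mean(), w_max)
        a = -r.sum() / (r * logp).sum()  # weighted MLE for Beta(a, 1)
    return w, a, r

rng = np.random.default_rng(0)
# toy data: 2% "activated" voxels with p-values concentrated near zero
p = np.concatenate([rng.beta(0.1, 1.0, 200), rng.uniform(0.0, 1.0, 9800)])
w, a, _ = em_bounded_mixture(p)
print(f"estimated activated proportion: {w:.3f}, beta shape: {a:.3f}")
```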
In a number of cases, the Quantile Gaussian Process (QGP) has proven effective in emulating stochastic, univariate computer model output (Plumlee and Tuo, 2014). In this paper, we develop an approach that uses this emulator within a Bayesian model calibration framework to calibrate an agent-based model of an epidemic. In addition, the approach is extended to handle the multivariate nature of the model output, which gives a time series of the count of infected individuals. The basic modeling approach is adapted from Higdon et al. (2008), using a basis representation to capture the multivariate model output. The approach is motivated with an example taken from the 2015 Ebola Challenge workshop, which simulated an Ebola epidemic to evaluate methodology.
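A minimal sketch of the basis-representation idea, in the spirit of Higdon et al. (2008): decompose the multivariate simulator output with an SVD basis and emulate each basis weight as a function of the simulator inputs. A plain RBF-kernel Gaussian process stands in for the Quantile GP here, the Bayesian calibration machinery is omitted, and the toy "epidemic curves" are purely illustrative.

```python
# A minimal sketch of the basis idea: SVD basis over training curves,
# one simple RBF-kernel GP per basis weight (standing in for the QGP);
# the Bayesian calibration machinery is omitted and all data are toy.
import numpy as np

def gp_predict(X, y, Xs, length=0.3, noise=1e-6):
    """Plain zero-mean GP regression with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length ** 2))
    K = k(X, X) + noise * np.eye(len(X))
    return k(Xs, X) @ np.linalg.solve(K, y)

rng = np.random.default_rng(1)
theta = rng.uniform(0.0, 1.0, (40, 1))       # simulator inputs (training runs)
t = np.linspace(0.0, 1.0, 100)
# toy "epidemic curves": peak location and height driven by theta
Y = (1.0 + theta) * np.exp(-((t[None, :] - 0.3 - 0.4 * theta) ** 2) / 0.01)

mu = Y.mean(axis=0)                           # basis representation of output
U, s, Vt = np.linalg.svd(Y - mu, full_matrices=False)
k_basis = 3
W = U[:, :k_basis] * s[:k_basis]              # basis weights per training run

theta_new = np.array([[0.5]])                 # emulate weights at a new input
w_new = np.column_stack([gp_predict(theta, W[:, j], theta_new)
                         for j in range(k_basis)])
curve_new = mu + w_new @ Vt[:k_basis]         # reconstructed time series
print(curve_new.shape)                        # (1, 100)
```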
The analysis of rank-ordered data has a long history in the statistical literature across a diverse range of applications. In this paper we consider the Extended Plackett-Luce model, which induces a flexible (discrete) distribution over permutations. The parameter space of this distribution is a combination of potentially high-dimensional discrete and continuous components, and this presents challenges for parameter interpretability and also for posterior computation. Particular emphasis is placed on the interpretation of the parameters in terms of observable quantities, and we propose a general framework for preserving the mode of the prior predictive distribution. Posterior sampling is achieved using an effective simulation-based approach that does not require imposing restrictions on the parameter space. Working in the Bayesian framework permits a natural representation of the posterior predictive distribution, and we draw on this distribution to address the rank aggregation problem and also to identify potential lack of model fit. The flexibility of the Extended Plackett-Luce model, along with the effectiveness of the proposed sampling scheme, is demonstrated using several simulation studies and real data examples.
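For intuition, the basic Plackett-Luce model (without the paper's extensions or Bayesian machinery) builds a permutation sequentially: at each stage the next-ranked item is chosen with probability proportional to its skill parameter among the items not yet placed. A minimal sampling sketch, with hypothetical skill values:

```python
# A minimal sketch of the basic Plackett-Luce model (not the Extended
# version in the paper): build a permutation by repeatedly choosing the
# next item with probability proportional to its skill parameter.
import numpy as np

def sample_plackett_luce(skills, rng):
    """Draw one permutation (best to worst) from a Plackett-Luce model."""
    w = np.asarray(skills, dtype=float)
    items = list(range(len(w)))
    order = []
    while items:
        probs = w[items] / w[items].sum()     # choice among remaining items
        pick = rng.choice(len(items), p=probs)
        order.append(items.pop(pick))
    return order

rng = np.random.default_rng(2)
skills = [4.0, 2.0, 1.0, 0.5]                 # hypothetical skill parameters
print(sample_plackett_luce(skills, rng))      # a random ranking of the items
```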
Identifying the most deprived regions of any country or city is key if policy makers are to design successful interventions. However, locating areas with the greatest need is often surprisingly challenging in developing countries. Due to the logistical challenges of traditional household surveying, official statistics can be slow to be updated; estimates that exist can be coarse, a consequence of prohibitive costs and poor infrastructure; and mass urbanisation can render manually surveyed figures rapidly out-of-date. Comparative judgement models, such as the Bradley--Terry model, offer a promising solution. Leveraging local knowledge, elicited via comparisons of different areas' affluence, such models can both simplify logistics and circumvent biases inherent to household surveys. Yet widespread adoption remains limited, due to the large amount of data existing approaches still require. We address this via the development of a novel Bayesian Spatial Bradley--Terry model, which substantially decreases the number of comparisons required for effective inference. This model integrates a network representation of the city or country, along with assumptions of spatial smoothness that allow deprivation in one area to be informed by neighbouring areas. We demonstrate the practical effectiveness of this method through a novel comparative judgement data set collected in Dar es Salaam, Tanzania.
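The comparative-judgement backbone here is the Bradley--Terry likelihood, under which area i is judged more deprived than area j with probability exp(λ_i) / (exp(λ_i) + exp(λ_j)). Below is a minimal sketch of maximum-likelihood fitting on toy pairwise data; the paper's network representation and spatial smoothing prior are omitted, and the data are invented for illustration.

```python
# A minimal sketch of the plain Bradley-Terry likelihood; the paper's
# network representation and spatial smoothing prior are omitted, and
# the pairwise data below are invented for illustration.
import numpy as np

def bt_prob(lam_w, lam_l):
    """P(area w judged more deprived than area l)."""
    return 1.0 / (1.0 + np.exp(-(lam_w - lam_l)))

# toy comparative-judgement data: (winner, loser) pairs over 4 areas
pairs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (1, 0), (2, 0), (3, 1)]

lam = np.zeros(4)
for _ in range(500):                  # gradient ascent on the log-likelihood
    grad = np.zeros(4)
    for w, l in pairs:
        miss = 1.0 - bt_prob(lam[w], lam[l])
        grad[w] += miss
        grad[l] -= miss
    lam += 0.1 * grad
    lam -= lam.mean()                 # identifiability: zero-mean constraint
print(np.round(lam, 2))               # larger value = judged more deprived
```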
In many real-life scenarios, system failure depends on dynamic stress-strength interference, where strength degrades and stress accumulates concurrently over time. In this paper, we consider the problem of finding an optimal replacement strategy that balances the cost of replacement with the cost of failure and results in a minimum expected cost per unit time under a cumulative damage model with strength degradation. Existing recommendations are applicable only under restricted distributional assumptions and/or with fixed strength. As theoretical evaluation of the expected cost per unit time turns out to be very complicated, a simulation-based algorithm is proposed to evaluate the expected cost rate and find the optimal replacement strategy. The proposed method is easy to implement and has a wide domain of application. For illustration, it is applied to real case studies on mailbox and cell-phone battery experiments.
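A minimal sketch of the simulation idea: under an assumed cumulative damage model with linearly degrading strength, estimate the expected cost per unit time of an age-replacement policy via the renewal-reward ratio E[cost per cycle] / E[cycle length], then scan candidate replacement ages. All distributions, rates, and costs below are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch only: Monte Carlo estimate of the long-run cost per
# unit time (renewal-reward ratio) of an age-replacement policy under a
# cumulative damage model with linearly degrading strength. Every
# distribution, rate and cost here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(3)

def one_cycle(tau, rate=1.0, dmg_mean=1.0, s0=20.0, degrade=0.5,
              c_replace=1.0, c_fail=10.0):
    """Simulate one renewal cycle; return (cost, cycle length)."""
    t, damage = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / rate)        # next shock arrival
        if t >= tau:
            return c_replace, tau               # planned replacement first
        damage += rng.exponential(dmg_mean)     # stress accumulates
        if damage >= s0 - degrade * t:          # strength has degraded
            return c_fail, t                    # failure replacement

def cost_rate(tau, n=5000):
    costs, lengths = zip(*(one_cycle(tau) for _ in range(n)))
    return np.mean(costs) / np.mean(lengths)    # E[cost] / E[cycle length]

taus = np.linspace(2.0, 20.0, 10)
rates = [cost_rate(tau) for tau in taus]
print(f"approximately optimal replacement age: {taus[np.argmin(rates)]:.1f}")
```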