
A Practical Model-based Segmentation Approach for Accurate Activation Detection in Single-Subject functional Magnetic Resonance Imaging Studies

Published by: Ranjan Maitra
Publication date: 2021
Research field: Mathematical Statistics
Language: English





Functional Magnetic Resonance Imaging (fMRI) maps cerebral activation in response to stimuli, but this activation is often difficult to detect, especially in low-signal contexts and single-subject studies. Accurate activation detection can be guided by the fact that very few voxels are, in reality, truly activated and that activated voxels are spatially localized, but it is challenging to incorporate both of these facts. We provide a computationally feasible and methodologically sound model-based approach, implemented in the R package MixfMRI, that bounds the a priori expected proportion of activated voxels while also incorporating spatial context. Results on simulation experiments for different levels of activation detection difficulty are uniformly encouraging. The value of the methodology in low-signal and single-subject fMRI studies is illustrated on a sports imagination experiment. Concurrently, we also extend the potential use of fMRI as a clinical tool to, for example, detect awareness and improve treatment in individual patients in a persistent vegetative state, such as traumatic brain injury survivors.
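To make the core statistical idea concrete, the sketch below shows one ingredient of such an approach: a two-component mixture on voxel-wise p-values (uniform for null voxels, a Beta component concentrated near zero for activated voxels) whose activated mixing proportion is capped by an a priori bound. This is a simplified Python illustration, not the MixfMRI R implementation; the Beta parametrization, the method-of-moments updates and the value of the cap are assumptions, and the spatial-context component of the actual method is omitted.

```python
import numpy as np
from scipy import stats

def constrained_em(pvals, pi_max=0.05, n_iter=200):
    """EM for a two-component mixture on voxel-wise p-values: null voxels
    ~ Uniform(0, 1), activated voxels ~ Beta(a, b) with mass near zero.
    The activated mixing proportion is capped at pi_max, mirroring the
    a priori bound on the expected proportion of activation.  Illustrative
    only: no spatial context, and Beta updates use weighted moments."""
    pvals = np.clip(pvals, 1e-12, 1.0 - 1e-12)   # keep Beta density finite
    pi, a, b = pi_max / 2.0, 0.5, 5.0            # crude starting values
    for _ in range(n_iter):
        # E-step: posterior probability that each voxel is activated
        f1 = stats.beta.pdf(pvals, a, b)
        w = pi * f1 / (pi * f1 + (1.0 - pi))     # Uniform(0,1) density is 1
        # M-step: mixing proportion, with the upper bound enforced
        pi = min(w.mean(), pi_max)
        # M-step for (a, b) via weighted method of moments
        m = np.average(pvals, weights=w)
        v = max(np.average((pvals - m) ** 2, weights=w), 1e-8)
        common = m * (1.0 - m) / v - 1.0
        a, b = max(m * common, 1e-3), max((1.0 - m) * common, 1e-3)
    return w, pi

# toy data: 2% "activated" voxels with small p-values
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=9800), rng.beta(0.5, 20.0, size=200)])
posterior, pi_hat = constrained_em(p)
print("estimated activated proportion:", round(pi_hat, 4))
```

The returned posterior probabilities would then be thresholded (and, in the actual methodology, combined with spatial information) to label voxels as activated.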




Read also

Synthetic Magnetic Resonance (MR) imaging predicts images at new design parameter settings from a few observed MR scans. Model-based methods, which use both the physical and statistical properties underlying the MR signal and its acquisition, can predict images at any setting from as few as three scans, allowing them to be used in individualized, patient- and anatomy-specific contexts. However, the estimation problem in model-based synthetic MR imaging is ill-posed, and so regularization, in the form of correlated Gaussian Markov Random Fields, is imposed on the voxel-wise spin-lattice relaxation time, spin-spin relaxation time and proton density underlying the MR image. We develop theoretically sound but computationally practical matrix-free estimation methods for synthetic MR imaging. Our evaluations demonstrate the excellent ability of our methods to synthesize MR images in a clinical framework, as well as their estimation and prediction accuracy and consistency. An added strength of our model-based approach, also developed and illustrated here, is the accurate estimation of standard errors of regional means in the synthesized images.
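As a rough illustration of why three scans can suffice, the sketch below fits the standard (approximate) spin-echo signal equation, which has three unknowns per voxel (proton density, T1, T2), to three observed settings and then predicts the voxel at a new setting. The TR/TE values, starting point and noise level are made up for illustration, and the sketch deliberately omits the Gaussian Markov Random Field regularization and matrix-free estimation that the paper develops.

```python
import numpy as np
from scipy.optimize import least_squares

def spin_echo(theta, TR, TE):
    """Approximate spin-echo signal: rho * (1 - exp(-TR/T1)) * exp(-TE/T2).
    theta = (rho, T1, T2); TR, TE in milliseconds."""
    rho, T1, T2 = theta
    return rho * (1.0 - np.exp(-TR / T1)) * np.exp(-TE / T2)

# three acquired settings (TR, TE) in ms and the observed voxel intensities
settings = np.array([(600.0, 15.0), (1500.0, 40.0), (3000.0, 80.0)])
truth = (1000.0, 900.0, 90.0)                      # hypothetical rho, T1, T2
obs = spin_echo(truth, settings[:, 0], settings[:, 1])
obs = obs + np.random.default_rng(1).normal(scale=2.0, size=3)   # noise

# voxel-wise nonlinear least squares (no spatial regularization here)
fit = least_squares(
    lambda th: spin_echo(th, settings[:, 0], settings[:, 1]) - obs,
    x0=(800.0, 1000.0, 100.0),
    bounds=([0.0, 1.0, 1.0], [np.inf, 5000.0, 1000.0]),
)

# synthesize the voxel at a new, unobserved design setting
new_TR, new_TE = 2200.0, 60.0
print("predicted intensity:", spin_echo(fit.x, new_TR, new_TE))
```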
Detection of interactions between treatment effects and patient descriptors in clinical trials is critical for optimizing the drug development process. The increasing volume of data accumulated in clinical trials provides a unique opportunity to discover new biomarkers and further the goal of personalized medicine, but it also requires innovative, robust biomarker detection methods capable of detecting non-linear, and sometimes weak, signals. We propose a set of novel univariate statistical tests, based on the theory of random walks, that are able to capture non-linear and non-monotonic covariate-treatment interactions. We also propose a novel combined test, which leverages the power of all of our proposed univariate tests into a single general-case tool. We present results for both synthetic and real-world clinical trials, where we compare our method with state-of-the-art techniques and demonstrate the utility and robustness of our approach.
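The abstract does not spell out the test statistics, so the following is only one plausible shape a random-walk (CUSUM-style) interaction statistic could take, with a permutation p-value; it is an illustrative assumption rather than the authors' construction.

```python
import numpy as np

def random_walk_interaction_stat(covariate, treated, outcome):
    """Order subjects by the covariate and accumulate centered treatment-
    vs-control outcome contrasts; a large excursion of the resulting walk
    suggests the treatment effect changes along the covariate (possibly
    non-monotonically).  Illustrative only."""
    order = np.argsort(covariate)
    t, y = treated[order].astype(float), outcome[order].astype(float)
    # positive where treated outcomes exceed the treated mean, and the
    # mirror image for controls
    contrib = np.where(t == 1, y - y[t == 1].mean(), -(y - y[t == 0].mean()))
    walk = np.cumsum(contrib - contrib.mean())
    return np.max(np.abs(walk))

def permutation_pvalue(covariate, treated, outcome, n_perm=2000, seed=0):
    """Permutation null: shuffle the covariate, keeping treatment/outcome."""
    rng = np.random.default_rng(seed)
    obs = random_walk_interaction_stat(covariate, treated, outcome)
    null = [random_walk_interaction_stat(rng.permutation(covariate),
                                         treated, outcome)
            for _ in range(n_perm)]
    return (1 + sum(s >= obs for s in null)) / (n_perm + 1)

# toy usage: the treatment helps only for larger covariate values
rng = np.random.default_rng(3)
x = rng.uniform(size=300)
trt = rng.integers(0, 2, size=300)
yy = rng.normal(size=300) + np.where((trt == 1) & (x > 0.5), 1.0, 0.0)
print("permutation p-value:", permutation_pvalue(x, trt, yy))
```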
Identifying the most deprived regions of any country or city is key if policy makers are to design successful interventions. However, locating the areas with the greatest need is often surprisingly challenging in developing countries. Due to the logistical challenges of traditional household surveying, official statistics can be slow to be updated; the estimates that exist can be coarse, a consequence of prohibitive costs and poor infrastructure; and mass urbanisation can render manually surveyed figures rapidly out of date. Comparative judgement models, such as the Bradley--Terry model, offer a promising solution. Leveraging local knowledge, elicited via comparisons of different areas' affluence, such models can both simplify logistics and circumvent biases inherent to household surveys. Yet widespread adoption remains limited because of the large amount of data existing approaches still require. We address this via the development of a novel Bayesian Spatial Bradley--Terry model, which substantially decreases the number of comparisons required for effective inference. This model integrates a network representation of the city or country with assumptions of spatial smoothness that allow deprivation in one area to be informed by neighbouring areas. We demonstrate the practical effectiveness of this method through a novel comparative judgement data set collected in Dar es Salaam, Tanzania.
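For reference, the Bradley--Terry likelihood with a spatial smoothness penalty can be sketched as below: P(area i is judged more deprived than area j) = sigmoid(lambda_i - lambda_j), with neighbouring areas shrunk towards each other through a graph-Laplacian term. This is a simple gradient-ascent (MAP-style) sketch standing in for the paper's full Bayesian inference; the penalty form and tuning values are assumptions.

```python
import numpy as np

def fit_spatial_bradley_terry(wins, adjacency, tau=1.0, lr=0.01, n_iter=5000):
    """wins[i, j] = number of comparisons in which area i was judged more
    deprived than area j; adjacency[i, j] = 1 if areas i and j share a
    border.  Bradley-Terry: P(i beats j) = sigmoid(lambda_i - lambda_j),
    with a Laplacian penalty pulling neighbouring lambdas together."""
    n = wins.shape[0]
    lam = np.zeros(n)
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(lam[:, None] - lam[None, :])))  # P(i beats j)
        # gradient of the Bradley-Terry log-likelihood ...
        bt_grad = (wins * (1 - p)).sum(axis=1) - (wins * (1 - p)).sum(axis=0)
        # ... plus the gradient of the -(tau/2) * lam' L lam smoothness term
        lam += lr * (bt_grad - tau * laplacian @ lam)
        lam -= lam.mean()            # fix the location non-identifiability
    return lam                       # higher lambda = judged more deprived
```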
An important challenge in statistical analysis lies in controlling the bias of estimators due to ever-increasing data size and model complexity. Approximate numerical methods and data features like censoring and misclassification often result in analytical and/or computational challenges when implementing standard estimators. As a consequence, consistent estimators may be difficult to obtain, especially in complex and/or high-dimensional settings. In this paper, we study the properties of a general simulation-based estimation framework that allows one to construct bias-corrected consistent estimators. We show that the considered approach leads, under more general conditions, to stronger bias-correction properties compared to alternative methods. Besides its bias-correction advantages, the considered method can be used as a simple strategy to construct consistent estimators in settings where alternative methods may be challenging to apply. Moreover, the considered framework is easy to implement and computationally efficient. These theoretical results are highlighted with simulation studies of various commonly used models, including negative binomial regression (with and without censoring) and logistic regression (with and without misclassification errors). Additional numerical illustrations are provided in the supplementary materials.
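The general shape of such a simulation-based correction can be illustrated with an iterative-bootstrap-style update: adjust the parameter until the average naive estimate on data simulated under it matches the naive estimate on the observed data. The toy misclassified-Bernoulli example and the specific update rule below are assumptions made for illustration; the paper's framework and conditions are more general.

```python
import numpy as np

def iterative_bootstrap(pi_hat_obs, simulate, naive_estimator,
                        theta0, H=200, n_iter=50, seed=0):
    """Simulation-based bias correction sketch: iterate theta so that the
    average naive estimate on data simulated under theta matches the naive
    estimate computed on the observed data."""
    rng = np.random.default_rng(seed)
    theta = theta0
    for _ in range(n_iter):
        sims = np.mean([naive_estimator(simulate(theta, rng)) for _ in range(H)])
        theta = theta + (pi_hat_obs - sims)
    return theta

# toy example: Bernoulli(p) responses observed with 10% misclassification
eps, p_true, n = 0.10, 0.30, 2000
rng = np.random.default_rng(1)
y = rng.binomial(1, p_true, n)
y_obs = np.where(rng.uniform(size=n) < eps, 1 - y, y)    # flipped labels

naive = lambda data: data.mean()                         # ignores misclassification
def simulate(p, rng):                                    # includes the mechanism
    z = rng.binomial(1, p, n)
    return np.where(rng.uniform(size=n) < eps, 1 - z, z)

print("naive estimate:    ", round(naive(y_obs), 3))     # biased towards 0.5
print("corrected estimate:",
      round(iterative_bootstrap(naive(y_obs), simulate, naive, 0.5), 3))
```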
In a number of cases, the Quantile Gaussian Process (QGP) has proven effective in emulating stochastic, univariate computer model output (Plumlee and Tuo, 2014). In this paper, we develop an approach that uses this emulation strategy within a Bayesian model calibration framework to calibrate an agent-based model of an epidemic. In addition, the approach is extended to handle the multivariate nature of the model output, which gives a time series of the count of infected individuals. The basic modeling approach is adapted from Higdon et al. (2008), using a basis representation to capture the multivariate model output. The approach is motivated by an example taken from the 2015 Ebola Challenge workshop, which simulated an Ebola epidemic to evaluate methodology.
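The basis representation borrowed from Higdon et al. (2008) can be illustrated with a small sketch: centre an ensemble of simulated epidemic curves, take a few principal components via the SVD, and treat the per-run weights on those components as the low-dimensional quantities to emulate. The toy curves below are fabricated purely to show the shapes involved; the QGP emulation of the weights and the calibration step itself are not shown.

```python
import numpy as np

# toy ensemble: each row is one model run's time series of infected counts
rng = np.random.default_rng(2)
T, n_runs = 60, 40
t = np.arange(T)
peaks = rng.uniform(15, 45, n_runs)
runs = np.exp(-((t - peaks[:, None]) ** 2) / 80.0) * rng.uniform(50, 200, n_runs)[:, None]

# basis representation: centre the ensemble and keep a few principal
# components, so each multivariate run is summarised by a handful of
# weights that can be emulated one at a time
mean_curve = runs.mean(axis=0)
U, s, Vt = np.linalg.svd(runs - mean_curve, full_matrices=False)
q = 3                                        # number of retained basis curves
basis = Vt[:q]                               # shape (q, T)
weights = (runs - mean_curve) @ basis.T      # shape (n_runs, q), emulation targets

# any run is approximately mean_curve + weights @ basis
recon = mean_curve + weights @ basis
print("max reconstruction error:", np.abs(recon - runs).max())
```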