
A CURE for noisy magnetic resonance images: Chi-square unbiased risk estimation

Posted by Patrick J. Wolfe
Publication date: 2011
Research field: Mathematical Statistics
Paper language: English





In this article we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi-square random variable. We then consider the task of denoising squared-magnitude magnetic resonance image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and another in the specific case of the unnormalized Haar wavelet transform. The resultant algorithms are computationally tractable and improve upon state-of-the-art methods for both simulated and actual magnetic resonance image data.
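The abstract's central object is an unbiased estimate of the risk E[(f(y) − θ)²] for a continuously differentiable f applied to a noncentral chi-square variable y with k degrees of freedom and noncentrality θ. The sketch below is not taken from the paper: it uses one such expression, obtained here by applying Stein's lemma twice (so the constants are those of this derivation), and checks its unbiasedness by Monte Carlo for the k = 2 squared-magnitude MRI case; the shrinkage function f and the constant c are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

k = 2          # degrees of freedom (two for squared-magnitude MRI data)
theta = 25.0   # noncentrality parameter to be estimated
n = 2_000_000  # Monte Carlo sample size

# Draw noncentral chi-square samples y = ||x||^2 with x ~ N(mu, I_k).
y = rng.noncentral_chisquare(k, theta, size=n)

# Illustrative smooth shrinkage estimator of theta:
# f(y) = (y - k) * s(y) with s(y) = y^2 / (y^2 + c).
c = 50.0
s = y**2 / (y**2 + c)
f = (y - k) * s

# First and second derivatives of f, computed analytically.
s1 = 2 * c * y / (y**2 + c)**2                 # s'(y)
s2 = 2 * c * (c - 3 * y**2) / (y**2 + c)**3    # s''(y)
f1 = s + (y - k) * s1
f2 = 2 * s1 + (y - k) * s2

# A chi-square unbiased risk estimate for E[(f(y) - theta)^2], with
# constants as derived here via repeated use of Stein's lemma:
cure = (f**2
        - 2 * ((y - k) * f - 2 * (2 * y - k) * f1 + 4 * y * f2)
        + (y - k)**2 - 4 * y + 2 * k)

print("mean CURE     :", cure.mean())
print("empirical MSE :", ((f - theta)**2).mean())
```

In the setting the abstract describes, an estimate of this form is what gets minimized over the coefficients of a linearly parameterized shrinkage rule: the risk estimate is quadratic in those coefficients, so the optimum reduces to solving a linear system.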




Read also

Lauri Jetsu (2020)
Unambiguous detection of signals superimposed on unknown trends is difficult for unevenly spaced data. Here, we formulate the Discrete Chi-square Method (DCM), which can determine the best model for many signals superimposed on arbitrary polynomial trends. DCM minimizes the chi-square for the data in the multi-dimensional tested frequency space. The required number of tested frequency combinations remains manageable, because the method's test statistic is symmetric in this tested frequency space. With our known tested constant frequency grid values, the non-linear DCM model becomes linear, and all results become unambiguous. We test DCM with simulated data containing different mixtures of signals and trends. DCM gives unambiguous results if the signal frequencies are not too close to each other and none of the signals is too weak. It relies on brute computational force, because all possible free parameter combinations for all reasonable linear models are tested. DCM works like winning a lottery by buying all lottery tickets. Anyone can reproduce all our results with the DCM computer code. All files, variables and other program-code-related items are printed in magenta colour. Our Appendix gives detailed instructions for using dcm.py. We also present one preliminary real use case, where DCM is applied to the observed (O) minus the computed (C) eclipse epochs of a binary star, XZ And. This DCM analysis reveals evidence for the possible presence of a third and a fourth body in this system. One recent study of a very large sample of binary stars indicated that the probability of detecting a fourth body from the O−C data of eclipsing binaries is only about 0.00005.
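A minimal single-frequency illustration of the linearization step the abstract describes: once a tested frequency is fixed on the grid, the sinusoid-plus-polynomial-trend model is linear, and each candidate frequency can be scored by its weighted least-squares chi-square. This sketch is not dcm.py; the function names and simulated data are ours.

```python
import numpy as np

def chi2_for_frequency(t, y, sigma, f, poly_order=1):
    """Weighted least-squares fit of one sinusoid plus a polynomial trend
    at a fixed tested frequency f; returns the resulting chi-square."""
    # With f fixed, the model a*sin + b*cos + polynomial is linear.
    cols = [np.sin(2 * np.pi * f * t), np.cos(2 * np.pi * f * t)]
    cols += [t**p for p in range(poly_order + 1)]
    X = np.column_stack(cols)
    w = 1.0 / sigma                      # whitening weights
    beta, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
    resid = (y - X @ beta) / sigma
    return np.sum(resid**2)

# Simulated unevenly spaced data: one signal on a linear trend.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 50, 200))
sigma = np.full(t.size, 0.1)
y = np.sin(2 * np.pi * 0.13 * t) + 0.02 * t + rng.normal(0, 0.1, t.size)

# Brute-force search over the tested frequency grid.
grid = np.linspace(0.01, 0.5, 2000)
chi2 = np.array([chi2_for_frequency(t, y, sigma, f) for f in grid])
print("best frequency:", grid[np.argmin(chi2)])   # close to 0.13
```

The multi-signal case in the paper scores combinations of grid frequencies the same way, which is where the brute-force cost and the symmetry argument come in.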
Spatial prediction of weather elements like temperature, precipitation, and barometric pressure is generally based on satellite imagery or data collected at ground stations. None of these data provide information at a more granular or hyper-local resolution. On the other hand, crowdsourced weather data, which are captured by sensors installed on mobile devices and gathered by weather-related mobile apps like WeatherSignal and AccuWeather, can serve as potential data sources for analyzing environmental processes at a hyper-local resolution. However, due to the low quality of the sensors and the non-laboratory environment, the quality of the observations in crowdsourced data is compromised. This paper describes methods to improve hyper-local spatial prediction using this varying-quality, noisy crowdsourced information. We introduce a reliability metric, namely the Veracity Score (VS), to assess the quality of the crowdsourced observations using coarser, but high-quality, reference data. A VS-based methodology to analyze noisy spatial data is proposed and evaluated through extensive simulations. The merits of the proposed approach are illustrated through case studies analyzing crowdsourced daily average ambient temperature readings for one day in the contiguous United States.
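The abstract does not spell out the Veracity Score formula, so the sketch below only illustrates the general idea with a hypothetical reliability weight: crowdsourced readings are down-weighted according to their disagreement with the high-quality reference value of their coarse grid cell. The weighting function and all numbers here are ours, not the paper's.

```python
import numpy as np

def veracity_scores(obs, cell_ref, scale=2.0):
    """Hypothetical reliability weight: down-weight crowdsourced readings
    that disagree with the high-quality reference value of their coarse
    grid cell. (Illustrative only; not the paper's VS formula.)"""
    return np.exp(-np.abs(obs - cell_ref) / scale)

# Toy example: five sensors inside one coarse cell whose reference
# temperature is 20.0 C; one sensor is badly biased.
obs = np.array([19.8, 20.3, 20.1, 27.5, 19.9])
vs = veracity_scores(obs, cell_ref=20.0)

# Reliability-weighted cell estimate versus a naive mean.
print("weighted:", np.average(obs, weights=vs))  # stays close to 20
print("naive   :", obs.mean())                   # pulled toward 27.5
```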
Recent evidence has shown that structural magnetic resonance imaging (MRI) is an effective tool for Alzheimer's disease (AD) prediction and diagnosis. While traditional MRI-based diagnosis uses images acquired at a single time point, a longitudinal study is more sensitive and accurate in detecting early pathological changes of AD. Two main difficulties arise in longitudinal MRI-based diagnosis: (1) the inconsistent longitudinal scans among subjects (i.e., different scanning times and different total numbers of scans); (2) the heterogeneous progressions of high-dimensional regions of interest (ROIs) in MRI. In this work, we propose a novel feature selection and estimation method which can be applied to extract features from heterogeneous longitudinal MRI. A key ingredient of our method is the combination of smoothing splines and the $l_1$-penalty. We perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The results corroborate the advantages of the proposed method for AD prediction in longitudinal studies.
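A schematic of the two ingredients named above, on assumed toy data: each subject's irregularly sampled ROI trajectory is smoothed with a smoothing spline and resampled on a common time grid, and an l1 penalty then performs feature selection. This is not the authors' estimator, only a sketch of the spline-plus-l1 combination.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def spline_features(times, values, grid, smooth=1.0):
    """Smooth one subject's irregular ROI trajectory with a smoothing
    spline, then resample it on a common time grid."""
    return UnivariateSpline(times, values, s=smooth)(grid)

# Toy cohort: each subject has a different number of scans at different
# times; label-1 subjects have a steeper ROI decline (hypothetical data).
grid = np.linspace(0, 3, 8)            # common grid, in years
X, labels = [], []
for i in range(60):
    m = rng.integers(5, 10)            # inconsistent number of scans
    t = np.sort(rng.uniform(0, 3, m))
    decline = 0.5 if i % 2 else 0.1
    v = 1.0 - decline * t + rng.normal(0, 0.05, m)
    X.append(spline_features(t, v, grid))
    labels.append(i % 2)

# l1-penalized classifier: sparsity selects the informative grid features.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
clf.fit(np.array(X), labels)
print("nonzero coefficients:", np.count_nonzero(clf.coef_))
```

The spline step is what absorbs difficulty (1), since every subject ends up with features on the same grid regardless of when or how often they were scanned.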
A. C. Davison, N. Sartori (2011)
Particle physics experiments such as those run in the Large Hadron Collider result in huge quantities of data, which are boiled down to a few numbers from which it is hoped that a signal will be detected. We discuss a simple probability model for this and derive frequentist and noninformative Bayesian procedures for inference about the signal. Both are highly accurate in realistic cases, with the frequentist procedure having the edge for interval estimation, and the Bayesian procedure yielding slightly better point estimates. We also argue that the significance, or $p$-value, function based on the modified likelihood root provides a comprehensive presentation of the information in the data and should be used for inference.
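A minimal sketch of the significance-function idea for a Poisson signal-plus-known-background count model (an assumed simplification of the paper's setup), using the ordinary likelihood root r. The paper's procedure rests on the modified likelihood root r*, which adds a higher-order correction not implemented here.

```python
import numpy as np
from scipy.stats import norm

def loglik(s, n, b):
    """Poisson log-likelihood (up to a constant) for signal s, background b."""
    mu = s + b
    return n * np.log(mu) - mu

def likelihood_root(s, n, b):
    """r(s) = sign(s_hat - s) * sqrt(2 [l(s_hat) - l(s)]).
    The modified root r* would add a correction term to this."""
    s_hat = max(n - b, 0.0)                      # MLE under s >= 0
    r2 = 2.0 * (loglik(s_hat, n, b) - loglik(s, n, b))
    return np.sign(s_hat - s) * np.sqrt(max(r2, 0.0))

# Observed count and known background; the p-value function traces the
# evidence against each hypothesized signal strength.
n, b = 12, 4.0
for s in (0.0, 4.0, 8.0, 16.0):
    print(f"s = {s:5.1f}   p = {norm.sf(likelihood_root(s, n, b)):.3f}")
```

Reading the whole curve p(s), rather than a single test at s = 0, is what the abstract means by a comprehensive presentation of the information in the data.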
Under-representation of certain populations, based on gender, race/ethnicity, and age, in data collection for predictive modeling may yield less-accurate predictions for the under-represented groups. Recently, this issue of fairness in predictions has attracted significant attention, as data-driven models are increasingly utilized to perform crucial decision-making tasks. Methods to achieve fairness in the machine learning literature typically build a single prediction model subject to some fairness criteria in a manner that encourages fair prediction performance for all groups. These approaches have two major limitations: (i) fairness is often achieved by compromising accuracy for some groups; (ii) the underlying relationship between dependent and independent variables may not be the same across groups. We propose a Joint Fairness Model (JFM) approach for binary outcomes that estimates group-specific classifiers using a joint modeling objective function that incorporates fairness criteria for prediction. We introduce an Accelerated Smoothing Proximal Gradient Algorithm to solve the convex objective function and demonstrate the properties of the proposed JFM estimates. Next, we present the key asymptotic properties of the JFM parameter estimates. We examine the efficacy of the JFM approach in achieving prediction performance and parity, in comparison with the Single Fairness Model, the group-separate model, and the group-ignorant model, through extensive simulations. Finally, we demonstrate the utility of the JFM method in the motivating example to obtain fair risk predictions for under-represented older patients diagnosed with coronavirus disease 2019 (COVID-19).
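A toy sketch of the joint-modeling idea: group-specific logistic coefficients are fitted together, coupled by a quadratic fusion penalty so the under-represented group borrows strength from the larger one. Plain gradient descent stands in for the paper's Accelerated Smoothing Proximal Gradient Algorithm, and the explicit fairness criterion is reduced to the coupling term; all names and data here are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def jfm_gradient_descent(Xs, ys, lam=1.0, lr=0.1, iters=2000):
    """Fit group-specific logistic coefficients jointly, coupling the
    groups with a fusion penalty lam * ||b0 - b1||^2. (Plain gradient
    descent here; the paper uses an accelerated smoothing proximal
    gradient algorithm with an explicit fairness criterion.)"""
    d = Xs[0].shape[1]
    B = np.zeros((2, d))
    for _ in range(iters):
        G = np.zeros_like(B)
        for g in range(2):
            p = sigmoid(Xs[g] @ B[g])
            G[g] = Xs[g].T @ (p - ys[g]) / len(ys[g])
        G[0] += 2 * lam * (B[0] - B[1])   # gradient of the coupling term
        G[1] += 2 * lam * (B[1] - B[0])
        B -= lr * G
    return B

# Toy data: a small under-represented group shares the same underlying
# relationship as the large group and borrows strength via the coupling.
rng = np.random.default_rng(3)
w = np.array([1.5, -2.0])
X0 = rng.normal(size=(1000, 2)); y0 = rng.random(1000) < sigmoid(X0 @ w)
X1 = rng.normal(size=(40, 2));   y1 = rng.random(40) < sigmoid(X1 @ w)
B = jfm_gradient_descent([X0, X1], [y0.astype(float), y1.astype(float)])
print("group coefficients:\n", B)
```

When the groups truly differ, shrinking lam lets the coefficients separate, which is the point of estimating group-specific classifiers rather than one pooled model.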