
Using prior information to identify boundaries in disease risk maps

Added by: Duncan Lee
Publication date: 2012
Language: English
Authors: Duncan Lee





Disease maps display the spatial pattern in disease risk, so that high-risk clusters can be identified. The spatial structure in the risk map is typically represented by a set of random effects, which are modelled with a conditional autoregressive (CAR) prior. Such priors include a global spatial smoothing parameter, whereas real risk surfaces are likely to include areas of smooth evolution as well as discontinuities, the latter of which are known as risk boundaries. Therefore, this paper proposes an extension to the class of CAR priors, which can identify both areas of localised spatial smoothness and risk boundaries. However, allowing for this localised smoothing requires large numbers of correlation parameters to be estimated, which are unlikely to be well identified from the data. To address this problem we propose eliciting an informative prior about the locations of such boundaries, which can be combined with the information from the data to provide more precise posterior inference. We test our approach by simulation, before applying it to a study of the risk of emergency admission to hospital in Greater Glasgow, Scotland.
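For readers unfamiliar with CAR priors, the display below gives one commonly used form, the Leroux et al. parameterisation, shown purely as an illustrative sketch rather than as the exact prior extended in this paper. The full conditional distribution of the random effect phi_k for area k is

\[
\phi_k \mid \boldsymbol{\phi}_{-k} \sim \mathrm{N}\!\left( \frac{\rho \sum_{j=1}^{n} w_{kj}\phi_j}{\rho \sum_{j=1}^{n} w_{kj} + 1 - \rho},\; \frac{\sigma^2}{\rho \sum_{j=1}^{n} w_{kj} + 1 - \rho} \right),
\]

where w_kj equals 1 if areas k and j share a common border and 0 otherwise, and rho is the single global smoothing parameter referred to above. The localised extension described in the abstract can be thought of as treating the w_kj for neighbouring pairs as unknown correlation parameters, so that a w_kj estimated to be zero corresponds to a risk boundary between areas k and j; the elicited informative prior supplies the additional information needed to estimate this large set of parameters.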

Related research


With the advent of continuous health monitoring via wearable devices, users now generate their own unique streams of continuous data, such as minute-level physical activity or heart rate. Aggregating these streams into scalar summaries ignores the distributional nature of the data and often leads to the loss of critical information. We propose to capture the distributional properties of wearable data via user-specific quantile functions that are further used in functional regression and multi-modal distributional modelling. In addition, we propose to encode user-specific distributional information with user-specific L-moments, robust rank-based analogs of traditional moments. Importantly, this L-moment encoding results in mutually consistent functional and distributional interpretations of the results of scalar-on-function regression. We also demonstrate how L-moments can be flexibly employed for analyzing joint and individual sources of variation in multi-modal distributional data. The proposed methods are illustrated in a study of the association of accelerometry-derived digital gait biomarkers with Alzheimer's disease (AD), comparing individuals with AD and people with normal cognitive function. Our analysis shows that the proposed quantile-based representation yields much higher predictive performance than simple distributional summaries and attains much stronger associations with clinical cognitive scales.
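As a concrete illustration of the encoding described in this abstract (a minimal sketch using a simulated heart-rate stream; not the authors' implementation), the user-specific quantile function and the first four sample L-moments of one user's minute-level data can be computed as follows.

import numpy as np

def sample_l_moments(x):
    """First four sample L-moments via probability-weighted moments b_r."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b = np.empty(4)
    b[0] = x.mean()
    for r in range(1, 4):
        w = np.ones(n)
        for k in range(1, r + 1):
            w *= (i - k) / (n - k)
        b[r] = np.mean(w * x)
    return np.array([b[0],
                     2 * b[1] - b[0],
                     6 * b[2] - 6 * b[1] + b[0],
                     20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]])

def quantile_function(x, grid=np.linspace(0.01, 0.99, 99)):
    """User-specific empirical quantile function on a probability grid."""
    return np.quantile(np.asarray(x, dtype=float), grid)

rng = np.random.default_rng(0)
stream = rng.gamma(shape=9.0, scale=8.0, size=1440)  # hypothetical minute-level heart rate for one user
print(sample_l_moments(stream))                      # rank-based scalar summaries that retain distributional shape
print(quantile_function(stream)[[0, 49, 98]])        # selected quantiles, usable in scalar-on-function regression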
We consider a linear regression model with regression parameter beta = (beta_1, ..., beta_p) and independent and identically N(0, sigma^2) distributed errors. Suppose that the parameter of interest is theta = a^T beta, where a is a specified vector. Define the parameter tau = c^T beta - t, where the vector c and the number t are specified and a and c are linearly independent. Also suppose that we have uncertain prior information that tau = 0. We present a new frequentist 1 - alpha confidence interval for theta that utilizes this prior information. We require this confidence interval to (a) have endpoints that are continuous functions of the data and (b) coincide with the standard 1 - alpha confidence interval when the data strongly contradict this prior information. This interval is optimal in the sense that it has minimum weighted average expected length, where the largest weight is given to this expected length when tau = 0. This minimization leads to an interval with the following desirable properties: its expected length (a) is relatively small when the prior information about tau is correct and (b) has a maximum value that is not too large. The following problem is used to illustrate the application of this new confidence interval. Consider a 2-by-2 factorial experiment with 20 replicates. Suppose that the parameter of interest theta is a specified simple effect and that we have uncertain prior information that the two-factor interaction is zero. Our aim is to find a frequentist 0.95 confidence interval for theta that utilizes this prior information.
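For orientation, and using only standard results rather than the new interval itself: with X denoting the design matrix and beta-hat the least-squares estimator, the usual 1 - alpha interval for theta = a^T beta, with which the proposed interval must coincide when the data strongly contradict the prior information, is

\[
\hat{\theta} \;\pm\; t_{n-p,\,1-\alpha/2}\, \hat{\sigma}\, \sqrt{a^{\top}(X^{\top}X)^{-1}a}, \qquad \hat{\theta} = a^{\top}\hat{\beta},
\]

while the evidence about the prior information tau = 0 is carried by hat-tau = c^T beta-hat - t. The proposed interval is constructed to revert to the display above when |hat-tau| is large relative to its standard error, and to be shorter on average when hat-tau is close to zero.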
We propose Dirichlet Process Mixture (DPM) models for prediction and cluster-wise variable selection, based on two choices of shrinkage baseline prior distribution for the linear regression coefficients, namely the Horseshoe prior and the Normal-Gamma prior. We show in a simulation study that each of the two proposed DPM models tends to outperform the standard DPM model based on the non-shrinkage normal prior in terms of predictive, variable selection, and clustering accuracy. This is especially true for the Horseshoe model, and when the number of covariates exceeds the within-cluster sample size. A real data set is analyzed to illustrate the proposed modeling methodology, and both proposed DPM models again attain better predictive accuracy.
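For reference, the standard Horseshoe hierarchy used as a shrinkage baseline (the exact hyperprior choices in the paper may differ) places on each regression coefficient

\[
\beta_j \mid \lambda_j, \tau \sim \mathrm{N}(0, \lambda_j^{2}\tau^{2}), \qquad \lambda_j \sim \mathrm{C}^{+}(0,1), \qquad \tau \sim \mathrm{C}^{+}(0,1),
\]

where C^+(0,1) denotes the standard half-Cauchy distribution. The local scales lambda_j let individual coefficients escape the strong global shrinkage induced by tau, a property that is particularly useful when the number of covariates exceeds the within-cluster sample size.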
We propose a framework for Bayesian non-parametric estimation of the rate at which new infections occur, assuming that the epidemic is only partially observed. The methodology relies on modelling the rate at which new infections occur as a function that depends only on time. Two different types of prior distribution are proposed, namely step functions and B-splines. The methodology is illustrated using both simulated and real datasets, and we show that certain aspects of the epidemic, such as seasonality and super-spreading events, are picked up without having to incorporate them explicitly into a parametric model.
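To make the B-spline option concrete, the sketch below represents a time-varying infection rate as a non-negative combination of B-spline basis functions; the knots and coefficients are made up for illustration and are not the fitted quantities from the paper.

import numpy as np
from scipy.interpolate import BSpline

degree = 3
interior = np.linspace(0.0, 100.0, 11)                 # knots spanning the observation window (days)
knots = np.r_[[interior[0]] * degree, interior, [interior[-1]] * degree]
coefs = np.random.default_rng(1).gamma(2.0, 0.5, size=len(knots) - degree - 1)  # non-negative spline coefficients
rate = BSpline(knots, coefs, degree)                   # beta(t): rate at which new infections occur

t = np.linspace(0.0, 100.0, 5)
print(rate(t))  # peaks in this curve would reflect seasonality or super-spreading events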
Predicting risks of chronic diseases has become increasingly important in clinical practice. When a prediction model is developed in a given source cohort, there is often great interest in applying the model to other cohorts. However, due to potential discrepancies in baseline disease incidence between cohorts and shifts in patient composition, the risk predicted by the original model often under- or over-estimates the risk in the new cohort. Remedying such poorly calibrated predictions is needed for proper medical decision-making. In this article, we assume that the relative risks of the predictors are the same between the two cohorts, and we propose a novel weighted estimating equation approach to recalibrating the projected risk for the target population by updating the baseline risk. The recalibration leverages knowledge of the overall survival probabilities for the disease of interest and competing events, together with summary information on the risk factors from the target population. The proposed recalibrated risk estimators gain efficiency if the risk factor distributions are the same for the source and target cohorts, and are robust with little bias if they differ. We establish the consistency and asymptotic normality of the proposed estimators. Extensive simulation studies demonstrate that the proposed estimators perform very well in terms of robustness and efficiency in finite samples. A real data application to colorectal cancer risk prediction also illustrates that the proposed method can be used in practice for model recalibration.
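The core idea of updating the baseline risk while holding the relative risks fixed can be sketched as follows; this is a deliberately simplified toy, not the authors' weighted estimating-equation estimator, and all names and numbers are invented.

import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
beta = np.array([0.4, -0.2, 0.1])        # relative-risk coefficients assumed shared between cohorts
x_target = rng.normal(size=(5000, 3))    # stand-in for summary information on target-cohort risk factors
target_incidence = 0.08                  # known overall disease risk in the target population
lin_pred = x_target @ beta

def mean_predicted_risk(s0):
    # proportional-hazards-style risk 1 - S0 ** exp(beta' x), averaged over the target cohort
    return np.mean(1.0 - s0 ** np.exp(lin_pred))

# choose the updated baseline survival S0 so that the average predicted risk matches the target incidence
s0_new = brentq(lambda s0: mean_predicted_risk(s0) - target_incidence, 1e-6, 1.0 - 1e-6)
print(s0_new, mean_predicted_risk(s0_new))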
