
A tractable Bayesian joint model for longitudinal and survival data

Publication date: 2021
Language: English





We introduce a numerically tractable formulation of Bayesian joint models for longitudinal and survival data. The longitudinal process is modelled using generalised linear mixed models, while the survival process is modelled using a parametric general hazard structure. The two processes are linked by sharing fixed and random effects, separating the effects that act on the time scale from those that act on the hazard scale. This strategy allows for the inclusion of non-linear and time-dependent effects while avoiding the need for numerical integration, which facilitates the implementation of the proposed joint model. We explore the use of flexible parametric distributions for modelling the baseline hazard function, which can capture the basic shapes of interest in practice. We discuss prior elicitation based on the interpretation of the parameters. We present an extensive simulation study in which we analyse the inferential properties of the proposed models and illustrate the trade-off between flexibility, sample size, and censoring. We also present two real-data applications that demonstrate the adaptability of our formulation to both univariate time-to-event data and a competing-risks framework. The methodology is implemented in rstan.
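To make the separation of scales concrete: a general hazard structure of the kind described lets some covariate effects act on the time scale and others on the hazard scale. In notation of our own choosing (the paper's may differ),

$$ h(t \mid \mathbf{x}) = h_0\!\left(t\, e^{\tilde{\mathbf{x}}^\top \boldsymbol{\alpha}}\right) e^{\mathbf{x}^\top \boldsymbol{\beta}}, $$

where $h_0$ is the parametric baseline hazard, $\boldsymbol{\alpha}$ collects the time-scale effects and $\boldsymbol{\beta}$ the hazard-scale effects. The cumulative hazard is then available in closed form, $H(t \mid \mathbf{x}) = H_0(t\, e^{\tilde{\mathbf{x}}^\top \boldsymbol{\alpha}})\, e^{\mathbf{x}^\top \boldsymbol{\beta} - \tilde{\mathbf{x}}^\top \boldsymbol{\alpha}}$, which is what removes the need for numerical integration in the likelihood.

Since the abstract notes an rstan implementation, the following is a deliberately minimal Stan sketch of the survival sub-model alone, assuming a Weibull baseline hazard with $H_0(t) = b\, t^a$ and a single covariate. It omits the longitudinal GLMM and the shared random effects, and all names and priors are illustrative, not the authors' code:

// Hypothetical minimal sketch: survival sub-model only, Weibull baseline
// H0(t) = b * t^a, one covariate x acting on both scales. The paper's
// actual rstan implementation also contains the longitudinal GLMM and
// the shared fixed/random effects.
data {
  int<lower=1> N;
  vector<lower=0>[N] t_obs;                // event or censoring times
  array[N] int<lower=0, upper=1> event;    // 1 = event observed, 0 = censored
  vector[N] x;                             // baseline covariate
}
parameters {
  real<lower=0> a;      // Weibull shape
  real<lower=0> b;      // Weibull rate
  real alpha;           // time-scale effect
  real beta;            // hazard-scale effect
}
model {
  a ~ gamma(2, 1);
  b ~ gamma(2, 1);
  alpha ~ normal(0, 1);
  beta ~ normal(0, 1);
  for (i in 1:N) {
    real ts = t_obs[i] * exp(x[i] * alpha);    // time rescaled by e^{x * alpha}
    // log hazard: log h0(ts) + x * beta, with h0(t) = a * b * t^(a - 1)
    real log_h = log(a) + log(b) + (a - 1) * log(ts) + x[i] * beta;
    // closed-form cumulative hazard: H0(ts) * exp(x * (beta - alpha))
    real H = b * pow(ts, a) * exp(x[i] * (beta - alpha));
    target += event[i] * log_h - H;            // censoring-aware log-likelihood
  }
}

The full joint model would add the GLMM likelihood for the repeated measurements and share (subsets of) the fixed and random effects between the two sub-models, as the abstract describes.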



Related research

In spatial statistics it is often assumed that the spatial field of interest is stationary and that its covariance has a simple parametric form, but these assumptions are not appropriate in many applications. Given replicate observations of a Gaussian spatial field, we propose nonstationary and nonparametric Bayesian inference on the spatial dependence. Instead of estimating the quadratically many (in the number of spatial locations) entries of the covariance matrix, the idea is to infer a near-linear number of nonzero entries in a sparse Cholesky factor of the precision matrix. Our prior assumptions are motivated by recent results on the exponential decay of the entries of this Cholesky factor for Matérn-type covariances under a specific ordering scheme. Our methods are highly scalable and parallelizable. We conduct numerical comparisons and apply our methodology to climate-model output, enabling statistical emulation of an expensive physical model.
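In symbols (notation ours): with $\Sigma$ the $n \times n$ covariance matrix of the field at the observed locations, the inferential target is a sparse triangular factor of the precision matrix,

$$ \Sigma^{-1} \approx U U^\top, $$

with $U$ triangular under the specific ordering scheme mentioned, so that only a near-linear (in $n$) number of nonzero entries of $U$ must be inferred rather than the $O(n^2)$ entries of $\Sigma$ itself.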
We propose a framework for Bayesian non-parametric estimation of the rate at which new infections occur, assuming that the epidemic is only partially observed. The methodology relies on modelling the rate at which new infections occur as a function of time alone. Two types of prior distribution are proposed, based on step functions and on B-splines. The methodology is illustrated using both simulated and real datasets, and we show that certain aspects of the epidemic, such as seasonality and super-spreading events, are picked up without having to incorporate them explicitly into a parametric model.
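Concretely, the two prior classes correspond to two basis expansions of the infection rate (notation ours): over a time grid $0 = s_0 < s_1 < \cdots < s_J$,

$$ \lambda(t) = \sum_{j=1}^{J} \lambda_j\, \mathbf{1}\{s_{j-1} < t \le s_j\} \qquad \text{or} \qquad \lambda(t) = \sum_{k=1}^{K} \beta_k B_k(t), $$

a step function or a B-spline expansion, with the prior placed on the coefficients $\lambda_j$ or $\beta_k$.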
We consider the problem of variable selection in high-dimensional settings with missing observations among the covariates. To address this relatively understudied problem, we propose a new synergistic procedure -- adaptive Bayesian SLOPE -- which effectively combines the SLOPE method (sorted $l_1$ regularization) with the Spike-and-Slab LASSO method. We position our approach within a Bayesian framework which allows for simultaneous variable selection and parameter estimation despite the missing values. As with the Spike-and-Slab LASSO, the coefficients are regarded as arising from a hierarchical model with two groups: (1) a spike for inactive coefficients and (2) a slab for active ones. However, instead of assigning independent spike priors to each covariate, we deploy a joint SLOPE spike prior which takes into account the ordering of coefficient magnitudes in order to control for false discoveries. Through extensive simulations, we demonstrate satisfactory performance in terms of power, FDR, and estimation bias under a wide range of scenarios. Finally, we analyze a real dataset of severe-trauma patients from Paris hospitals, for which we show excellent performance in predicting platelet levels. Our methodology is implemented in C++ and wrapped in the R package ABSLOPE for public use.
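For reference, the sorted-$l_1$ (SLOPE) penalty behind the joint spike prior is

$$ \operatorname{pen}(\boldsymbol{\beta}) = \sum_{j=1}^{p} \lambda_j\, |\beta|_{(j)}, \qquad \lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_p \ge 0, $$

where $|\beta|_{(1)} \ge \cdots \ge |\beta|_{(p)}$ are the coefficient magnitudes in decreasing order, so that larger coefficients face larger penalties; it is this dependence on the ordering that is exploited to control false discoveries.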
The vast majority of models for the spread of communicable diseases are parametric in nature and involve underlying assumptions about how the disease spreads through a population. In this article we consider the use of Bayesian nonparametric approaches to analysing data from disease outbreaks. Specifically we focus on methods for estimating the infection process in simple models under the assumption that this process has an explicit time-dependence.
Joint modelling of longitudinal and time-to-event data is usually handled by a random-effects joint model, which uses shared or correlated latent effects to capture associations between the two processes. Under this framework, the joint distribution of the two processes can be derived straightforwardly by assuming conditional independence given the latent effects. Alternative approaches to inducing interdependence between the sub-models have also been considered in the literature; one such approach uses copulas to introduce non-linear correlation between the marginal distributions of the longitudinal and time-to-event processes. A Gaussian copula joint model, fitted by a Monte Carlo expectation-maximisation (EM) algorithm, has been proposed in the literature. Enlightening as it is, the original estimation procedure has some limitations. The log-likelihood function cannot be derived analytically and therefore requires Monte Carlo integration, which is computationally intensive and introduces extra variation/noise into the estimation. The combination with the EM algorithm slows the computation down further, and convergence to the maximum likelihood estimators cannot always be guaranteed. In addition, the assumption that the schedule of planned measurements is uniform and balanced across subjects is unsuitable when subjects have varying numbers of observations. In this paper, we relax this restriction and propose an exact likelihood estimation approach to replace the more computationally expensive Monte Carlo expectation-maximisation algorithm. We also provide a straightforward way to compute dynamic predictions of survival probabilities, and show that the proposed model is comparable in prediction performance to the shared random-effects joint model.
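As background, a bivariate Gaussian copula couples the marginal distribution functions $F_1$ (longitudinal) and $F_2$ (time-to-event) via

$$ C_\rho(u, v) = \Phi_2\!\left(\Phi^{-1}(u), \Phi^{-1}(v);\, \rho\right), \qquad u = F_1(y), \; v = F_2(t), $$

where $\Phi_2(\cdot, \cdot\,; \rho)$ is the standard bivariate normal CDF with correlation $\rho$ and $\Phi^{-1}$ the standard normal quantile function; the exact-likelihood approach proposed here evaluates the resulting joint density directly rather than approximating an integral by Monte Carlo.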
