
Practical Estimation of High Dimensional Stochastic Differential Mixed-Effects Models

Added by Umberto Picchini
Publication date: 2010
Research language: English




Stochastic differential equations (SDEs) are established tools for modelling physical phenomena whose dynamics are affected by random noise. By estimating the parameters of an SDE, the intrinsic randomness of a system around its drift can be identified and separated from the drift itself. When it is of interest to model dynamics within a given population, i.e. to model simultaneously the performance of several experiments or subjects, mixed-effects modelling allows between-experiment and within-experiment variability to be distinguished. A framework for modelling dynamics within a population using SDEs is proposed, representing several sources of variation simultaneously: variability between experiments via a mixed-effects approach and stochasticity in the individual dynamics via SDEs. These stochastic differential mixed-effects models have applications in, e.g., pharmacokinetics/pharmacodynamics and biomedical modelling. A parameter estimation method is proposed and computational guidelines for an efficient implementation are given. Finally, the method is evaluated using simulations from standard models such as the two-dimensional Ornstein-Uhlenbeck (OU) model and the square-root model.
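
As a rough illustration of the kind of model described above (not the paper's estimation method, and one-dimensional rather than the two-dimensional OU model used in the evaluation), the sketch below simulates a mixed-effects Ornstein-Uhlenbeck model with Euler-Maruyama: each subject gets its own mean-reversion rate drawn from a population distribution, while the diffusion term adds within-subject stochasticity. The function name and parameter values (simulate_ou_sdemem, theta_mu, theta_sd, etc.) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_ou_sdemem(n_subjects=5, n_steps=200, dt=0.05,
                       theta_mu=1.0, theta_sd=0.3, mu=0.0, sigma=0.5,
                       x0=1.0, rng=None):
    """Euler-Maruyama simulation of a mixed-effects Ornstein-Uhlenbeck model:
        dX_i(t) = theta_i * (mu - X_i(t)) dt + sigma dW_i(t),
    where the mean-reversion rate theta_i is a subject-specific random effect
    drawn from a log-normal population distribution."""
    rng = np.random.default_rng(rng)
    # subject-specific random effects (kept positive via a log-normal draw)
    theta = rng.lognormal(mean=np.log(theta_mu), sigma=theta_sd, size=n_subjects)
    paths = np.empty((n_subjects, n_steps + 1))
    paths[:, 0] = x0
    for k in range(n_steps):
        drift = theta * (mu - paths[:, k])
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n_subjects)
        paths[:, k + 1] = paths[:, k] + drift * dt + noise
    return theta, paths

if __name__ == "__main__":
    theta, paths = simulate_ou_sdemem(rng=42)
    print("random effects:", np.round(theta, 3))
    print("final values:  ", np.round(paths[:, -1], 3))
```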



Related research

Stochastic differential equation mixed-effects models (SDEMEMs) are flexible hierarchical models that are able to account for random variability inherent in the underlying time-dynamics, as well as the variability between experimental units and, optionally, account for measurement error. Fully Bayesian inference for state-space SDEMEMs is performed, using data at discrete times that may be incomplete and subject to measurement error. However, the inference problem is complicated by the typical intractability of the observed data likelihood which motivates the use of sampling-based approaches such as Markov chain Monte Carlo. A Gibbs sampler is proposed to target the marginal posterior of all parameter values of interest. The algorithm is made computationally efficient through careful use of blocking strategies and correlated pseudo-marginal Metropolis-Hastings steps within the Gibbs scheme. The resulting methodology is flexible and is able to deal with a large class of SDEMEMs. The methodology is demonstrated on three case studies, including tumor growth dynamics and neuronal data. The gains in terms of increased computational efficiency are model and data dependent, but unless bespoke sampling strategies requiring analytical derivations are possible for a given model, we generally observe an efficiency increase of one order of magnitude when using correlated particle methods together with our blocked-Gibbs strategy.
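
The key computational ingredient mentioned above, correlating the auxiliary random numbers behind successive likelihood estimates, can be sketched generically as follows. This is not the authors' blocked-Gibbs algorithm; it is a minimal correlated pseudo-marginal Metropolis-Hastings loop in which loglik_hat and log_prior are hypothetical user-supplied functions and rho controls the Crank-Nicolson refreshment of the auxiliary variables.

```python
import numpy as np

def correlated_pmmh(loglik_hat, log_prior, theta0, n_iter=1000,
                    n_u=500, rho=0.99, prop_sd=0.1, rng=None):
    """Correlated pseudo-marginal Metropolis-Hastings.

    loglik_hat(theta, u) should return a log-likelihood estimate whose
    randomness is driven entirely by the standard-normal auxiliary vector u
    (e.g. a particle-filter estimate). The auxiliary variables are refreshed
    with a Crank-Nicolson move, so successive likelihood estimates are
    positively correlated and the acceptance ratio has lower variance."""
    rng = np.random.default_rng(rng)
    theta = np.atleast_1d(np.array(theta0, dtype=float))
    u = rng.standard_normal(n_u)
    ll = loglik_hat(theta, u)
    chain = np.empty((n_iter, theta.size))
    for i in range(n_iter):
        # random-walk proposal on the parameters
        theta_prop = theta + prop_sd * rng.standard_normal(theta.size)
        # correlate the proposed auxiliary variables with the current ones
        u_prop = rho * u + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_u)
        ll_prop = loglik_hat(theta_prop, u_prop)
        log_alpha = (ll_prop + log_prior(theta_prop)) - (ll + log_prior(theta))
        if np.log(rng.uniform()) < log_alpha:
            theta, u, ll = theta_prop, u_prop, ll_prop
        chain[i] = theta
    return chain
```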
We consider Bayesian inference for stochastic differential equation mixed-effects models (SDEMEMs), exemplifying tumor response to treatment and regrowth in mice. We produce an extensive study on how an SDEMEM can be fitted using both exact inference based on pseudo-marginal MCMC and approximate inference via Bayesian synthetic likelihood (BSL). We investigate a two-compartment SDEMEM, the compartments corresponding to the fractions of tumor cells killed by and surviving a treatment, respectively. The case study data come from a tumor xenography study with two treatment groups and one control, each containing 5-8 mice. Results from the case study and from simulations indicate that the SDEMEM is able to reproduce the observed growth patterns and that BSL is a robust tool for inference in SDEMEMs. Finally, we compare the fit of the SDEMEM to a similar ordinary differential equation model. Due to small sample sizes, strong prior information is needed to identify all model parameters in the SDEMEM, and it cannot be determined which of the two models is better at predicting tumor growth curves. In a simulation study we find that, with a sample of 17 mice per group, BSL is able to identify all model parameters and distinguish treatment groups.
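
For context, the core of a Bayesian synthetic likelihood evaluation (the general technique, not the specific implementation of this study) replaces the intractable likelihood with a Gaussian fitted to simulated summary statistics. The helper below is a minimal sketch; simulate_summary is a hypothetical user-supplied function returning a vector of summaries for a given parameter value.

```python
import numpy as np
from scipy.stats import multivariate_normal

def synthetic_loglik(theta, observed_summary, simulate_summary,
                     n_sim=100, rng=None):
    """Gaussian synthetic log-likelihood: simulate n_sim summary-statistic
    vectors from the model at theta, fit a multivariate normal to them, and
    evaluate the observed summaries under that fitted normal."""
    rng = np.random.default_rng(rng)
    sims = np.array([simulate_summary(theta, rng) for _ in range(n_sim)])
    mean = sims.mean(axis=0)
    cov = np.cov(sims, rowvar=False)
    return multivariate_normal.logpdf(observed_summary, mean=mean, cov=cov)
```

This estimate can then be plugged into a standard Metropolis-Hastings loop in place of the exact likelihood.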
Estimation of the precision matrix (or inverse covariance matrix) is of great importance in statistical data analysis. However, as the number of parameters scales quadratically with the dimension p, computation becomes very challenging when p is large. In this paper, we propose an adaptive sieving reduction algorithm to generate a solution path for the estimation of precision matrices under the $\ell_1$-penalized D-trace loss, with each subproblem being solved by a second-order algorithm. In each iteration of our algorithm, we are able to greatly reduce the number of variables in the problem based on the Karush-Kuhn-Tucker (KKT) conditions and the sparse structure of the estimated precision matrix from the previous iteration. As a result, our algorithm is capable of handling datasets with very high dimensions that may go beyond the capacity of existing methods. Moreover, for the subproblem in each iteration, rather than solving the primal problem directly, we develop a semismooth Newton augmented Lagrangian algorithm with global linear convergence on the dual problem to improve efficiency. Theoretical properties of our proposed algorithm have been established. In particular, we show that the convergence rate of our algorithm is asymptotically superlinear. The high efficiency and promising performance of our algorithm are illustrated via extensive simulation studies and real data applications, with comparison to several state-of-the-art solvers.
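
The $\ell_1$-penalized D-trace loss referred to above is 0.5*<Theta^2, S> - tr(Theta) + lambda*||Theta||_{1,off}, where S is the sample covariance. The sketch below minimizes this objective with plain proximal-gradient iterations, which is far simpler (and slower) than the paper's adaptive sieving plus semismooth Newton augmented Lagrangian approach; it only illustrates the objective and its smooth-part gradient.

```python
import numpy as np

def soft_threshold_offdiag(theta, tau):
    """Soft-threshold the off-diagonal entries; leave the diagonal untouched."""
    shrunk = np.sign(theta) * np.maximum(np.abs(theta) - tau, 0.0)
    np.fill_diagonal(shrunk, np.diag(theta))
    return shrunk

def dtrace_prox_grad(sample_cov, lam, step=None, n_iter=500):
    """Proximal-gradient iterations for the l1-penalized D-trace loss
        0.5 * <Theta^2, S> - tr(Theta) + lam * ||Theta||_{1,off},
    whose smooth-part gradient is 0.5 * (S @ Theta + Theta @ S) - I."""
    p = sample_cov.shape[0]
    if step is None:
        # the smooth part has Lipschitz gradient with constant lambda_max(S)
        step = 1.0 / np.linalg.eigvalsh(sample_cov)[-1]
    theta = np.eye(p)
    for _ in range(n_iter):
        grad = 0.5 * (sample_cov @ theta + theta @ sample_cov) - np.eye(p)
        theta = soft_threshold_offdiag(theta - step * grad, step * lam)
    return theta
```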
Motivated by penalized likelihood maximization in complex models, we study optimization problems where neither the function to optimize nor its gradient has an explicit expression, but the gradient can be approximated by a Monte Carlo technique. We propose a new algorithm based on a stochastic approximation of the Proximal-Gradient (PG) algorithm. This new algorithm, named Stochastic Approximation PG (SAPG), is the combination of a stochastic gradient descent step which, roughly speaking, computes a smoothed approximation of the past gradients along the iterations, and a proximal step. The choice of the step size and the Monte Carlo batch size for the stochastic gradient descent step in SAPG is discussed. Our convergence results cover the cases of biased and unbiased Monte Carlo approximations. While the convergence analysis of the Monte Carlo PG algorithm has already been addressed in the literature (see Atchade et al. [2016]), the convergence analysis of SAPG is new. The two algorithms are compared on a linear mixed-effects model as a toy example. A more challenging application is proposed on nonlinear mixed-effects models in high dimension with a pharmacokinetic data set including genomic covariates. To the best of our knowledge, our work provides the first convergence result for a numerical method designed to solve penalized maximum likelihood in a nonlinear mixed-effects model.
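
A schematic version of the SAPG idea described above (a smoothed Monte Carlo gradient followed by a proximal step) might look as follows for an l1 penalty. The specific step-size and Monte Carlo batch-size schedules analysed in the paper are not reproduced here; grad_mc is a hypothetical user-supplied Monte Carlo gradient estimator and the averaging weight is an illustrative choice.

```python
import numpy as np

def sapg(grad_mc, theta0, lam, n_iter=1000, step=0.1, rng=None):
    """Schematic stochastic-approximation proximal-gradient loop.

    grad_mc(theta, rng) returns a noisy Monte Carlo estimate of the gradient
    of the smooth part of the objective; the running average g_bar plays the
    role of the smoothed past gradient, and the proximal step is the
    soft-thresholding operator of the l1 penalty lam * ||theta||_1."""
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    g_bar = np.zeros_like(theta)
    for k in range(1, n_iter + 1):
        gamma = 1.0 / k                      # decreasing averaging weight
        g_bar = (1.0 - gamma) * g_bar + gamma * grad_mc(theta, rng)
        z = theta - step * g_bar             # gradient step on the smooth part
        theta = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
    return theta
```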
Stochastic differential equations and stochastic dynamics are good models for describing stochastic phenomena in the real world. In this paper, we study N independent stochastic processes X_i(t) with real entries, where the processes are determined by stochastic differential equations whose drift term depends on random effects. We obtain a Girsanov-type formula for the stochastic differential equation driven by fractional Brownian motion through a kernel transformation. Under some assumptions on the random effects, we obtain parameter estimators by maximum likelihood estimation and give some numerical simulations for discrete observations. Results show that, for different values of the Hurst parameter H, the parameter estimator is closer to the true value as the amount of data increases.
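
Fractional Brownian motion, which drives the SDEs in this work, can be simulated exactly on a fixed time grid by factorizing its covariance matrix. The sketch below does this with a Cholesky decomposition; it is a generic building block rather than the paper's estimation procedure, and the Hurst index H and grid settings are illustrative.

```python
import numpy as np

def fbm_paths(n_paths, n_steps, T=1.0, H=0.7, rng=None):
    """Exact simulation of fractional Brownian motion on a regular grid via a
    Cholesky factor of its covariance
        Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    rng = np.random.default_rng(rng)
    t = np.linspace(T / n_steps, T, n_steps)            # grid excluding t = 0
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    chol = np.linalg.cholesky(cov + 1e-12 * np.eye(n_steps))  # jitter for stability
    z = rng.standard_normal((n_paths, n_steps))
    paths = z @ chol.T
    return np.hstack([np.zeros((n_paths, 1)), paths])   # prepend B_H(0) = 0
```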