
Bayesian analysis of immune response dynamics with sparse time series data

Published by: Mike West
Publication date: 2016
Research field: Mathematical Statistics
Paper language: English





In vaccine development, the temporal profiles of the relative abundance of subtypes of immune cells (T-cells) are key to understanding vaccine efficacy. Complex and expensive experimental studies generate very sparse time series data on this immune response. Fitting multi-parameter dynamic models of immune response dynamics, which is central to evaluating mechanisms underlying vaccine efficacy, is challenged by data sparsity. The research reported here addresses this challenge. For HIV/SIV vaccine studies in macaques, we: (a) introduce novel dynamic models of the progression of cellular populations over time, with relevant time-delayed components reflecting the vaccine response; (b) define an effective Bayesian model fitting strategy that couples Markov chain Monte Carlo (MCMC) with Approximate Bayesian Computation (ABC), building on the complementary strengths of the two approaches, neither of which is effective alone; (c) explore questions of information content in the sparse time series for each of the model parameters, linking to experimental design and model simplification for future experiments; and (d) develop, apply and compare the analysis with samples from a recent HIV/SIV experiment, yielding novel insights and conclusions about the progressive response to the vaccine and how it varies across subjects.
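The MCMC-ABC coupling in (b) can be illustrated with a minimal ABC-MCMC sketch. The decay model, sampling times, prior, tolerance, and starting point below are hypothetical toy stand-ins, not the paper's immune-response models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-parameter decay curve standing in for the paper's
# richer time-delayed immune-response models, observed at sparse times.
t_obs = np.array([1.0, 3.0, 7.0, 14.0, 28.0])

def simulate(theta):
    a, b = theta
    return a * np.exp(-b * t_obs) + rng.normal(0, 0.05, t_obs.size)

y_obs = simulate(np.array([2.0, 0.15]))      # synthetic "data"

def log_prior(theta):
    a, b = theta
    return 0.0 if (0 < a < 10 and 0 < b < 1) else -np.inf

def abc_mcmc(n_iter=5000, eps=0.5, step=0.05):
    # ABC-MCMC: accept a random-walk proposal only if (i) its simulated
    # data land within eps of the observations and (ii) the usual
    # Metropolis-Hastings check on the prior passes.
    theta = np.array([1.8, 0.12])            # start near a high-acceptance
    chain = []                               # region (in practice found by
    for _ in range(n_iter):                  # a pilot rejection-ABC run)
        prop = theta + step * rng.normal(size=2)
        if np.isfinite(log_prior(prop)) and \
           np.linalg.norm(simulate(prop) - y_obs) < eps and \
           np.log(rng.uniform()) < log_prior(prop) - log_prior(theta):
            theta = prop
        chain.append(theta.copy())
    return np.array(chain)

chain = abc_mcmc()
print(chain[2500:].mean(axis=0))             # crude posterior mean
```

With a flat prior the Metropolis-Hastings ratio is trivial, so acceptance reduces to the ABC distance check; the paper's actual strategy couples full MCMC and ABC in a more structured way to exploit their complementary strengths.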




Read also

Many modern data sets require inference methods that can estimate the shared and individual-specific components of variability in collections of matrices that change over time. Promising methods have been developed to analyze these types of data in static cases, but very few approaches are available for dynamic settings. To address this gap, we consider novel models and inference methods for pairs of matrices in which the columns correspond to multivariate observations at different time points. In order to characterize common and individual features, we propose a Bayesian dynamic factor modeling framework called Time Aligned Common and Individual Factor Analysis (TACIFA) that includes uncertainty in time alignment through an unknown warping function. We provide theoretical support for the proposed model, showing identifiability and posterior concentration. The structure enables efficient computation through a Hamiltonian Monte Carlo (HMC) algorithm. We show excellent performance in simulations, and illustrate the method through application to a social synchrony experiment.
While there is an increasing amount of literature about Bayesian time series analysis, only a few Bayesian nonparametric approaches to multivariate time series exist. Most methods rely on Whittle's likelihood, involving the second-order structure of a stationary time series by means of its spectral density matrix. This is often modeled in terms of the Cholesky decomposition to ensure positive definiteness. However, asymptotic properties such as posterior consistency or posterior contraction rates are not known. A different idea is to model the spectral density matrix by means of random measures. This is in line with existing approaches for the univariate case, where the normalized spectral density is modeled similarly to a probability density, e.g. with a Dirichlet process mixture of Beta densities. In this work, we present a related approach for multivariate time series, with matrix-valued mixture weights induced by a Hermitian positive definite Gamma process. The proposed procedure is shown to perform well for both simulated and real data. Posterior consistency and contraction rates are also established.
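For the univariate case referenced above, the Whittle likelihood approximates the time-series likelihood through the periodogram and the spectral density. A minimal sketch for an AR(1) model, where a grid search over the Whittle log-likelihood recovers the autoregressive coefficient (all settings here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series; its spectral density is
# f(w) = sigma^2 / (2*pi*|1 - phi*exp(-i*w)|^2).
phi, sigma, n = 0.6, 1.0, 512
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal(0, sigma)

# Periodogram at the positive Fourier frequencies.
freqs = 2 * np.pi * np.arange(1, n // 2) / n
I = np.abs(np.fft.fft(x)[1:n // 2]) ** 2 / (2 * np.pi * n)

def ar1_spec(w, phi, sigma=1.0):
    return sigma**2 / (2 * np.pi * np.abs(1 - phi * np.exp(-1j * w))**2)

def whittle_loglik(phi):
    # Whittle approximation: -sum( log f(w_j) + I(w_j)/f(w_j) ).
    f = ar1_spec(freqs, phi)
    return -np.sum(np.log(f) + I / f)

grid = np.linspace(-0.9, 0.9, 181)
phi_hat = grid[np.argmax([whittle_loglik(p) for p in grid])]
print(phi_hat)   # grid-based Whittle estimate of phi
```

The nonparametric approaches discussed above replace the parametric `ar1_spec` with a flexible prior on the (normalized) spectral density itself.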
The Whittle likelihood is widely used for Bayesian nonparametric estimation of the spectral density of stationary time series. However, the loss of efficiency for non-Gaussian time series can be substantial. On the other hand, parametric methods are more powerful if the model is well-specified, but may fail entirely otherwise. Therefore, we suggest a nonparametric correction of a parametric likelihood taking advantage of the efficiency of parametric models while mitigating sensitivities through a nonparametric amendment. Using a Bernstein-Dirichlet prior for the nonparametric spectral correction, we show posterior consistency and illustrate the performance of our procedure in a simulation study and with LIGO gravitational wave data.
Our goal is to estimate causal interactions in multivariate time series. Using vector autoregressive (VAR) models, these can be defined based on non-vanishing coefficients belonging to respective time-lagged instances. As in most cases a parsimonious causality structure is assumed, a promising approach to causal discovery consists in fitting VAR models with an additional sparsity-promoting regularization. Along this line we here propose that sparsity should be enforced for the subgroups of coefficients that belong to each pair of time series, as the absence of a causal relation requires the coefficients for all time-lags to become jointly zero. Such behavior can be achieved by means of l1-l2-norm regularized regression, for which an efficient active set solver has been proposed recently. Our method is shown to outperform standard methods in recovering simulated causality graphs. The results are on par with a second novel approach which uses multiple statistical testing.
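The l1-l2 (group-lasso) penalty described above can be sketched with a small proximal-gradient solver, where each group collects all time-lagged coefficients linking one input series to one output series. The simulated VAR, tuning constant, and iteration counts below are illustrative choices, not the authors' solver:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a 3-variable VAR(2) in which only series 0 Granger-causes
# series 1; all other cross-series coefficients are zero at every lag.
d, p, n = 3, 2, 400
A = np.zeros((p, d, d))
A[0] = 0.5 * np.eye(d)
A[0, 1, 0], A[1, 1, 0] = 0.4, 0.2            # link 0 -> 1 at lags 1 and 2
x = np.zeros((n, d))
for t in range(p, n):
    x[t] = sum(A[k] @ x[t - 1 - k] for k in range(p)) + 0.1 * rng.normal(size=d)

Y = x[p:]                                                   # responses
X = np.hstack([x[p - 1 - k: n - 1 - k] for k in range(p)])  # lagged design

def group_lasso_var(Y, X, lam=0.002, n_iter=500):
    # Proximal gradient (ISTA): gradient step on the least-squares loss,
    # then group soft-thresholding, with one group per (input j, output i)
    # pair containing all p lag coefficients.
    m, q = X.shape
    B = np.zeros((q, d))
    L = np.linalg.norm(X, 2) ** 2 / m        # Lipschitz constant of the loss
    for _ in range(n_iter):
        B = B - X.T @ (X @ B - Y) / (m * L)
        for i in range(d):
            for j in range(d):
                idx = [j + k * d for k in range(p)]
                nrm = np.linalg.norm(B[idx, i])
                B[idx, i] *= max(0.0, 1 - lam / (L * nrm)) if nrm > 0 else 0.0
    return B

B = group_lasso_var(Y, X)
# Norm of each group j -> i: zero norm means "no causal link estimated".
print(np.array([[np.linalg.norm(B[[j, j + d], i]) for j in range(d)]
                for i in range(d)]))
```

Because absent links require all lag coefficients to vanish jointly, the group norm, rather than any individual coefficient, is the natural quantity to threshold.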
Adaptive collection of data is commonplace in applications throughout science and engineering. From the point of view of statistical inference, however, adaptive data collection induces memory and correlation in the samples, and poses significant challenges. We consider high-dimensional linear regression, where the samples are collected adaptively, and the sample size $n$ can be smaller than $p$, the number of covariates. In this setting, there are two distinct sources of bias: the first due to regularization imposed for consistent estimation, e.g. using the LASSO, and the second due to adaptivity in collecting the samples. We propose online debiasing, a general procedure for estimators such as the LASSO, which addresses both sources of bias. In two concrete contexts $(i)$ time series analysis and $(ii)$ batched data collection, we demonstrate that online debiasing optimally debiases the LASSO estimate when the underlying parameter $\theta_0$ has sparsity of order $o(\sqrt{n}/\log p)$. In this regime, the debiased estimator can be used to compute $p$-values and confidence intervals of optimal size.
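The debiasing step can be illustrated in its simplest non-adaptive form with $n > p$, where the precision matrix is available exactly from the design; the online construction for adaptively collected, high-dimensional data described above is more delicate. All dimensions and tuning values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Low-dimensional i.i.d.-design sketch of the debiasing idea; the paper's
# setting (adaptive sampling, n < p) needs the online construction instead.
n, p = 200, 10
theta0 = np.zeros(p)
theta0[:3] = [1.0, -0.8, 0.5]                # sparse truth
X = rng.normal(size=(n, p))
y = X @ theta0 + 0.5 * rng.normal(size=n)

def lasso(X, y, lam, n_iter=2000):
    # LASSO via ISTA: gradient step on (1/2n)||y - X th||^2, then
    # soft-thresholding at lam / L.
    L = np.linalg.norm(X, 2) ** 2 / n
    th = np.zeros(p)
    for _ in range(n_iter):
        th = th - X.T @ (X @ th - y) / (n * L)
        th = np.sign(th) * np.maximum(np.abs(th) - lam / L, 0.0)
    return th

th_lasso = lasso(X, y, lam=0.1)

# Debiasing: add back M X^T (y - X th)/n, with M an estimate of the
# precision matrix. With M the exact inverse sample covariance (possible
# here because n >> p), the correction recovers the unbiased OLS fit.
M = np.linalg.inv(X.T @ X / n)
th_debiased = th_lasso + M @ X.T @ (y - X @ th_lasso) / n
print(np.round(th_debiased[:4], 3))
```

The LASSO shrinks the nonzero coordinates toward zero; the one-step correction removes that shrinkage bias, which is what makes the debiased estimate usable for $p$-values and confidence intervals.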
