
Beyond Whittle: Nonparametric correction of a parametric likelihood with a focus on Bayesian time series analysis

Added by Claudia Kirch
Publication date: 2017
Language: English





The Whittle likelihood is widely used for Bayesian nonparametric estimation of the spectral density of stationary time series. However, the loss of efficiency for non-Gaussian time series can be substantial. On the other hand, parametric methods are more powerful if the model is well specified, but may fail entirely otherwise. We therefore suggest a nonparametric correction of a parametric likelihood that takes advantage of the efficiency of parametric models while mitigating sensitivity to model misspecification through a nonparametric amendment. Using a Bernstein-Dirichlet prior for the nonparametric spectral correction, we show posterior consistency and illustrate the performance of our procedure in a simulation study and with LIGO gravitational wave data.
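The Whittle likelihood referenced above approximates the exact Gaussian likelihood by comparing the periodogram to a candidate spectral density at the Fourier frequencies. The sketch below (a generic textbook form, not the paper's corrected likelihood; the white-noise example and function names are illustrative) shows how it is evaluated:

```python
import numpy as np

def whittle_log_likelihood(x, spectral_density):
    """Whittle log-likelihood of a zero-mean stationary series x,
    given a candidate spectral density function f(lambda)."""
    n = len(x)
    # Positive Fourier frequencies lambda_j = 2*pi*j/n, j = 1, ..., floor((n-1)/2)
    j = np.arange(1, (n - 1) // 2 + 1)
    freqs = 2 * np.pi * j / n
    # Periodogram I(lambda_j) = |DFT(x)_j|^2 / (2*pi*n)
    periodogram = np.abs(np.fft.fft(x)[j]) ** 2 / (2 * np.pi * n)
    f = spectral_density(freqs)
    # Sum over Fourier frequencies of -(log f + I/f)
    return -np.sum(np.log(f) + periodogram / f)

# Example: Gaussian white noise with variance sigma^2 has f(lambda) = sigma^2/(2*pi)
rng = np.random.default_rng(0)
x = rng.standard_normal(512)
ll = whittle_log_likelihood(x, lambda lam: np.full_like(lam, 1 / (2 * np.pi)))
```

A badly misspecified spectral density (e.g. white noise with variance 25 instead of 1) yields a much lower Whittle log-likelihood on the same data, which is the basic mechanism the Bayesian procedures here exploit.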



Related research

While there is an increasing amount of literature about Bayesian time series analysis, only a few Bayesian nonparametric approaches to multivariate time series exist. Most methods rely on Whittle's likelihood, which involves the second-order structure of a stationary time series by means of its spectral density matrix. This matrix is often modeled in terms of its Cholesky decomposition to ensure positive definiteness. However, asymptotic properties such as posterior consistency or posterior contraction rates are not known for these methods. A different idea is to model the spectral density matrix by means of random measures. This is in line with existing approaches for the univariate case, where the normalized spectral density is modeled similarly to a probability density, e.g. with a Dirichlet process mixture of Beta densities. In this work, we present a related approach for multivariate time series, with matrix-valued mixture weights induced by a Hermitian positive definite Gamma process. The proposed procedure is shown to perform well for both simulated and real data, and posterior consistency and contraction rates are established.
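The Cholesky trick mentioned above guarantees positive definiteness by construction: any matrix of the form L L* with L lower triangular and positive diagonal is Hermitian positive definite. A minimal sketch (the parametrization and names are illustrative, not the paper's prior):

```python
import numpy as np

def spectral_matrix_from_cholesky(diag_log, lower_entries, d):
    """Build a Hermitian positive definite d x d spectral density matrix
    f = L L^* from unconstrained parameters: real log-diagonal entries and
    complex strictly-lower-triangular entries of the Cholesky factor L."""
    L = np.zeros((d, d), dtype=complex)
    L[np.diag_indices(d)] = np.exp(diag_log)   # exp() keeps the diagonal positive
    L[np.tril_indices(d, -1)] = lower_entries  # unconstrained complex entries
    return L @ L.conj().T

# Illustrative random parameters for a 3 x 3 spectral matrix at one frequency
rng = np.random.default_rng(1)
d = 3
f = spectral_matrix_from_cholesky(
    rng.standard_normal(d),
    rng.standard_normal(d * (d - 1) // 2) + 1j * rng.standard_normal(d * (d - 1) // 2),
    d,
)
```

Because positive definiteness holds for every parameter value, a prior or sampler can work on the unconstrained entries directly.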
Many modern data sets require inference methods that can estimate the shared and individual-specific components of variability in collections of matrices that change over time. Promising methods have been developed to analyze these types of data in static cases, but very few approaches are available for dynamic settings. To address this gap, we consider novel models and inference methods for pairs of matrices in which the columns correspond to multivariate observations at different time points. In order to characterize common and individual features, we propose a Bayesian dynamic factor modeling framework called Time Aligned Common and Individual Factor Analysis (TACIFA) that includes uncertainty in time alignment through an unknown warping function. We provide theoretical support for the proposed model, showing identifiability and posterior concentration. The structure enables efficient computation through a Hamiltonian Monte Carlo (HMC) algorithm. We show excellent performance in simulations, and illustrate the method through application to a social synchrony experiment.
In vaccine development, the temporal profiles of the relative abundance of immune cell subtypes (T-cells) are key to understanding vaccine efficacy. Complex and expensive experimental studies generate very sparse time series data on this immune response. Fitting multi-parameter models of the immune response dynamics, which is central to evaluating mechanisms underlying vaccine efficacy, is challenged by data sparsity. The research reported here addresses this challenge. For HIV/SIV vaccine studies in macaques, we: (a) introduce novel dynamic models of the progression of cellular populations over time, with relevant time-delayed components reflecting the vaccine response; (b) define an effective Bayesian model-fitting strategy that couples Markov chain Monte Carlo (MCMC) with approximate Bayesian computation (ABC), building on the complementary strengths of the two approaches, neither of which is effective alone; (c) explore questions of information content in the sparse time series for each of the model parameters, linking to experimental design and model simplification for future experiments; and (d) develop, apply, and compare the analysis with samples from a recent HIV/SIV experiment, yielding novel insights and conclusions about the progressive response to the vaccine and how it varies across subjects.
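The coupling of MCMC with ABC in step (b) is specific to that paper; the core likelihood-free idea it builds on, plain ABC rejection, fits in a few lines. The sketch below (a generic toy, not the paper's scheme; all names are illustrative) keeps only those parameter draws whose simulated summary statistic lands close to the observed one:

```python
import numpy as np

def abc_rejection(observed_summary, simulate, prior_sample, eps, n_draws=10000):
    """Plain ABC rejection sampling: draw theta from the prior, simulate a
    summary statistic, and accept theta if the simulated summary is within
    eps of the observed summary."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if abs(simulate(theta) - observed_summary) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal distribution from its sample mean
rng = np.random.default_rng(2)
obs = 1.0
post = abc_rejection(
    obs,
    simulate=lambda th: rng.normal(th, 1, size=50).mean(),
    prior_sample=lambda: rng.uniform(-5, 5),
    eps=0.2,
)
```

The accepted draws approximate the posterior; shrinking `eps` improves the approximation at the cost of fewer acceptances, which is the trade-off that motivates combining ABC with MCMC in practice.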
We propose a Bayesian nonparametric approach to modelling and predicting a class of functional time series with application to energy markets, based on fully observed, noise-free functional data. Traders in such contexts conceive profitable strategies if they can anticipate the impact of their bidding actions on the aggregate demand and supply curves, which in turn need to be predicted reliably. Here we propose a simple Bayesian nonparametric method for predicting such curves, which take the form of monotonic bounded step functions. We borrow ideas from population genetics by defining a class of interacting particle systems to model the functional trajectory, and develop an implementation strategy which uses ideas from Markov chain Monte Carlo and approximate Bayesian computation techniques and allows to circumvent the intractability of the likelihood. Our approach shows great adaptation to the degree of smoothness of the curves and the volatility of the functional series, proves to be robust to an increase of the forecast horizon and yields an uncertainty quantification for the functional forecasts. We illustrate the model and discuss its performance with simulated datasets and on real data relative to the Italian natural gas market.
In this paper, we introduce a method for segmenting time series data using tools from Bayesian nonparametrics. We consider the task of temporal segmentation of a set of time series into representative stationary segments. We use Gaussian process (GP) priors to impose our knowledge about the characteristics of the underlying stationary segments, and use a nonparametric distribution to partition the sequences into such segments, formulated in terms of a prior distribution on segment length. Given the segmentation, the model can be viewed as a variant of a Gaussian mixture model where the mixture components are described using the covariance function of a GP. We demonstrate the effectiveness of our model on synthetic data as well as on real heartbeat time series, where the task is to segment the characteristic beat types and to classify the recordings into classes corresponding to healthy and abnormal heart sounds.
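In such GP-based segmentation, each candidate segment is scored by its marginal likelihood under a GP prior, and partitions are compared via these scores. A minimal sketch of the per-segment score, assuming a squared-exponential covariance and illustrative hyperparameters (not the paper's choices):

```python
import numpy as np

def gp_segment_log_marginal(y, length_scale=1.0, signal_var=1.0, noise_var=0.1):
    """Log marginal likelihood of one segment y under a zero-mean GP with a
    squared-exponential covariance plus observation noise, computed via the
    standard Cholesky-based formula."""
    n = len(y)
    t = np.arange(n, dtype=float)
    # Squared-exponential kernel matrix with a noise term on the diagonal
    K = signal_var * np.exp(-0.5 * ((t[:, None] - t[None, :]) / length_scale) ** 2)
    K += noise_var * np.eye(n)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # log N(y | 0, K) = -0.5 y^T K^{-1} y - 0.5 log|K| - (n/2) log(2*pi)
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()
            - 0.5 * n * np.log(2 * np.pi))
```

A segmentation procedure would sum such scores over the segments of each candidate partition and combine them with the prior on segment lengths.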
