
Parameter Estimation of Heavy-Tailed AR Model with Missing Data via Stochastic EM

Posted by Junyan Liu
Publication date: 2018
Paper language: English





The autoregressive (AR) model is widely used to understand time series data. Traditionally, the innovation noise of the AR model is assumed to be Gaussian. However, many time series applications, for example financial time series, are non-Gaussian; an AR model with more general heavy-tailed innovations is therefore preferred. Another issue that frequently arises in time series is missing values, due to recording failures or unexpected data loss. Although there are numerous works on Gaussian AR time series with missing values, to the best of our knowledge no existing work addresses missing data in the heavy-tailed AR model. In this paper, we consider this issue for the first time and propose an efficient framework for parameter estimation from incomplete heavy-tailed time series, based on stochastic approximation expectation maximization (SAEM) coupled with a Markov chain Monte Carlo (MCMC) procedure. The proposed algorithm is computationally cheap and easy to implement. Convergence of the algorithm to a stationary point of the observed-data likelihood is rigorously proved. Extensive simulations and real-data analyses demonstrate the efficacy of the proposed framework.
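The abstract gives the recipe (SAEM with an MCMC simulation step) but no code. Below is a minimal, hedged sketch of how such an estimator might look for the special case of an AR(1) with Student-t innovations, written as a Gaussian scale mixture so that both the mixing weights and the interior missing values have tractable conditional distributions. Everything here is an assumption for illustration: the function name, the fixed degrees of freedom `nu` (the paper also estimates the tail parameter), and the single-sweep MCMC step are simplifications, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def saem_ar1_student_t(y, nu=3.0, n_iter=500, burn_in=100):
    """Hedged sketch: SAEM for an AR(1) with Student-t innovations and
    missing data (np.nan). The tail parameter nu is held fixed."""
    y = np.asarray(y, dtype=float).copy()
    miss = np.isnan(y)
    y[miss] = np.nanmean(y)                  # crude initial imputation
    n = len(y)
    phi, sigma2 = 0.0, float(np.nanvar(y))   # initial parameters
    S0 = S1 = S2 = 0.0                       # SA-averaged sufficient statistics

    for k in range(n_iter):
        # Simulation step (one MCMC sweep). Scale-mixture weights:
        # tau_t | rest ~ Gamma(shape=(nu+1)/2, rate=(nu + e_t^2/sigma2)/2).
        e = y[1:] - phi * y[:-1]
        tau = rng.gamma((nu + 1) / 2, 2.0 / (nu + e**2 / sigma2))
        # An interior missing y_t is conditionally Gaussian given tau.
        for t in np.where(miss)[0]:
            if 0 < t < n - 1:
                prec = tau[t - 1] + phi**2 * tau[t]
                mean = (tau[t - 1] * phi * y[t - 1]
                        + phi * tau[t] * y[t + 1]) / prec
                y[t] = mean + np.sqrt(sigma2 / prec) * rng.standard_normal()
        # Stochastic-approximation update of the sufficient statistics.
        gam = 1.0 if k < burn_in else 1.0 / (k - burn_in + 1)
        S0 += gam * (np.sum(tau * y[1:] ** 2) - S0)
        S1 += gam * (np.sum(tau * y[:-1] * y[1:]) - S1)
        S2 += gam * (np.sum(tau * y[:-1] ** 2) - S2)
        # M-step: closed-form weighted least squares given the statistics.
        phi = S1 / S2
        sigma2 = (S0 - S1**2 / S2) / (n - 1)
    return phi, sigma2
```

The constant step size during burn-in followed by a decreasing 1/k schedule is the standard SAEM averaging device; the closed-form M-step is what makes the scale-mixture representation attractive here.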




Read also

Heavy-tailed metrics are common and often critical to product evaluation in the online world. While we may have samples large enough for the Central Limit Theorem to kick in, experimentation is challenging due to the wide confidence intervals of estimation. We demonstrate the pressure by running A/A simulations with customer spending data from a large-scale e-commerce site. Solutions are then explored. On one front we address the heavy tail directly and highlight the often ignored nuances of winsorization; in particular, the validity of the false-positive rate can be at risk. Further inspired by robust statistics, we introduce Huber regression as a better way to measure treatment effects. On another front, covariates from the pre-experiment period are exploited. Although they are independent of assignment and potentially explain the variation of the response well, the concern is that such models are trained to minimize prediction error rather than the bias of the effect estimate. We find the framework of orthogonal learning useful: rather than matching raw observations, it matches residuals from two predictions, one of the response and the other of the assignment. Robust regression is readily integrated, together with cross-fitting. The final design proves highly effective in driving down variance while controlling bias. It powers our daily practice and can hopefully benefit other applications in the industry.
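The combination described, cross-fitted residualization plus a robust final regression, can be sketched in a few lines. This is a hedged illustration, not the team's production pipeline: the nuisance model (gradient boosting), the fold count, and the function name are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import KFold

def orthogonal_huber_effect(X, t, y, n_splits=5):
    """Residual-on-residual treatment-effect estimate with a robust final stage.

    X : (n, p) pre-experiment covariates
    t : (n,) treatment assignment (0/1)
    y : (n,) heavy-tailed response (e.g. customer spend)
    """
    ry = np.empty(len(y), dtype=float)
    rt = np.empty(len(y), dtype=float)
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        # cross-fitting: nuisance models are fit on one fold, applied to another
        ry[test] = y[test] - GradientBoostingRegressor().fit(X[train], y[train]).predict(X[test])
        rt[test] = t[test] - GradientBoostingRegressor().fit(X[train], t[train]).predict(X[test])
    # robust final regression of response residuals on assignment residuals
    final = HuberRegressor(fit_intercept=False).fit(rt.reshape(-1, 1), ry)
    return final.coef_[0]   # estimated average treatment effect
```

The Huber loss in the final stage down-weights the extreme spenders that would otherwise dominate the estimate, which is exactly the failure mode the A/A simulations expose.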
Nowadays, the confidentiality of data and information is of great importance for many companies and organizations. For this reason, they may prefer not to release exact data, but instead to grant researchers access to approximate data. For example, rather than providing the exact income of their clients, they may only provide researchers with grouped data, that is, the number of clients falling in each of a set of non-overlapping income intervals. The challenge is to estimate the mean and variance structure of the hidden ungrouped data based on the observed grouped data. To tackle this problem, this work considers the exact observed data likelihood and applies the Expectation-Maximization (EM) and Monte-Carlo EM (MCEM) algorithms for cases where the hidden data follow a univariate, bivariate, or multivariate normal distribution. The results are then compared with the case of ignoring the grouping and applying regular maximum likelihood. The well-known Galton data and simulated datasets are used to evaluate the properties of the proposed EM and MCEM algorithms.
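For the univariate normal case, the EM iteration has a closed form: the E-step needs only the first two moments of a normal truncated to each interval, and the M-step is a weighted moment match. A minimal sketch under stated assumptions (finite interval edges; the function name is hypothetical):

```python
import numpy as np
from scipy.stats import norm

def em_grouped_normal(edges, counts, n_iter=200):
    """Hedged sketch: EM for (mu, sigma) of a normal observed only as
    interval counts. edges has length J+1 and is assumed finite
    (replace open-ended bins by wide finite limits)."""
    edges = np.asarray(edges, dtype=float)
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    # initialize from bin midpoints
    mid = 0.5 * (edges[:-1] + edges[1:])
    mu = np.sum(counts * mid) / n
    sigma = np.sqrt(np.sum(counts * (mid - mu) ** 2) / n)
    for _ in range(n_iter):
        a = (edges[:-1] - mu) / sigma              # standardized lower bounds
        b = (edges[1:] - mu) / sigma               # standardized upper bounds
        Z = norm.cdf(b) - norm.cdf(a)              # interval probabilities
        # E-step: first two moments of the normal truncated to each interval
        m1 = mu + sigma * (norm.pdf(a) - norm.pdf(b)) / Z
        var = sigma**2 * (1 + (a * norm.pdf(a) - b * norm.pdf(b)) / Z
                          - ((norm.pdf(a) - norm.pdf(b)) / Z) ** 2)
        m2 = var + m1**2
        # M-step: maximize the expected complete-data log-likelihood
        mu = np.sum(counts * m1) / n
        sigma = np.sqrt(np.sum(counts * (m2 - 2 * mu * m1 + mu**2)) / n)
    return mu, sigma
```

The bivariate and multivariate cases in the paper require simulation (MCEM) because the truncated moments no longer factor, which is exactly the distinction the abstract draws.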
We study stochastic convex optimization with heavy-tailed data under the constraint of differential privacy. Most prior work on this problem is restricted to the case where the loss function is Lipschitz. Instead, as introduced by Wang, Xiao, Devadas, and Xu, we study general convex loss functions with the assumption that the distribution of gradients has bounded $k$-th moments. We provide improved upper bounds on the excess population risk under approximate differential privacy of $\tilde{O}\left(\sqrt{\frac{d}{n}}+\left(\frac{d}{\epsilon n}\right)^{\frac{k-1}{k}}\right)$ and $\tilde{O}\left(\frac{d}{n}+\left(\frac{d}{\epsilon n}\right)^{\frac{2k-2}{k}}\right)$ for convex and strongly convex loss functions, respectively. We also prove nearly-matching lower bounds under the constraint of pure differential privacy, giving strong evidence that our bounds are tight.
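To read the exponents: heavier tails (smaller $k$) make privacy more expensive. For $k = 2$ (bounded second moments) the convex-case privacy term is $\left(\frac{d}{\epsilon n}\right)^{1/2}$, while as $k \to \infty$ (all moments bounded) it tends to $\frac{d}{\epsilon n}$. This instantiation is plain arithmetic on the stated bound, added here for intuition only.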
Shuai Huang, Trac D. Tran (2020)
1-bit compressive sensing aims to recover sparse signals from quantized 1-bit measurements. Designing efficient approaches that can handle noisy 1-bit measurements is important in a variety of applications. In this paper we use approximate message passing (AMP) to achieve this goal, owing to its high computational efficiency and state-of-the-art performance. In AMP the signal of interest is assumed to follow some prior distribution, and its posterior distribution can be computed and used to recover the signal. In practice, the parameters of the prior distribution are often unknown and need to be estimated. Previous works tried to find the parameters that maximize either the measurement likelihood or the Bethe free entropy, which becomes increasingly difficult for complicated probability models. Here we propose to treat the parameters as unknown variables and compute their posteriors via AMP as well, so that the parameters and the signal can be recovered jointly. This leads to a much simpler way to perform parameter estimation than previous methods and enables us to work with noisy 1-bit measurements. We further extend the proposed approach to a general quantization-noise model that outputs multi-bit measurements. Experimental results show that the proposed approach generally performs much better than other state-of-the-art methods in the zero-noise and moderate-noise regimes, and outperforms them in most cases in the high-noise regime.
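The paper's AMP-with-parameter-posteriors machinery is too involved to reproduce from the abstract alone, but the problem it solves is easy to set up. Below is a sketch of the 1-bit measurement model together with binary iterative hard thresholding (BIHT), a classic baseline for this problem and explicitly not the authors' method; the step size and iteration count are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def biht(y, A, k, n_iter=200, step=0.01):
    """Binary iterative hard thresholding: a classic 1-bit CS baseline
    (not the paper's AMP method). Recovers the direction of a k-sparse
    x from y = sign(A @ x); 1-bit measurements lose the overall scale."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient-like step enforcing sign consistency with y
        x = x + step * A.T @ (y - np.sign(A @ x))
        x[np.argsort(np.abs(x))[:-k]] = 0.0     # keep the k largest entries
    return x / (np.linalg.norm(x) + 1e-12)

# toy usage: 500 one-bit measurements of a 10-sparse signal in R^200
n, m, k = 200, 500, 10
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n))
y = np.sign(A @ x_true)
x_hat = biht(y, A, k)
```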
We consider the problem of selecting deterministic or stochastic models for a biological, ecological, or environmental dynamical process. In most cases, one prefers either deterministic or stochastic candidate models based on experience or subjective judgment. Because the likelihood is complex or intractable in most dynamical models, likelihood-based approaches to model selection are not suitable. We use approximate Bayesian computation (ABC) for parameter estimation and model selection to gain further understanding of the dynamics of two epidemics of chronic wasting disease in mule deer. The main novel contribution of this work is that, under a hierarchical model framework, we compare three types of dynamical models: ordinary differential equation, continuous-time Markov chain, and stochastic differential equation models. To our knowledge, model selection among these types of models has not appeared previously. Since the practice of incorporating dynamical models into data models is becoming more common, the proposed approach may be useful in a variety of applications.
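Rejection-ABC for model choice is simple enough to sketch: place a prior over the model index, simulate, and keep the indices whose simulations land close to the data. The toy below contrasts a deterministic and a stochastic logistic-growth model; the models, prior ranges, distance, and tolerance are all illustrative stand-ins, not the paper's chronic wasting disease models.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(model, theta, n, x0=5.0, cap=100.0):
    """Toy dynamics: logistic growth, deterministic (model 0) or with
    demographic-style noise (model 1)."""
    x = np.empty(n); x[0] = x0
    for t in range(1, n):
        drift = theta * x[t - 1] * (1 - x[t - 1] / cap)
        noise = np.sqrt(abs(drift)) * rng.standard_normal() if model == 1 else 0.0
        x[t] = max(x[t - 1] + drift + noise, 0.0)
    return x

def abc_model_choice(data, n_sims=50_000, eps=5.0):
    """Rejection-ABC over (model index, theta); returns posterior model
    probabilities. eps and the RMS distance are illustrative choices."""
    kept = []
    for _ in range(n_sims):
        m = rng.integers(0, 2)            # uniform prior over the two models
        theta = rng.uniform(0.0, 1.0)     # prior on the growth rate
        sim = simulate(m, theta, len(data))
        if np.sqrt(np.mean((sim - data) ** 2)) < eps:
            kept.append(m)
    kept = np.asarray(kept)
    return np.array([(kept == 0).mean(), (kept == 1).mean()])
```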