
Data Reduction in Markov model using EM algorithm

Posted by Atanu Ghosh Kumar
Publication date: 2018
Research field: Mathematical Statistics
Paper language: English





This paper describes a data reduction technique for a Markov chain of specified order. Instead of observing all the transitions in the Markov chain, we record only a few of them and treat the remaining part as missing. The decision about which transitions to filter out is made before the observation process starts. Based on the filtered chain, we estimate the parameters of the Markov model using the EM algorithm. In the first half of the paper we characterize a class of filtering mechanisms for which all the parameters remain identifiable. In the second half we explain methods of estimation and testing for the transition probabilities of the Markov chain based on the filtered data. The methods are first developed assuming a simple Markov model in which every transition probability is positive, and are then generalized to models with structural zeros in the transition probability matrix. A further extension covers multiple Markov chains. The performance of the proposed estimation method is studied using simulated data as well as a real-life dataset.
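The abstract does not spell out a particular filtering mechanism, so the following is only a minimal sketch under illustrative assumptions: a two-state, first-order Markov chain whose state is deliberately left unrecorded at pre-specified time points (here, every third index), with the transition matrix re-estimated by EM using a forward-backward E-step and a counting M-step. The state space, the filtering rule, and all numerical values are hypothetical, not the paper's scheme.

```python
# Minimal EM sketch for a filtered first-order Markov chain (illustrative
# assumptions: uniform initial distribution, two states, every third state
# hidden; this is NOT the paper's specific filtering mechanism).
import numpy as np

def em_filtered_chain(obs, n_states, n_iter=100):
    """obs: sequence with the observed state index, or -1 where the state was filtered out."""
    T = len(obs)
    P = np.full((n_states, n_states), 1.0 / n_states)       # initial guess for the transition matrix
    # evidence[t, s] = 1 if state s is consistent with the record at time t
    evid = np.ones((T, n_states))
    for t, o in enumerate(obs):
        if o >= 0:
            evid[t] = 0.0
            evid[t, o] = 1.0
    for _ in range(n_iter):
        # E-step: scaled forward-backward pass treating filtered states as missing
        alpha = np.zeros((T, n_states))
        alpha[0] = evid[0] / evid[0].sum()                   # assumes a uniform initial law
        for t in range(1, T):
            alpha[t] = evid[t] * (alpha[t - 1] @ P)
            alpha[t] /= alpha[t].sum()
        beta = np.ones((T, n_states))
        for t in range(T - 2, -1, -1):
            beta[t] = P @ (evid[t + 1] * beta[t + 1])
            beta[t] /= beta[t].sum()
        # expected transition counts under the current parameters
        xi = np.zeros((n_states, n_states))
        for t in range(T - 1):
            m = np.outer(alpha[t], evid[t + 1] * beta[t + 1]) * P
            xi += m / m.sum()
        # M-step: row-normalise the expected counts
        P = xi / xi.sum(axis=1, keepdims=True)
    return P

# toy usage: simulate a chain, hide every third state, and re-estimate P
rng = np.random.default_rng(0)
true_P = np.array([[0.9, 0.1], [0.3, 0.7]])
chain = [0]
for _ in range(2000):
    chain.append(rng.choice(2, p=true_P[chain[-1]]))
filtered = [s if t % 3 else -1 for t, s in enumerate(chain)]
print(em_filtered_chain(filtered, n_states=2))               # should be close to true_P
```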




Read also

Conventional approaches to sampling signals follow the celebrated theorem of Nyquist and Shannon. Compressive sampling, introduced by Donoho and by Candès, Romberg and Tao, is a new paradigm that departs from conventional data acquisition and provides a way of recovering signals using fewer samples than traditional methods require. Here we suggest an alternative way of reconstructing the original signals in compressive sampling using the EM algorithm. We first propose a naive approach which has certain computational difficulties and subsequently modify it to a new approach which performs better than conventional compressive sampling methods. The comparison of the different approaches and the performance of the new approach are studied using simulated data.
A new robust stochastic volatility (SV) model having Student-t marginals is proposed. Our process is defined through a linear normal regression model driven by a latent gamma process that controls temporal dependence. This gamma process is strategically chosen to enable us to find an explicit expression for the pairwise joint density function of the Student-t response process. With this at hand, we propose composite likelihood (CL) based inference for our model, which can be straightforwardly implemented at low computational cost. This is a remarkable feature of our Student-t SV process compared with existing SV models in the literature, which involve computationally heavy algorithms for estimating parameters. Aiming at precise estimation of the parameters related to the latent process, we propose a CL expectation-maximization algorithm and discuss a bootstrap approach to obtain standard errors. The finite-sample performance of our composite likelihood methods is assessed through Monte Carlo simulations. The methodology is motivated by an empirical application in the financial market. We analyze the relationship, across multiple time periods, between the returns of various US sector Exchange-Traded Funds and individual companies' stock price returns based on our novel Student-t model. This relationship is further utilized in selecting optimal financial portfolios.
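As a small illustration of why a normal model driven by a latent gamma variable has Student-t marginals, here is a minimal sketch using i.i.d. Gamma(ν/2, ν/2) mixing. It deliberately ignores the temporal dependence that the paper's latent gamma process is constructed to provide, and the degrees of freedom, sample size and seed are arbitrary choices.

```python
# Minimal sketch of the normal/gamma scale mixture behind a Student-t marginal
# (assumption: i.i.d. gamma mixing, i.e. no temporal dependence as in the paper).
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
nu, n = 5.0, 100_000
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)   # latent Gamma(nu/2, rate nu/2), mean 1
eps = rng.normal(size=n) / np.sqrt(lam)               # conditionally normal given lam
# the marginal of eps is Student-t with nu degrees of freedom
print(stats.kstest(eps, stats.t(df=nu).cdf).pvalue)   # large p-value: consistent with t(nu)
```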
Nowadays, the confidentiality of data and information is of great importance for many companies and organizations. For this reason, they may prefer not to release exact data, but instead to grant researchers access to approximate data. For example, rather than providing the exact income of their clients, they may only provide researchers with grouped data, that is, the number of clients falling in each of a set of non-overlapping income intervals. The challenge is to estimate the mean and variance structure of the hidden ungrouped data based on the observed grouped data. To tackle this problem, this work considers the exact observed-data likelihood and applies the Expectation-Maximization (EM) and Monte Carlo EM (MCEM) algorithms for cases where the hidden data follow a univariate, bivariate, or multivariate normal distribution. The results are then compared with the case of ignoring the grouping and applying regular maximum likelihood. The well-known Galton data and simulated datasets are used to evaluate the properties of the proposed EM and MCEM algorithms.
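For the univariate normal case, the EM iteration can be written down directly: the E-step takes the conditional mean and variance of the normal distribution truncated to each reported interval, and the M-step applies the usual complete-data updates weighted by the cell counts. A minimal sketch follows; the interval edges, counts and starting values are toy assumptions, not the Galton data.

```python
# Minimal EM sketch for a univariate normal fitted to grouped (interval-count) data.
# Toy interval edges and counts; not the Galton data.
import numpy as np
from scipy.stats import truncnorm

edges = np.array([-np.inf, -1.0, 0.0, 1.0, 2.0, np.inf])    # interval boundaries
counts = np.array([60, 230, 370, 250, 90])                   # observed cell counts
n = counts.sum()

mu, sigma = 0.0, 1.0                                          # starting values
for _ in range(200):
    lo, hi = (edges[:-1] - mu) / sigma, (edges[1:] - mu) / sigma
    # E-step: conditional mean and variance within each truncated cell
    m, v = truncnorm.stats(lo, hi, loc=mu, scale=sigma, moments='mv')
    ex, ex2 = m, v + m ** 2
    # M-step: complete-data ML updates weighted by the cell counts
    mu = float(np.sum(counts * ex) / n)
    sigma = float(np.sqrt(np.sum(counts * ex2) / n - mu ** 2))

print(mu, sigma)
```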
Suvra Pal, 2020
In this paper, a long-term survival model under competing risks is considered. The unobserved number of competing risks is assumed to follow a negative binomial distribution that can capture both over- and under-dispersion. Treating the latent competing risks as missing data, a variation of the well-known expectation-maximization (EM) algorithm, called the stochastic EM (SEM) algorithm, is developed. It is shown that the SEM algorithm avoids the calculation of complicated expectations, which is a major advantage over the EM algorithm. The proposed procedure also allows the objective function to be split into two simpler functions, one corresponding to the parameters associated with the cure rate and the other corresponding to the parameters associated with the progression times. The advantage of this approach is that each simple function, with lower parameter dimension, can be maximized independently. An extensive Monte Carlo simulation study is carried out to compare the performance of the SEM and EM algorithms. Finally, a breast cancer survival dataset is analyzed and it is shown that the SEM algorithm performs better than the EM algorithm.
A novel approach to perform unsupervised sequential learning for functional data is proposed. Our goal is to extract reference shapes (referred to as templates) from noisy, deformed and censored realizations of curves and images. Our model generalizes the Bayesian dense deformable template model (Allassonnière et al., 2007), a hierarchical model in which the template is the function to be estimated and the deformation is a nuisance, assumed to be random with a known prior distribution. The templates are estimated using a Monte Carlo version of the online Expectation-Maximization algorithm, extending the work of Cappé and Moulines (2009). Our sequential inference framework is significantly more computationally efficient than equivalent batch learning algorithms, especially when the missing data are high-dimensional. Numerical illustrations of a curve registration problem and of template extraction from images are provided to support our findings.