
Compressive Sampling Using EM Algorithm

Posted by Atanu Ghosh KUMAR
Publication date: 2014
Paper language: English





Conventional approaches to sampling signals follow the celebrated theorem of Nyquist and Shannon. Compressive sampling, introduced by Donoho, Romberg and Tao, is a new paradigm that departs from conventional data acquisition and provides a way of recovering signals from fewer samples than traditional methods require. Here we suggest an alternative way of reconstructing the original signals in compressive sampling using the EM algorithm. We first propose a naive approach which has certain computational difficulties, and subsequently modify it to a new approach which performs better than the conventional methods of compressive sampling. The comparison of the different approaches and the performance of the new approach have been studied using simulated data.
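
The abstract does not spell out the reconstruction steps, so as a rough illustration of how an EM iteration can recover a sparse signal from a small number of linear measurements, here is a minimal sketch of the classical sparse Bayesian learning EM update (a standard technique, not necessarily the authors' procedure). The signal length `n`, number of measurements `m`, sparsity `k`, measurement matrix `A` and noise level `sigma2` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes: n-dimensional sparse signal, m << n measurements.
n, m, k = 200, 60, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)      # random Gaussian measurement matrix
sigma2 = 1e-4                                  # assumed (known) noise variance
y = A @ x_true + np.sqrt(sigma2) * rng.normal(size=m)

# EM for the hierarchical model y = A x + e, x_i ~ N(0, gamma_i):
# the E-step computes the Gaussian posterior of x, the M-step re-estimates gamma.
gamma = np.ones(n)
for _ in range(200):
    # E-step: posterior covariance and mean of x given y and the current gamma.
    Sigma = np.linalg.inv(A.T @ A / sigma2 + np.diag(1.0 / gamma))
    mu = Sigma @ A.T @ y / sigma2
    # M-step: each gamma_i is the posterior second moment of x_i.
    gamma = np.maximum(mu**2 + np.diag(Sigma), 1e-12)

print("relative reconstruction error:",
      np.linalg.norm(mu - x_true) / np.linalg.norm(x_true))
```

The M-step drives most of the per-coefficient prior variances toward zero, so the posterior mean `mu` ends up sparse, which is what makes an EM formulation a natural fit for compressive sampling.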




Read also

This paper describes a data reduction technique in the case of a Markov chain of specified order. Instead of observing all the transitions in a Markov chain, we record only a few of them and treat the remaining part as missing. The decision about which transitions are to be filtered is taken before the observation process starts. Based on the filtered chain we try to estimate the parameters of the Markov model using the EM algorithm. In the first half of the paper we characterize a class of filtering mechanisms for which all the parameters remain identifiable. In the latter half we explain methods of estimation and testing for the transition probabilities of the Markov chain based on the filtered data. The methods are first developed assuming a simple Markov model with every transition probability positive, and then generalized to models with structural zeros in the transition probability matrix. A further extension is made to multiple Markov chains. The performance of the developed method of estimation is studied using simulated data along with a real-life data set.
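
The filtering mechanism and the estimation details are only summarised above. A minimal sketch of the general idea, assuming a two-state chain and a simple deterministic filtering rule (both illustrative, not taken from the paper), is to run a forward-backward pass over the partially observed chain to obtain expected transition counts and then row-normalise them:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-state chain; every third state is filtered out (missing).
P_true = np.array([[0.8, 0.2],
                   [0.3, 0.7]])
pi = np.array([0.5, 0.5])
T = 2000
states = np.empty(T, dtype=int)
states[0] = rng.choice(2, p=pi)
for t in range(1, T):
    states[t] = rng.choice(2, p=P_true[states[t - 1]])
observed = np.array([t % 3 != 0 for t in range(T)])   # assumed filtering rule

def e_step(P):
    """Forward-backward pass returning expected transition counts."""
    # "Emission" weights: a delta at the observed state, all ones if missing.
    b = np.ones((T, 2))
    b[observed, :] = 0.0
    b[observed, states[observed]] = 1.0

    alpha = np.zeros((T, 2))
    alpha[0] = pi * b[0]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):
        alpha[t] = b[t] * (alpha[t - 1] @ P)
        alpha[t] /= alpha[t].sum()

    beta = np.ones((T, 2))
    for t in range(T - 2, -1, -1):
        beta[t] = P @ (b[t + 1] * beta[t + 1])
        beta[t] /= beta[t].sum()

    counts = np.zeros((2, 2))
    for t in range(T - 1):
        xi = alpha[t][:, None] * P * (b[t + 1] * beta[t + 1])[None, :]
        counts += xi / xi.sum()
    return counts

# EM: expected transition counts (E-step), then row-normalise them (M-step).
P_hat = np.full((2, 2), 0.5)
for _ in range(50):
    counts = e_step(P_hat)
    P_hat = counts / counts.sum(axis=1, keepdims=True)

print("estimated transition matrix:\n", np.round(P_hat, 3))
```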
A new robust stochastic volatility (SV) model having Student-t marginals is proposed. Our process is defined through a linear normal regression model driven by a latent gamma process that controls temporal dependence. This gamma process is strategically chosen to enable us to find an explicit expression for the pairwise joint density function of the Student-t response process. With this at hand, we propose a composite likelihood (CL) based inference for our model, which can be straightforwardly implemented with a low computational cost. This is a remarkable feature of our Student-t SV process over existing SV models in the literature, which involve computationally heavy algorithms for estimating parameters. Aiming at a precise estimation of the parameters related to the latent process, we propose a CL Expectation-Maximization algorithm and discuss a bootstrap approach to obtain standard errors. The finite-sample performance of our composite likelihood methods is assessed through Monte Carlo simulations. The methodology is motivated by an empirical application in the financial market. We analyze the relationship, across multiple time periods, between various US sector Exchange-Traded Fund returns and individual companies' stock price returns based on our novel Student-t model. This relationship is further utilized in selecting optimal financial portfolios.
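
The construction of the latent gamma process is not given in this summary; the short simulation below only checks the basic marginal property the model relies on, namely that a normal scale mixture with gamma mixing yields Student-t marginals. The temporal-dependence mechanism and the composite-likelihood EM machinery are not reproduced, and the parameter values are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# If lambda_t ~ Gamma(nu/2, rate=nu/2) and y_t | lambda_t ~ N(mu, sigma^2/lambda_t),
# then marginally (y_t - mu)/sigma follows a Student-t with nu degrees of freedom.
nu, mu, sigma, n = 5.0, 0.0, 1.0, 200_000
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)    # rate nu/2 -> scale 2/nu
y = mu + sigma * rng.normal(size=n) / np.sqrt(lam)

# Compare the simulated marginal against the Student-t(nu) law via a KS test.
print(stats.kstest((y - mu) / sigma, stats.t(df=nu).cdf))
```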
Olivier Cappé (2017)
In this contribution, we propose a generic online (also sometimes called adaptive or recursive) version of the Expectation-Maximisation (EM) algorithm applicable to latent variable models of independent observations. Compared to the algorithm of Titterington (1984), this approach is more directly connected to the usual EM algorithm and does not rely on integration with respect to the complete data distribution. The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback-Leibler divergence between the marginal distribution of the observation and the model distribution at the optimal rate, i.e., that of the maximum likelihood estimator. In addition, the proposed approach is also suitable for conditional (or regression) models, as illustrated in the case of the mixture of linear regressions model.
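
As a concrete, hedged illustration of an online EM of this flavour (a stochastically averaged running estimate of the complete-data sufficient statistics, followed by the usual M-step map), consider a streaming two-component Gaussian mixture. The mixture parameters, burn-in length and step-size exponent below are assumptions for the demo, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stream of observations from a hypothetical two-component Gaussian mixture.
true_w, true_mu, true_sd = [0.3, 0.7], [-2.0, 2.0], [1.0, 0.5]
N = 100_000
z = rng.choice(2, size=N, p=true_w)
y = rng.normal(np.array(true_mu)[z], np.array(true_sd)[z])

# Online EM: running averages of the complete-data sufficient statistics
# (responsibility, resp*y, resp*y^2), updated with a decreasing step size.
K = 2
w = np.full(K, 1.0 / K)
mu = np.array([-1.0, 1.0])
var = np.ones(K)
s0, s1, s2 = w.copy(), w * mu, w * (var + mu**2)   # initial statistics

for n, yn in enumerate(y, start=1):
    # E-step for the single new point: posterior component probabilities.
    logp = -0.5 * np.log(2 * np.pi * var) - 0.5 * (yn - mu) ** 2 / var + np.log(w)
    r = np.exp(logp - logp.max())
    r /= r.sum()

    # Stochastic-approximation update of the sufficient statistics.
    rho = n ** -0.6
    s0 = (1 - rho) * s0 + rho * r
    s1 = (1 - rho) * s1 + rho * r * yn
    s2 = (1 - rho) * s2 + rho * r * yn**2

    # M-step map from statistics to parameters (after a short burn-in).
    if n > 50:
        w = s0 / s0.sum()
        mu = s1 / s0
        var = np.maximum(s2 / s0 - mu**2, 1e-6)

print("weights:", np.round(w, 3), "means:", np.round(mu, 3),
      "sds:", np.round(np.sqrt(var), 3))
```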
Marginal maximum likelihood (MML) estimation is the preferred approach to fitting item response theory models in psychometrics due to the MML estimator's consistency, normality, and efficiency as the sample size tends to infinity. However, state-of-the-art MML estimation procedures such as the Metropolis-Hastings Robbins-Monro (MH-RM) algorithm, as well as approximate MML estimation procedures such as variational inference (VI), are computationally time-consuming when the sample size and the number of latent factors are very large. In this work, we investigate a deep learning-based VI algorithm for exploratory item factor analysis (IFA) that is computationally fast even in large data sets with many latent factors. The proposed approach applies a deep artificial neural network model called an importance-weighted autoencoder (IWAE) for exploratory IFA. The IWAE approximates the MML estimator using an importance sampling technique wherein increasing the number of importance-weighted (IW) samples drawn during fitting improves the approximation, typically at the cost of decreased computational efficiency. We provide a real data application that recovers results aligning with psychological theory across random starts. Via simulation studies, we show that the IWAE yields more accurate estimates as either the sample size or the number of IW samples increases (although factor correlation and intercept estimates exhibit some bias) and obtains similar results to MH-RM in less time. Our simulations also suggest that the proposed approach performs similarly to and is potentially faster than constrained joint maximum likelihood estimation, a fast procedure that is consistent when the sample size and the number of items simultaneously tend to infinity.
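
Leaving the neural-network encoder aside, the core importance-weighted approximation can be sketched on a toy one-factor model where the exact marginal likelihood is available in closed form, which makes it easy to see the estimate tighten as the number of IW samples grows. The loading, noise level and proposal below are illustrative assumptions, not the paper's IWAE.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy one-factor model:  z ~ N(0, 1),  x | z ~ N(w z, sigma^2)
# so that marginally  x ~ N(0, w^2 + sigma^2)  and the exact log-likelihood
# is available to compare the importance-weighted (IW) estimate against.
w_load, sigma = 1.5, 0.8
x = rng.normal(0.0, np.sqrt(w_load**2 + sigma**2), size=500)
exact = stats.norm(0, np.sqrt(w_load**2 + sigma**2)).logpdf(x).sum()

def iw_loglik(x, K, proposal_sd=2.0):
    """IW estimate of the log marginal likelihood with K samples per point."""
    z = rng.normal(0.0, proposal_sd, size=(K, x.size))           # z_k ~ q(z)
    log_w = (stats.norm(0, 1).logpdf(z)                          # log p(z)
             + stats.norm(w_load * z, sigma).logpdf(x)           # log p(x|z)
             - stats.norm(0, proposal_sd).logpdf(z))             # - log q(z)
    # log (1/K) sum_k w_k, computed stably per observation, then summed.
    return (np.logaddexp.reduce(log_w, axis=0) - np.log(K)).sum()

for K in (1, 5, 50, 500):
    print(f"K={K:4d}  IW log-lik {iw_loglik(x, K):10.2f}  exact {exact:10.2f}")
```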
Nowadays, the confidentiality of data and information is of great importance for many companies and organizations. For this reason, they may prefer not to release exact data, but instead to grant researchers access to approximate data. For example, rather than providing the exact income of their clients, they may only provide researchers with grouped data, that is, the number of clients falling in each of a set of non-overlapping income intervals. The challenge is to estimate the mean and variance structure of the hidden ungrouped data based on the observed grouped data. To tackle this problem, this work considers the exact observed data likelihood and applies the Expectation-Maximization (EM) and Monte-Carlo EM (MCEM) algorithms for cases where the hidden data follow a univariate, bivariate, or multivariate normal distribution. The results are then compared with the case of ignoring the grouping and applying regular maximum likelihood. The well-known Galton data and simulated datasets are used to evaluate the properties of the proposed EM and MCEM algorithms.
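
A minimal sketch of the univariate-normal case (the bivariate, multivariate and MCEM variants mentioned above are not covered): the E-step replaces each grouped observation by the first two moments of the corresponding truncated normal, and the M-step is the usual normal update applied to those expected moments. The interval boundaries and the simulated hidden data below are assumptions for the demo.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Hypothetical grouped data: only the counts per income interval are released.
edges = np.array([-np.inf, 0, 1, 2, 3, np.inf])      # interval boundaries
hidden = rng.normal(1.3, 0.9, size=5_000)             # ungrouped data we never see
counts = np.histogram(hidden, bins=edges)[0]
N = counts.sum()

# EM for a univariate normal observed only through interval counts.
mu, sigma = 0.0, 1.0
for _ in range(200):
    a = (edges[:-1] - mu) / sigma
    b = (edges[1:] - mu) / sigma
    Z = stats.norm.cdf(b) - stats.norm.cdf(a)
    pa, pb = stats.norm.pdf(a), stats.norm.pdf(b)
    m1 = mu + sigma * (pa - pb) / Z                                # E[X | interval]
    var = sigma**2 * (1 + (np.where(np.isfinite(a), a, 0) * pa
                           - np.where(np.isfinite(b), b, 0) * pb) / Z
                      - ((pa - pb) / Z) ** 2)                      # Var[X | interval]
    m2 = var + m1**2                                               # E[X^2 | interval]
    mu = np.sum(counts * m1) / N                                   # M-step
    sigma = np.sqrt(np.sum(counts * m2) / N - mu**2)

print("EM estimates:", round(mu, 3), round(sigma, 3),
      "   ungrouped MLE:", round(hidden.mean(), 3), round(hidden.std(), 3))
```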
