
Comment: Gibbs Sampling, Exponential Families, and Orthogonal Polynomials

Added by Galin L. Jones
Publication date: 2008
Language: English





Comment on "Gibbs Sampling, Exponential Families, and Orthogonal Polynomials" [arXiv:0808.3852]


Comment on "On Random Scan Gibbs Samplers" [arXiv:0808.3852]
Ira M. Gessel, Jiang Zeng (2021)
Starting from the moment sequences of classical orthogonal polynomials, we derive the orthogonality purely algebraically. We also consider the moments of ($q=1$) classical orthogonal polynomials and study those cases in which the exponential generating function has a nice form. In the opposite direction, we show that the generalized Dumont-Foata polynomials with six parameters are the moments of rescaled continuous dual Hahn polynomials.
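As a concrete illustration of a moment sequence whose exponential generating function has a nice form (an example added here for context, not drawn from the paper): the Hermite polynomials are orthogonal with respect to the standard Gaussian weight, whose moments are double factorials and whose moment EGF closes up neatly:

\[
\mu_n = \int_{-\infty}^{\infty} x^n \, \frac{e^{-x^2/2}}{\sqrt{2\pi}} \, dx =
\begin{cases} (n-1)!! & n \text{ even},\\ 0 & n \text{ odd}, \end{cases}
\qquad
\sum_{n \ge 0} \mu_n \frac{x^n}{n!} = e^{x^2/2}.
\]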
The notion of multivariate total positivity has proved to be useful in finance and psychology but may be too restrictive in other applications. In this paper we propose a concept of local association, in which highly connected components in a graphical model are positively associated, and study its properties. Our main motivation comes from gene expression data, where graphical models have become a popular exploratory tool. The models are instances of what we term mixed convex exponential families, and we show that a mixed dual likelihood estimator has simple exact properties for such families as well as asymptotic properties similar to those of the maximum likelihood estimator. We further relax the positivity assumption by penalizing negative partial correlations in what we term the positive graphical lasso. Finally, we develop a GOLAZO algorithm based on block-coordinate descent that applies to a number of optimization problems arising in the context of graphical models, including the estimation problems described above. We derive results on the existence of the optimum for such problems.
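To make the positivity penalty concrete, one plausible form of the positive graphical lasso objective is sketched below (based on a reading of the abstract; the exact penalty in the paper may differ). For a precision matrix $K$ the partial correlation between variables $i$ and $j$ is $-K_{ij}/\sqrt{K_{ii}K_{jj}}$, so penalizing negative partial correlations amounts to penalizing positive off-diagonal entries of $K$:

\[
\hat{K} = \arg\max_{K \succ 0} \; \log\det K - \operatorname{tr}(SK) - \lambda \sum_{i \neq j} \max(K_{ij}, 0),
\]

where $S$ is the sample covariance matrix and $\lambda \ge 0$ controls how strongly negative partial correlations are discouraged.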
The application of the lasso is espoused in high-dimensional settings where only a small number of the regression coefficients are believed to be nonzero. Moreover, statistical properties of high-dimensional lasso estimators are often proved under the assumption that the correlation between the predictors is bounded. In this vein, coordinatewise methods, the most common means of computing the lasso solution, work well in the presence of low to moderate multicollinearity. The computational speed of coordinatewise algorithms degrades, however, as sparsity decreases and multicollinearity increases. Motivated by these limitations, we propose the novel Deterministic Bayesian Lasso algorithm for computing the lasso solution. This algorithm is developed by considering a limiting version of the Bayesian lasso. The performance of the Deterministic Bayesian Lasso improves as sparsity decreases and multicollinearity increases, and it can offer substantial increases in computational speed. A rigorous theoretical analysis demonstrates that (1) the Deterministic Bayesian Lasso algorithm converges to the lasso solution, and (2) it leads to a representation of the lasso estimator which shows how it achieves both $\ell_1$ and $\ell_2$ types of shrinkage simultaneously. Connections to other algorithms are also provided. The benefits of the Deterministic Bayesian Lasso algorithm are then illustrated on simulated and real data.
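For contrast with the proposed method, here is a minimal sketch of the standard coordinatewise (coordinate-descent) lasso that the abstract identifies as the most common means of computing the lasso solution; it is not the Deterministic Bayesian Lasso itself, whose update rule the abstract does not give. All names are illustrative.

import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding: the source of the l1 shrinkage in the lasso.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_sweeps=100):
    # Minimizes (1/(2n)) * ||y - X b||^2 + lam * ||b||_1
    # by cycling through the coordinates of b.
    n, p = X.shape
    b = np.zeros(p)
    resid = y - X @ b                     # running residual
    col_scale = (X ** 2).sum(axis=0) / n  # (1/n) * x_j' x_j
    for _ in range(n_sweeps):
        for j in range(p):
            resid += X[:, j] * b[j]       # drop coordinate j from the fit
            rho = X[:, j] @ resid / n     # correlation of x_j with residual
            b[j] = soft_threshold(rho, lam) / col_scale[j]
            resid -= X[:, j] * b[j]       # restore with the updated b[j]
    return b

Each sweep updates one coordinate at a time, which is precisely why its speed degrades under strong multicollinearity: correlated columns force many small, mutually cancelling updates.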
Cognitive diagnosis models (CDMs) are useful statistical tools that provide rich information relevant for intervention and learning. As a popular approach to estimating and making inferences with CDMs, the Markov chain Monte Carlo (MCMC) algorithm is widely used in practice. However, when the number of attributes, $K$, is large, the existing MCMC algorithm may become time-consuming, because $O(2^K)$ calculations are usually needed during MCMC sampling to obtain the conditional distribution of each attribute profile. To overcome this computational issue, and motivated by Culpepper and Hudson (2018), we propose a computationally efficient sequential Gibbs sampling method that needs only $O(K)$ calculations to sample each attribute profile. We use simulation and real data examples to show the good finite-sample performance of the proposed sequential Gibbs sampling and its advantage over existing methods.
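A minimal sketch of the core idea as read from the abstract: instead of enumerating all $2^K$ attribute profiles to form the conditional distribution, sample the $K$ binary attributes one at a time from their full conditionals, so a sweep costs $O(K)$ evaluations. The function name log_post and its interface are hypothetical placeholders, not the authors' API.

import numpy as np

def sequential_gibbs_sweep(alpha, log_post, rng):
    # One sweep over a binary attribute profile alpha in {0,1}^K.
    # Each attribute is resampled from its full conditional given the rest,
    # so the sweep needs 2K log-posterior evaluations rather than 2^K.
    K = alpha.shape[0]
    for k in range(K):
        lp = np.empty(2)
        for v in (0, 1):              # score both states of attribute k
            alpha[k] = v
            lp[v] = log_post(alpha)   # hypothetical user-supplied log posterior
        p1 = 1.0 / (1.0 + np.exp(lp[0] - lp[1]))  # P(alpha_k = 1 | rest)
        alpha[k] = int(rng.random() < p1)
    return alpha

# Example usage with a toy log posterior that favors profiles with few ones:
rng = np.random.default_rng(0)
alpha = np.zeros(5, dtype=int)
alpha = sequential_gibbs_sweep(alpha, lambda a: -a.sum(), rng)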
