This paper introduces new efficient algorithms for two problems: sampling conditional on vertex degrees in unweighted graphs, and sampling conditional on vertex strengths in weighted graphs. The algorithms can sample conditional on the presence or absence of an arbitrary number of edges. The resulting conditional distributions provide the basis for exact tests. Existing samplers based on MCMC or sequential importance sampling are generally not scalable; their efficiency degrades in sparse graphs. MCMC methods usually require explicit computation of a Markov basis to navigate the complex state space; this is computationally intensive even for small graphs. We use state-dependent kernel selection to develop new MCMC samplers. These do not require a Markov basis, and are efficient both in sparse and dense graphs. The key idea is to intelligently select a Markov kernel on the basis of the current state of the chain. We apply our methods to testing hypotheses on a real network and contingency table. The algorithms appear orders of magnitude more efficient than existing methods in the test cases considered.
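The paper's state-dependent kernels are not reproduced here, but the classical baseline such samplers improve on, a degree-preserving double-edge-swap MCMC over simple graphs, can be sketched as follows. This is a minimal illustration, not the authors' algorithm; the edge representation and rejection rules are assumptions:

```python
import random

def double_edge_swap(edges, n_steps, seed=0):
    """Sketch of a swap chain over simple graphs with a fixed degree sequence.

    `edges` is a set of frozenset pairs {u, v}. Each step picks two edges
    {a,b} and {c,d} and proposes rewiring them to {a,c} and {b,d}, which
    preserves every vertex degree; proposals that would create a self-loop
    or a multi-edge are rejected. (Only one of the two possible rewirings
    is proposed here, to keep the sketch short.)
    """
    rng = random.Random(seed)
    edges = set(edges)          # work on a copy; caller's set is untouched
    edge_list = list(edges)
    for _ in range(n_steps):
        i, j = rng.sample(range(len(edge_list)), 2)
        (a, b), (c, d) = tuple(edge_list[i]), tuple(edge_list[j])
        if len({a, b, c, d}) < 4:
            continue            # edges share a vertex: swap would break simplicity
        e1, e2 = frozenset((a, c)), frozenset((b, d))
        if e1 in edges or e2 in edges:
            continue            # would create a multi-edge: reject
        edges.discard(frozenset((a, b)))
        edges.discard(frozenset((c, d)))
        edges.add(e1)
        edges.add(e2)
        edge_list[i], edge_list[j] = e1, e2
    return edges
```

Every accepted move swaps one neighbor of `a` for another and likewise for `b`, so the degree sequence is invariant along the whole chain.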
We establish verifiable conditions under which Metropolis-Hastings (MH) algorithms with a position-dependent proposal covariance matrix will or will not have a geometric rate of convergence. Some diffusion-based MH algorithms, such as the Metropolis adjusted Langevin algorithm (MALA), …
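For reference, MALA (the diffusion-based MH algorithm named above) can be sketched in one dimension. The target `log_pi` and its gradient are user-supplied assumptions; the standard-normal example in the test is purely illustrative:

```python
import math
import random

def mala(log_pi, grad_log_pi, x0, step, n, seed=0):
    """One-dimensional sketch of the Metropolis adjusted Langevin algorithm.

    Proposal: y = x + (step/2) * grad_log_pi(x) + sqrt(step) * N(0, 1),
    accepted with the usual MH ratio. Because the proposal mean depends on
    the current position, the forward and reverse proposal densities do not
    cancel and must both enter the acceptance ratio.
    """
    rng = random.Random(seed)
    x, out = x0, []

    def log_q(y, x):
        # log-density (up to a constant) of proposing y from x
        mu = x + 0.5 * step * grad_log_pi(x)
        return -((y - mu) ** 2) / (2 * step)

    for _ in range(n):
        y = x + 0.5 * step * grad_log_pi(x) + math.sqrt(step) * rng.gauss(0, 1)
        log_a = log_pi(y) - log_pi(x) + log_q(x, y) - log_q(y, x)
        if math.log(rng.random()) < log_a:
            x = y
        out.append(x)
    return out
```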
Monte Carlo (MC) sampling methods are widely applied in Bayesian inference, system simulation, and optimization problems. Markov chain Monte Carlo (MCMC) algorithms are a well-known class of MC methods that generate a Markov chain with the desired distribution as its stationary distribution. …
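A minimal instance of such a chain is the random-walk Metropolis sampler, sketched here for a generic one-dimensional log-density. `log_target` is an assumed user-supplied function, not anything specified in the abstract above:

```python
import math
import random

def random_walk_mh(log_target, x0, scale, n, seed=0):
    """Random-walk Metropolis: an MCMC chain whose stationary
    distribution is proportional to exp(log_target)."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n):
        y = x + rng.gauss(0, scale)   # symmetric Gaussian proposal
        # accept with probability min(1, pi(y)/pi(x));
        # the symmetric proposal density cancels in the ratio
        if math.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        out.append(x)
    return out
```

Averages over the chain's output then approximate expectations under the target, which is the sense in which the chain "has the desired distribution".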
Thompson sampling is a heuristic algorithm for the multi-armed bandit problem with a long tradition in machine learning. The algorithm has a Bayesian spirit in that it selects arms based on posterior samples of the arms' reward probabilities. …
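The Beta-Bernoulli case makes the "posterior samples" idea concrete. The sketch below assumes Beta(1, 1) priors and Bernoulli rewards, which are standard choices rather than anything specified in the abstract:

```python
import random

def thompson_bernoulli(true_probs, n_rounds, seed=0):
    """Thompson sampling for Bernoulli bandits with Beta(1, 1) priors.

    Each round: draw one posterior sample per arm, pull the arm with the
    largest sample, then update that arm's Beta posterior with the
    observed 0/1 reward. Returns the pull count per arm.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    alpha, beta = [1] * k, [1] * k   # Beta posterior parameters per arm
    pulls = [0] * k
    for _ in range(n_rounds):
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        alpha[arm] += reward          # conjugate Beta-Bernoulli update
        beta[arm] += 1 - reward
        pulls[arm] += 1
    return pulls
```

Because the posterior of a clearly inferior arm concentrates below that of the best arm, its samples win the argmax less and less often, so exploration fades automatically.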
Determining the number G of components in a finite mixture distribution is an important and difficult inference issue, since statistical inference about the resulting model is highly sensitive to the value of G. …
Under measurement constraints, responses are expensive to measure and initially unavailable for most records in the dataset, but the covariates are available for the entire dataset. Our goal is to sample a relatively small portion of the dataset …