
Subgroup-based Rank-1 Lattice Quasi-Monte Carlo

Published by Yueming Lyu
Publication date: 2020
Paper language: English





Quasi-Monte Carlo (QMC) is an essential tool for integral approximation, Bayesian inference, and sampling for simulation in science. In the QMC area, the rank-1 lattice is important due to its simple operation and nice properties for point set construction. However, constructing the generating vector of a rank-1 lattice is usually time-consuming because of an exhaustive computer search. To address this issue, we propose a simple closed-form rank-1 lattice construction method based on group theory. Our method reduces the number of distinct pairwise distance values to generate a more regular lattice. We theoretically prove lower and upper bounds on the minimum pairwise distance of any non-degenerate rank-1 lattice. Empirically, our method generates near-optimal rank-1 lattices compared with the Korobov exhaustive search in terms of the $l_1$-norm and $l_2$-norm minimum distance. Moreover, experimental results show that our method achieves superior approximation performance on benchmark integration test problems and kernel approximation problems.
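For context, the objects in the abstract can be made concrete with a short sketch. The code below is an illustration, not the paper's subgroup-based construction: it builds the standard rank-1 lattice point set $x_i = (i \cdot z \bmod n)/n$ and runs the Korobov exhaustive search baseline that the paper compares against, scoring each candidate generating vector by the minimum pairwise $l_2$ distance on the unit torus. All names and the toy parameters are illustrative.

```python
import numpy as np

def rank1_lattice(z, n):
    """Rank-1 lattice point set: x_i = (i * z mod n) / n, i = 0..n-1."""
    i = np.arange(n).reshape(-1, 1)
    return (i * np.asarray(z) % n) / n

def min_toroidal_distance(points):
    """Minimum pairwise l2 distance on the unit torus [0,1)^d."""
    diff = np.abs(points[:, None, :] - points[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)           # wrap-around (toroidal) distance
    dist = np.sqrt((diff ** 2).sum(-1))
    np.fill_diagonal(dist, np.inf)                # ignore self-distances
    return dist.min()

def korobov_search(n, d):
    """Exhaustive Korobov search: z = (1, a, a^2, ...) mod n, best a in 1..n-1."""
    best_a, best_sep = None, -np.inf
    for a in range(1, n):
        z = np.array([pow(a, k, n) for k in range(d)])
        sep = min_toroidal_distance(rank1_lattice(z, n))
        if sep > best_sep:
            best_a, best_sep = a, sep
    return best_a, best_sep

a, sep = korobov_search(n=64, d=3)
print(f"best a = {a}, min toroidal distance = {sep:.4f}")
```

The paper's contribution is precisely to replace the `korobov_search` loop with a closed-form choice of generating vector derived from a subgroup structure, so the search cost disappears while the minimum distance stays near optimal.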




Read also

We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design. Our analysis reveals higher-order consistent, deterministic choices of training points in the input data space for deep and shallow Neural Networks with holomorphic activation functions such as tanh. These novel training points are proved to facilitate higher-order decay (in terms of the number of training samples) of the underlying generalization error, with consistency error bounds that are free from the curse of dimensionality in the input data space, provided that DNN weights in hidden layers satisfy certain summability conditions. We present numerical experiments for DtO maps from elliptic and parabolic PDEs with uncertain inputs that confirm the theoretical analysis.
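As a minimal illustration of deterministic, low-discrepancy training points for a surrogate, the sketch below draws an unscrambled Sobol' sequence via scipy's `scipy.stats.qmc` module. The paper's specific higher-order QMC constructions are not reproduced here, and the toy `dto_map` is a hypothetical stand-in for a PDE-based Data-to-Observable map.

```python
import numpy as np
from scipy.stats import qmc

# Hypothetical Data-to-Observable map on [0,1]^d (stand-in for a PDE solver).
def dto_map(x):
    return np.sin(np.pi * x).prod(axis=-1)

d, m = 4, 10                               # input dimension, 2^m training points
sampler = qmc.Sobol(d=d, scramble=False)   # deterministic low-discrepancy points
x_train = sampler.random_base2(m=m)        # shape (2**m, d), in [0, 1)^d
y_train = dto_map(x_train)                 # training targets for the DNN surrogate
```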
Statistical signal processing applications usually require the estimation of some parameters of interest given a set of observed data. These estimates are typically obtained either by solving a multi-variate optimization problem, as in the maximum likelihood (ML) or maximum a posteriori (MAP) estimators, or by performing a multi-dimensional integration, as in the minimum mean squared error (MMSE) estimators. Unfortunately, analytical expressions for these estimators cannot be found in most real-world applications, and the Monte Carlo (MC) methodology is one feasible approach. MC methods proceed by drawing random samples, either from the desired distribution or from a simpler one, and using them to compute consistent estimators. The most important families of MC algorithms are Markov chain MC (MCMC) and importance sampling (IS). On the one hand, MCMC methods draw samples from a proposal density, then build an ergodic Markov chain whose stationary distribution is the desired distribution by accepting or rejecting those candidate samples as the new state of the chain. On the other hand, IS techniques draw samples from a simple proposal density and then assign them suitable weights that measure their quality in some appropriate way. In this paper, we perform a thorough review of MC methods for the estimation of static parameters in signal processing applications. A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm, as well as three sections describing many of the most relevant MCMC and IS algorithms and their combined use.
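To make the two families concrete, here is a minimal, self-contained sketch of random-walk Metropolis-Hastings and self-normalized importance sampling on a toy one-dimensional Gaussian target; the proposal scales and names are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
log_target = lambda x: -0.5 * x**2            # unnormalized log density (standard normal)

# MCMC family: random-walk Metropolis-Hastings accepts or rejects proposals.
def metropolis_hastings(n_samples, step=1.0, x0=0.0):
    chain, x = [], x0
    for _ in range(n_samples):
        x_prop = x + step * rng.standard_normal()
        if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
            x = x_prop                        # accept; otherwise keep the current state
        chain.append(x)
    return np.array(chain)

# IS family: weight draws from a wider Gaussian proposal q = N(0, sigma_q^2).
def importance_estimate(f, n_samples, sigma_q=2.0):
    xs = sigma_q * rng.standard_normal(n_samples)
    log_q = -0.5 * (xs / sigma_q)**2 - np.log(sigma_q)
    w = np.exp(log_target(xs) - log_q)        # unnormalized importance weights
    return np.sum(w * f(xs)) / np.sum(w)      # self-normalized IS estimator

print(metropolis_hastings(5000).mean())            # approx. 0 (target mean)
print(importance_estimate(lambda x: x**2, 5000))   # approx. 1 (target second moment)
```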
In this article we propose a novel MCMC method based on deterministic transformations T: X x D --> X, where X is the state-space and D is some set which may or may not be a subset of X. We refer to our new methodology as Transformation-based Markov chain Monte Carlo (TMCMC). One of the remarkable advantages of our proposal is that even if the underlying target distribution is very high-dimensional, a deterministic transformation of a one-dimensional random variable is sufficient to generate an appropriate Markov chain that is guaranteed to converge to the high-dimensional target distribution. Apart from clearly leading to massive computational savings, this idea of deterministically transforming a single random variable very generally leads to excellent acceptance rates, even though all the random variables associated with the high-dimensional target distribution are updated in a single block. Since it is well known that joint updating of many random variables using the Metropolis-Hastings (MH) algorithm generally leads to poor acceptance rates, TMCMC, in this regard, seems to provide a significant advance. We validate our proposal theoretically, establishing the convergence properties. Furthermore, we show that TMCMC can be very effectively adopted for simulating from doubly intractable distributions. TMCMC is compared with MH using the well-known Challenger data, demonstrating the effectiveness of the former in the case of highly correlated variables. Moreover, we apply our methodology to a challenging posterior simulation problem associated with the geostatistical model of Diggle et al. (1998), updating 160 unknown parameters jointly, using a deterministic transformation of a one-dimensional random variable. Remarkable computational savings as well as good convergence properties and acceptance rates are the results.
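The following is a minimal sketch of the additive move described above, in which a single one-dimensional draw is deterministically transformed to update every coordinate of a high-dimensional state. It assumes equal probabilities for the +/- signs, under which the proposal Jacobian is 1 and the acceptance ratio reduces to the target ratio; any detail beyond the abstract is an assumption, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def additive_tmcmc(log_target, x0, n_iter, scale=0.5):
    """Additive-TMCMC sketch: one scalar draw updates all d coordinates at once."""
    x, d = np.array(x0, float), len(x0)
    chain = []
    for _ in range(n_iter):
        eps = abs(rng.standard_normal())         # single one-dimensional draw
        signs = rng.choice([-1.0, 1.0], size=d)  # +/- with equal probability per coordinate
        x_prop = x + signs * scale * eps         # deterministic transformation of eps
        # Jacobian is 1 for this move, so acceptance is just the target ratio.
        if np.log(rng.uniform()) < log_target(x_prop) - log_target(x):
            x = x_prop
        chain.append(x.copy())
    return np.array(chain)

# Toy high-dimensional target: independent standard normals in d = 50.
log_target = lambda x: -0.5 * np.sum(x**2)
chain = additive_tmcmc(log_target, x0=np.zeros(50), n_iter=10000)
print(chain.mean(axis=0)[:3])                    # each coordinate's mean is approx. 0
```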
Zhiyan Ding, Qin Li (2021)
The classical Langevin Monte Carlo (LMC) method looks for samples from a target distribution by descending the samples along the gradient of the target distribution. The method enjoys a fast convergence rate. However, the numerical cost is sometimes high because each iteration requires the computation of a gradient. One approach to eliminate the gradient computation is to employ the concept of an ensemble: a large number of particles are evolved together so that neighboring particles provide gradient information to each other. In this article, we discuss two algorithms that integrate the ensemble feature into LMC and the associated properties. In particular, we find that if one directly surrogates the gradient using the ensemble approximation, the algorithm, termed Ensemble Langevin Monte Carlo, is unstable due to a high-variance term. If the gradients are replaced by the ensemble approximations only in a constrained manner, to protect from the unstable points, the algorithm, termed Constrained Ensemble Langevin Monte Carlo, resembles the classical LMC up to an ensemble error but removes most of the gradient computation.
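For reference, here is a minimal sketch of the classical (unadjusted) LMC update that the ensemble variants modify; the ensemble gradient surrogate itself is not reproduced, and the toy target is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def langevin_monte_carlo(grad_log_target, x0, n_iter, h=0.01):
    """Classical LMC: x <- x + h * grad log pi(x) + sqrt(2h) * xi, xi ~ N(0, I)."""
    x, chain = np.array(x0, float), []
    for _ in range(n_iter):
        x = x + h * grad_log_target(x) + np.sqrt(2 * h) * rng.standard_normal(x.shape)
        chain.append(x.copy())
    return np.array(chain)

# Toy target: standard normal, so grad log pi(x) = -x (one gradient per iteration;
# the ensemble variants aim to avoid exactly this per-step gradient call).
chain = langevin_monte_carlo(lambda x: -x, x0=np.zeros(2), n_iter=20000)
print(chain[1000:].mean(axis=0), chain[1000:].var(axis=0))  # approx. [0, 0] and [1, 1]
```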
This is a supplement to the article Markov Chain Monte Carlo Based on Deterministic Transformations available at http://arxiv.org/abs/1106.5850
