
Stochastic optimization for numerical evaluation of imprecise probabilities

Published by Nicholas Syring
Publication date: 2021
Research field: Mathematical Statistics
Paper language: English





In applications of imprecise probability, analysts must compute lower (or upper) expectations, defined as the infimum of an expectation over a set of parameter values. Monte Carlo methods consistently approximate expectations at fixed parameter values, but can be costly to implement in a grid search to locate minima over large subsets of the parameter space. We investigate the use of stochastic iterative root-finding methods for efficiently computing lower expectations. In two examples we illustrate the use of various stochastic approximation methods, and demonstrate their superior performance in comparison to grid search.
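To make the comparison concrete, the sketch below runs Kiefer-Wolfowitz finite-difference stochastic approximation, one stochastic iterative scheme of the kind the abstract refers to, on a toy lower-expectation problem. The toy model (X ~ N(theta, 1), f(x) = x^2, Theta = [0.5, 3]), the step-size and difference-width sequences, and the Monte Carlo sample sizes are illustrative assumptions, not the paper's examples.

```python
# Hypothetical sketch: Kiefer-Wolfowitz stochastic approximation for a lower
# expectation inf_{theta in Theta} E_theta[f(X)]. The toy model and all tuning
# sequences below are illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)

def mc_expectation(theta, n=100):
    """Noisy Monte Carlo estimate of E_theta[f(X)] with f(x) = x^2, X ~ N(theta, 1)."""
    x = rng.normal(loc=theta, scale=1.0, size=n)
    return np.mean(x ** 2)  # true value: theta^2 + 1

def kiefer_wolfowitz(theta0, lo=0.5, hi=3.0, iters=2000):
    """Finite-difference stochastic approximation, projected onto Theta = [lo, hi]."""
    theta = theta0
    for n in range(1, iters + 1):
        a_n = 1.0 / n           # step-size sequence
        c_n = 1.0 / n ** 0.25   # finite-difference width
        grad_hat = (mc_expectation(theta + c_n) - mc_expectation(theta - c_n)) / (2 * c_n)
        theta = np.clip(theta - a_n * grad_hat, lo, hi)
    return theta

theta_star = kiefer_wolfowitz(theta0=2.0)
print(theta_star, mc_expectation(theta_star, n=100_000))  # ~0.5 and ~1.25 for this toy
```

Each iteration needs only two small Monte Carlo estimates at nearby parameter values, whereas a grid search must produce an accurate estimate at every point of a grid covering Theta.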




Read also

We generalize standard credal set models for imprecise probabilities to include higher order credal sets -- confidences about confidences. In doing so, we specify how an agent's higher order confidences (credal sets) update upon observing an event. Our model begins to address standard issues with imprecise probability models, like Dilation and Belief Inertia. We conjecture that when higher order credal sets contain all possible probability functions, then in the limiting case the highest order confidences converge to form a uniform distribution over the first order credal set, where we define uniformity in terms of the statistical distance metric (total variation distance). Finite simulation supports the conjecture. We further suggest that this convergence presents the total-variation-uniform distribution as a natural, privileged prior for statistical hypothesis testing.
Memristor based neural networks have great potential in on-chip neuromorphic computing systems due to their fast computation and low energy consumption. However, the imprecise properties of existing memristor devices generally result in catastrophic failures for in-situ network training, which significantly impedes their engineering applications. In this work, we design a novel learning scheme that integrates stochastic sparse updating with momentum adaption (SSM) to efficiently train imprecise memristor networks with high classification accuracy. The SSM scheme consists of: (1) a stochastic and discrete learning method to make weight updates sparse; (2) a momentum based gradient algorithm to eliminate training noises and distill robust updates; (3) a network re-initialization method to mitigate the device-to-device variation; (4) an update compensation strategy to further stabilize the weight programming process. With the SSM scheme, experiments show that the classification accuracy on a multilayer perceptron (MLP) and a convolutional neural network (CNN) improves from 26.12% to 90.07% and from 65.98% to 92.38%, respectively. Meanwhile, the total numbers of weight-updating pulses decrease by 90% and 40% in the MLP and CNN, respectively, and the convergence rates are both 3x faster. The SSM scheme provides a high-accuracy, low-power, and fast-convergence solution for the in-situ training of imprecise memristor networks, which is crucial to future neuromorphic intelligence systems.
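As a rough illustration of components (1) and (2) of a scheme like SSM, here is a hedged NumPy sketch of one momentum-smoothed, stochastically sparsified weight update. It is not the authors' implementation: the pulse size delta, the pulsing-probability rule, the learning rate, and the momentum coefficient are hypothetical choices.

```python
# Hypothetical sketch of a stochastic sparse update with momentum; the pulse
# size, probability rule, and hyperparameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def ssm_step(weights, grad, velocity, lr=0.01, beta=0.9, delta=0.05):
    """One momentum-smoothed, stochastically sparsified weight update.

    delta is the magnitude of a single programming pulse applied to a weight.
    """
    # (2) momentum smoothing to suppress gradient noise
    velocity = beta * velocity + (1.0 - beta) * grad
    # (1) stochastic, discrete sparsification: pulse a weight with probability
    # proportional to its desired update measured in pulse units, so the
    # expected update equals lr * velocity whenever it is smaller than one pulse
    prob = np.clip(np.abs(lr * velocity) / delta, 0.0, 1.0)
    pulse = (rng.random(weights.shape) < prob).astype(float)
    weights = weights - pulse * delta * np.sign(velocity)
    return weights, velocity

# toy usage: with these settings, most entries receive no pulse in a given step
w = rng.normal(size=(4, 4))
v = np.zeros_like(w)
g = rng.normal(size=(4, 4))
w, v = ssm_step(w, g, v)
```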
The generalized labeled multi-Bernoulli (GLMB) is a family of tractable models that alleviates the limitations of the Poisson family in dynamic Bayesian inference of point processes. In this paper, we derive closed form expressions for the void probability functional and the Cauchy-Schwarz divergence for GLMBs. The proposed analytic void probability functional is a necessary and sufficient statistic that uniquely characterizes a GLMB, while the proposed analytic Cauchy-Schwarz divergence provides a tractable measure of similarity between GLMBs. We demonstrate the use of both results on a partially observed Markov decision process for GLMBs, with a Cauchy-Schwarz divergence based reward and a void probability constraint.
It is well known that Markov chain Monte Carlo (MCMC) methods scale poorly with dataset size. A popular class of methods for solving this issue is stochastic gradient MCMC. These methods use a noisy estimate of the gradient of the log posterior, which reduces the per-iteration computational cost of the algorithm. Despite this, there are a number of results suggesting that stochastic gradient Langevin dynamics (SGLD), probably the most popular of these methods, still has computational cost proportional to the dataset size. We suggest an alternative log posterior gradient estimate for stochastic gradient MCMC, which uses control variates to reduce the variance. We analyse SGLD using this gradient estimate, and show that, under log-concavity assumptions on the target distribution, the computational cost required for a given level of accuracy is independent of the dataset size. Next we show that a different control variate technique, known as zero variance control variates, can be applied to SGMCMC algorithms for free. This post-processing step improves the inference of the algorithm by reducing the variance of the MCMC output. Zero variance control variates rely on the gradient of the log posterior; we explore how the variance reduction is affected by replacing this with the noisy gradient estimate calculated by SGMCMC.
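For intuition on the control-variate gradient estimate, here is a minimal SGLD sketch for a Gaussian toy posterior. The data model, the flat prior, the centering point theta_hat, and all tuning constants are assumptions chosen for illustration; in this toy the log-posterior gradient is linear in theta, so the control-variate estimate happens to be exact, whereas in general it only has low variance when theta is near theta_hat.

```python
# Hedged sketch of SGLD with a control-variate gradient estimate for a toy
# Gaussian model; the data, prior, and tuning constants are assumptions.
import numpy as np

rng = np.random.default_rng(2)

# toy data: x_i ~ N(theta, 1) with a flat prior, so grad log posterior = sum_i (x_i - theta)
N = 10_000
data = rng.normal(loc=1.5, scale=1.0, size=N)
theta_hat = data.mean()                      # centering point (posterior mode here)
full_grad_at_hat = np.sum(data - theta_hat)  # full-data gradient at theta_hat, computed once

def cv_grad(theta, batch):
    """Control-variate estimate of the full-data log-posterior gradient."""
    n = batch.size
    # per-datum gradient differences are small when theta is close to theta_hat
    correction = (N / n) * np.sum((batch - theta) - (batch - theta_hat))
    return full_grad_at_hat + correction

def sgld(theta0, step=1e-5, iters=5000, batch_size=50):
    theta, samples = theta0, []
    for _ in range(iters):
        batch = data[rng.integers(0, N, size=batch_size)]
        theta = theta + 0.5 * step * cv_grad(theta, batch) + rng.normal(scale=np.sqrt(step))
        samples.append(theta)
    return np.array(samples)

samples = sgld(theta0=theta_hat)
print(samples[1000:].mean(), samples[1000:].std())  # roughly the posterior N(mean(data), 1/N)
```

Because full_grad_at_hat is computed once up front and each iteration touches only a minibatch, the per-iteration cost does not grow with N as long as theta stays in a neighbourhood of theta_hat.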
In this article, we present a new R package fc that provides a streamlined, standard evaluation-based approach to function composition. Using fc, a sequence of functions can be composed together such that returned objects from composed functions are used as intermediate values directly passed to the next function. Unlike with magrittr and purrr, no intermediate values need to be stored. When benchmarked, functions composed using fc achieve favorable runtimes in comparison to other implementations.