
Markov-chain Monte-Carlo Sampling for Optimal Fidelity Determination in Dynamic Decision-Making

Posted by: Sara Masoud
Publication date: 2019
Research field: Mathematical Statistics
Paper language: English





Decision making for dynamic systems is challenging due to the scale and dynamic nature of such systems, and it comprises decisions at the strategic, tactical, and operational levels. One of the most important aspects of decision making is incorporating real-time information that reflects the immediate status of the system. This type of decision making, which may apply to any dynamic system, needs to comply with the system's current capabilities and calls for a dynamic data-driven planning framework. The performance of a dynamic data-driven planning framework relies on its decision-making process, which in turn depends on the quality of the available data. This means that the planning framework should be able to set the level of decision making based on the current status of the system, which is learned through continuous readings of sensory data. In this work, a Markov chain Monte Carlo sampling method is proposed to determine the optimal fidelity of decision making in a dynamic data-driven framework. To evaluate the performance of the proposed method, an experiment is conducted in which the impact of worker performance on production capacity and on the fidelity level of decision making is studied.
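The core computational step is sampling a posterior over a fidelity parameter from streaming sensory data. Below is a minimal sketch of such a step using random-walk Metropolis-Hastings; the Gaussian likelihood, the flat prior, the 0.1 noise scale, and the synthetic "worker throughput" readings are illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch, assuming a Gaussian observation model: random-walk
# Metropolis-Hastings over a hypothetical fidelity parameter f in (0, 1).
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.8, scale=0.1, size=50)  # synthetic sensory readings

def log_posterior(f, data):
    """Log posterior: Gaussian likelihood around f, flat prior on (0, 1)."""
    if not 0.0 < f < 1.0:
        return -np.inf
    return -0.5 * np.sum((data - f) ** 2) / 0.1 ** 2

def metropolis_hastings(data, n_iter=5000, step=0.05):
    f, lp = 0.5, log_posterior(0.5, data)          # initial fidelity guess
    samples = np.empty(n_iter)
    for t in range(n_iter):
        f_new = f + step * rng.standard_normal()   # random-walk proposal
        lp_new = log_posterior(f_new, data)
        if np.log(rng.uniform()) < lp_new - lp:    # MH accept/reject
            f, lp = f_new, lp_new
        samples[t] = f
    return samples

samples = metropolis_hastings(data)
print("posterior mean fidelity:", samples[1000:].mean())  # discard burn-in
```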


Read also

Guanyu Hu, HaiYing Wang (2020)
Markov Chain Monte Carlo (MCMC) requires evaluating the full-data likelihood at different parameter values iteratively and is often computationally infeasible for large data sets. In this paper, we propose to approximate the log-likelihood with subsamples taken according to nonuniform subsampling probabilities, and derive the most likely optimal (MLO) subsampling probabilities for better approximation. Compared with the existing subsampled MCMC algorithm with equal subsampling probabilities, our MLO subsampled MCMC has higher estimation efficiency at the same subsampling ratio. We also derive a formula, using the asymptotic distribution of the subsampled log-likelihood, to determine the required subsample size in each MCMC iteration for a given level of precision. This formula is used to develop an adaptive version of the MLO subsampled MCMC algorithm. Numerical experiments demonstrate that the proposed method outperforms uniform subsampled MCMC.
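To illustrate the idea of nonuniform subsampling, here is a minimal sketch of an inverse-probability-weighted (Hansen-Hurwitz-style) unbiased estimate of the full-data log-likelihood. The Gaussian model and the pilot-based weights are illustrative stand-ins for the MLO probabilities the paper derives formally.

```python
# A minimal sketch, assuming a Gaussian model: estimate sum_i l_i(theta)
# from a nonuniform subsample, reweighting each drawn term by 1/(r*p_i).
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=100_000)        # large synthetic data set

def loglik_terms(theta, x):
    """Per-observation Gaussian log-likelihood terms l_i(theta)."""
    return -0.5 * (x - theta) ** 2 - 0.5 * np.log(2 * np.pi)

def subsampled_loglik(theta, x, probs, r=500):
    """Unbiased estimate of the full-data log-likelihood: draw r indices
    with replacement according to probs, then inverse-probability weight."""
    idx = rng.choice(x.size, size=r, p=probs)
    return np.mean(loglik_terms(theta, x[idx]) / probs[idx])

# Nonuniform probabilities from a cheap pilot estimate (illustrative only).
w = np.abs(loglik_terms(x.mean(), x)) + 1e-12
probs = w / w.sum()

theta = 1.9
print("full log-likelihood:", loglik_terms(theta, x).sum())
print("subsampled estimate:", subsampled_loglik(theta, x, probs))
```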
This work discusses the implementation of Markov Chain Monte Carlo (MCMC) sampling from an arbitrary Gaussian mixture model (GMM) within SRAM. We show a novel SRAM architecture embedded with random number generators (RNGs), digital-to-analog converters (DACs), and analog-to-digital converters (ADCs) so that SRAM arrays can be used for high-performance Metropolis-Hastings (MH) algorithm-based MCMC sampling. Most of the expensive computations are performed within the SRAM and can be parallelized for high-speed sampling. Our iterative compute flow minimizes data movement during sampling. We characterize the power-performance trade-off of our design by simulating it on 45 nm CMOS technology. For a two-dimensional, two-component GMM, the implementation consumes ~91 microwatts per sampling iteration and produces 500 samples in 2000 clock cycles on average at a 1 GHz clock frequency. Our study highlights interesting insights into how low-level hardware non-idealities can affect high-level sampling characteristics, and recommends ways to operate SRAM optimally within area/power constraints for high-performance sampling.
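For reference, the benchmark task above (MH sampling from a two-dimensional, two-component GMM) can be sketched in software as follows. The unit-variance components, their means, and equal weights are illustrative assumptions; the RNG/DAC/ADC hardware pipeline itself is not modeled here.

```python
# A software-level sketch of MH sampling from a 2D two-component GMM.
import numpy as np

rng = np.random.default_rng(2)

means = np.array([[-2.0, 0.0], [2.0, 0.0]])   # assumed component means
weights = np.array([0.5, 0.5])                # assumed mixing weights

def log_gmm(z):
    """Log density of an isotropic unit-variance 2D Gaussian mixture."""
    sq = np.sum((z - means) ** 2, axis=1)
    return np.log(np.dot(weights, np.exp(-0.5 * sq))) - np.log(2 * np.pi)

def mh_sample(n_iter=5000, step=1.0):
    z, lp = np.zeros(2), log_gmm(np.zeros(2))
    out = np.empty((n_iter, 2))
    for t in range(n_iter):
        z_new = z + step * rng.standard_normal(2)  # random-walk proposal
        lp_new = log_gmm(z_new)
        if np.log(rng.uniform()) < lp_new - lp:    # MH accept/reject
            z, lp = z_new, lp_new
        out[t] = z
    return out

chain = mh_sample()
print("fraction of draws near the right mode:", np.mean(chain[:, 0] > 0))
```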
Vivekananda Roy (2019)
Markov chain Monte Carlo (MCMC) is one of the most useful approaches to scientific computing because of its flexible construction, ease of use and generality. Indeed, MCMC is indispensable for performing Bayesian analysis. Two critical questions that MCMC practitioners need to address are where to start and when to stop the simulation. Although a great amount of research has gone into establishing convergence criteria and stopping rules with sound theoretical foundation, in practice, MCMC users often decide convergence by applying empirical diagnostic tools. This review article discusses the most widely used MCMC convergence diagnostic tools. Some recently proposed stopping rules with firm theoretical footing are also presented. The convergence diagnostics and stopping rules are illustrated using three detailed examples.
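As one example of the empirical diagnostics such reviews cover, here is a minimal sketch of the Gelman-Rubin potential scale reduction factor (R-hat) computed over parallel chains; the chains below are synthetic stand-ins for real sampler output.

```python
# A minimal sketch of the Gelman-Rubin R-hat convergence diagnostic.
import numpy as np

rng = np.random.default_rng(3)

def gelman_rubin(chains):
    """R-hat for an (m chains) x (n draws) array of one scalar parameter."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)

# Four chains targeting the same distribution: R-hat should be close to 1.
chains = rng.normal(0.0, 1.0, size=(4, 1000))
print("R-hat:", gelman_rubin(chains))
```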
Boson sampling is a promising candidate for quantum supremacy. It requires sampling from a complicated distribution and is believed to be intractable on classical computers. Among the various classical sampling methods, the Markov chain Monte Carlo method is an important approach to the simulation and validation of boson sampling. This method, however, suffers from a severe sample-loss issue caused by the autocorrelation of the sample sequence. To address this, we propose the sample-caching Markov chain Monte Carlo method, which eliminates the correlations among the samples while preventing sample loss, allowing more efficient simulation of boson sampling. Moreover, our method can be used as a general sampling framework that can benefit a wide range of sampling tasks, and it is particularly suitable for applications where a large number of samples are taken.
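The sample loss being addressed is the one created by conventional thinning, where correlated draws are simply discarded. The sketch below illustrates that baseline on a synthetic AR(1) series standing in for an MCMC trace; it shows the loss that sample caching aims to avoid, not the caching method itself.

```python
# A sketch of the thinning baseline: estimate the integrated
# autocorrelation time of a chain and keep only every tau-th draw.
import numpy as np

rng = np.random.default_rng(4)

n, rho = 50_000, 0.9
chain = np.empty(n)
chain[0] = 0.0
for t in range(1, n):                        # AR(1): strongly autocorrelated
    chain[t] = rho * chain[t - 1] + rng.standard_normal()

def autocorr_time(x, max_lag=200):
    """Integrated autocorrelation time from the empirical ACF."""
    x = x - x.mean()
    m = x.size
    acf = np.array([np.dot(x[: m - k], x[k:]) for k in range(max_lag)])
    acf /= acf[0]
    return 1.0 + 2.0 * acf[1:].sum()

tau = autocorr_time(chain)
kept = chain[:: max(1, round(tau))]          # thinning discards most draws
print(f"tau ~ {tau:.1f}; kept {kept.size} of {n} draws")
```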
A number of problems in a variety of fields are characterised by target distributions with a multimodal structure, in which the presence of several isolated local maxima dramatically reduces the efficiency of Markov Chain Monte Carlo sampling algorithms. Several solutions, such as simulated tempering or the use of parallel chains, have been proposed to facilitate the exploration of the relevant parameter space. They provide effective strategies when the dimension of the parameter space is small and/or the computational costs are not a limiting factor. These approaches fail, however, in high-dimensional spaces where the multimodal structure is induced by degeneracies between regions of the parameter space. In this paper we present a fully Markovian way to sample this kind of distribution efficiently, based on the general Delayed Rejection scheme with an arbitrary number of steps, and provide details for an efficient numerical implementation of the algorithm.
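As a concrete illustration of the general scheme, here is a minimal two-stage delayed rejection Metropolis step: a bold first proposal that can hop between modes, followed by a timid second proposal whose acceptance ratio preserves detailed balance. The bimodal 1D target, the Gaussian random-walk proposals, and the step sizes are illustrative assumptions; the paper's scheme allows arbitrarily many stages and general proposals.

```python
# A minimal two-stage delayed rejection Metropolis step on a bimodal target.
import numpy as np

rng = np.random.default_rng(5)

def log_pi(x):
    """Unnormalized bimodal target: equal mixture of N(-3, 1) and N(3, 1)."""
    return np.logaddexp(-0.5 * (x + 3) ** 2, -0.5 * (x - 3) ** 2)

def log_q(a, b, s):
    """Log density (up to a constant) of proposing b from a with scale s."""
    return -0.5 * ((b - a) / s) ** 2 - np.log(s)

def alpha1(x, y):
    """First-stage acceptance probability for a symmetric proposal."""
    return float(np.exp(min(0.0, log_pi(y) - log_pi(x))))

def dr_step(x, s1=6.0, s2=0.5):
    # Stage 1: bold proposal that can hop between the modes at -3 and +3.
    y1 = x + s1 * rng.standard_normal()
    a1 = alpha1(x, y1)
    if rng.uniform() < a1:
        return y1
    # Stage 2: timid proposal; the ratio below preserves detailed balance
    # (the standard delayed rejection acceptance probability).
    y2 = x + s2 * rng.standard_normal()
    t = 1.0 - alpha1(y2, y1)
    if t <= 0.0:
        return x
    num = log_pi(y2) + log_q(y2, y1, s1) + np.log(t)
    den = log_pi(x) + log_q(x, y1, s1) + np.log(1.0 - a1)
    return y2 if np.log(rng.uniform()) < num - den else x

x, chain = 0.0, np.empty(20_000)
for i in range(chain.size):
    x = dr_step(x)
    chain[i] = x
print("mass in right mode:", np.mean(chain > 0))  # ~0.5 if both modes visited
```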