
An investigation into the Multiple Optimised Parameter Estimation and Data compression algorithm

Published by: Philip Graff
Publication date: 2010
Research field: Physics
Language: English





We investigate the use of the Multiple Optimised Parameter Estimation and Data compression algorithm (MOPED) for data compression and faster evaluation of likelihood functions. Since MOPED only guarantees maintaining the Fisher matrix of the likelihood at a chosen point, multimodal and some degenerate distributions will present a problem. We present examples of scenarios in which MOPED does faithfully represent the true likelihood, but also cases in which it does not. Through these examples, we aim to define a set of criteria under which MOPED will accurately represent the likelihood and hence may be used to obtain a significant reduction in the time needed to calculate it. These criteria may involve the evaluation of the full likelihood function for comparison.
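For readers unfamiliar with the construction, the sketch below follows the standard MOPED compression of Heavens, Jimenez & Lahav (2000), under the usual assumptions of Gaussian-distributed data with a parameter-independent noise covariance; the function name and array layout are illustrative choices, not taken from the paper.

```python
import numpy as np

def moped_vectors(dmu, C):
    """Standard MOPED compression vectors (a sketch, not the paper's code).

    dmu : (p, n) array of derivatives of the mean data vector with
          respect to each of the p parameters, evaluated at the
          chosen fiducial point.
    C   : (n, n) noise covariance, assumed parameter-independent.

    Returns a (p, n) array B; the statistics y = B @ x compress the
    n data points to p numbers while preserving the Fisher matrix
    at the fiducial point.
    """
    Cinv = np.linalg.inv(C)
    p, n = dmu.shape
    B = np.zeros((p, n))
    for a in range(p):
        v = Cinv @ dmu[a]
        # Gram-Schmidt step: remove overlap with earlier vectors so
        # that the compressed statistics are uncorrelated.
        for b in range(a):
            v -= (dmu[a] @ B[b]) * B[b]
        norm2 = dmu[a] @ Cinv @ dmu[a] - sum((dmu[a] @ B[b]) ** 2
                                             for b in range(a))
        B[a] = v / np.sqrt(norm2)
    return B
```

The speed-up comes from evaluating the likelihood of the p compressed numbers instead of the full n-point data vector; because the construction fixes the Fisher matrix only at the fiducial point, it carries exactly the multimodal and degenerate failure modes the abstract discusses.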




Read also

Bayesian inference involves two main computational challenges. First, in estimating the parameters of some model for the data, the posterior distribution may well be highly multi-modal: a regime in which the convergence to stationarity of traditional Markov Chain Monte Carlo (MCMC) techniques becomes incredibly slow. Second, in selecting between a set of competing models the necessary estimation of the Bayesian evidence for each is, by definition, a (possibly high-dimensional) integration over the entire parameter space; again this can be a daunting computational task, although new Monte Carlo (MC) integration algorithms offer solutions of ever increasing efficiency. Nested sampling (NS) is one such contemporary MC strategy targeted at calculation of the Bayesian evidence, but which also enables posterior inference as a by-product, thereby allowing simultaneous parameter estimation and model selection. The widely-used MultiNest algorithm presents a particularly efficient implementation of the NS technique for multi-modal posteriors. In this paper we discuss importance nested sampling (INS), an alternative summation of the MultiNest draws, which can calculate the Bayesian evidence at up to an order of magnitude higher accuracy than 'vanilla' NS with no change in the way MultiNest explores the parameter space. This is accomplished by treating as a (pseudo-)importance sample the totality of points collected by MultiNest, including those previously discarded under the constrained likelihood sampling of the NS algorithm. We apply this technique to several challenging test problems and compare the accuracy of Bayesian evidences obtained with INS against those from vanilla NS.
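As a point of reference for the comparison in this abstract, here is a minimal sketch of the 'vanilla' NS evidence summation, using the standard expected prior-volume shrinkage X_i = exp(-i/N) for N live points. It omits the final contribution of the remaining live points; INS replaces this sum with an importance-weighted one over all collected points (those weights are not reproduced here).

```python
import numpy as np

def vanilla_ns_logz(loglikes, nlive):
    """Vanilla nested-sampling evidence, Z = sum_i L_i (X_{i-1} - X_i).

    loglikes : log-likelihoods of the discarded ('dead') points, in
               the order they were discarded (increasing likelihood).
    nlive    : number of live points, so X_i ~ exp(-i / nlive).
    """
    i = np.arange(1, len(loglikes) + 1)
    X = np.exp(-i / nlive)
    X_prev = np.concatenate(([1.0], X[:-1]))
    logw = np.log(X_prev - X)              # prior-mass shell weights
    terms = np.asarray(loglikes) + logw
    # log-sum-exp for numerical stability
    tmax = terms.max()
    return tmax + np.log(np.exp(terms - tmax).sum())
```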
The Baikal-GVD is a large-scale neutrino telescope under construction in Lake Baikal. The majority of hits detected by the telescope are noise hits, caused primarily by the luminescence of the Baikal water. Separating noise hits from the hits produced by Cherenkov light emitted along the muon track is a challenging part of muon event reconstruction. We present an algorithm that uses a known directional hit-causality criterion to construct a graph of hits and then applies a clique-based technique to select the subset of signal hits. The algorithm was tested on realistic detector Monte Carlo simulations for a wide range of muon energies and selects a pure sample of PMT hits from Cherenkov photons while retaining more than 90% of the original signal.
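A hedged illustration of the clique idea follows. The paper's criterion is directional, but this sketch uses a simpler pairwise causality bound (two hits may share a common track origin only if their time difference does not exceed the light travel time between their PMTs plus a margin); the hit format, the constant, and the margin are assumptions for illustration, and clique finding is delegated to the networkx library.

```python
import itertools
import networkx as nx

C_WATER = 0.22  # approximate speed of light in water, metres per ns

def select_signal_hits(hits, margin_ns=5.0):
    """Pick the largest mutually causally-consistent set of hits.

    hits : list of (x, y, z, t) tuples, positions in metres and hit
           times in nanoseconds (a hypothetical format).
    """
    G = nx.Graph()
    G.add_nodes_from(range(len(hits)))
    for i, j in itertools.combinations(range(len(hits)), 2):
        xi, yi, zi, ti = hits[i]
        xj, yj, zj, tj = hits[j]
        dist = ((xi - xj) ** 2 + (yi - yj) ** 2 + (zi - zj) ** 2) ** 0.5
        # simplified causality criterion: connect two hits if both
        # could come from light produced along a single track
        if abs(ti - tj) <= dist / C_WATER + margin_ns:
            G.add_edge(i, j)
    # the maximum clique is the largest subset in which every pair of
    # hits is consistent; take it as the signal-hit candidate set
    return max(nx.find_cliques(G), key=len)
```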
Quantum multiparameter estimation differs greatly from its classical counterpart because of Heisenberg's uncertainty principle: when the optimal measurements for different parameters are incompatible, they cannot be performed jointly. We find a correspondence between the inaccuracy of a measurement for estimating an unknown parameter and the measurement error in the context of measurement uncertainty relations. Taking this correspondence as a bridge, we incorporate Heisenberg's uncertainty principle into quantum multiparameter estimation by deriving a tradeoff relation between the measurement inaccuracies for estimating different parameters. For pure quantum states this tradeoff relation is tight, so it reveals the true quantum limits on individual estimation errors in such cases. We apply our approach to derive the tradeoff between the attainable errors of estimating the real and imaginary parts of a complex signal encoded in coherent states, and we obtain the joint measurements attaining the tradeoff relation. We also show that our approach can readily be used to derive the tradeoff between the errors of jointly estimating the phase shift and phase diffusion without explicitly parameterizing the quantum measurements.
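For context only, here is the standard multiparameter quantum Cramér-Rao background this abstract builds on; the paper's new contribution, the inaccuracy tradeoff relation itself, is not reproduced here.

```latex
% Quantum Cramer-Rao bound for nu repetitions, with F_Q the quantum
% Fisher information matrix and L_j the symmetric logarithmic
% derivatives defined by d rho / d theta_j = (L_j rho + rho L_j) / 2:
\[
  \mathrm{Cov}(\hat{\boldsymbol{\theta}}) \;\succeq\; \frac{1}{\nu}\,F_Q^{-1},
  \qquad
  (F_Q)_{jk} = \tfrac{1}{2}\,\mathrm{Tr}\!\big[\rho\,\{L_j, L_k\}\big].
\]
% The bound is jointly attainable (asymptotically) only under the weak
% commutativity condition Tr(rho [L_j, L_k]) = 0; when it fails, the
% optimal measurements for different parameters are incompatible,
% which is the regime the tradeoff relation above quantifies.
```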
Ewan Cameron, 2010
I present a critical review of techniques for estimating confidence intervals on binomial population proportions inferred from success counts in small-to-intermediate samples. Population proportions arise frequently as quantities of interest in astronomical research; for instance, in studies aiming to constrain the bar fraction, AGN fraction, SMBH fraction, merger fraction, or red sequence fraction from counts of galaxies exhibiting distinct morphological features or stellar populations. However, two of the most widely used techniques for estimating binomial confidence intervals, the normal approximation and the Clopper & Pearson approach, are liable to misrepresent the degree of statistical uncertainty present under sampling conditions routinely encountered in astronomical surveys, leading to an ineffective use of the experimental data (and, worse, an inefficient use of the resources expended in obtaining that data). Hence, I provide here an overview of the fundamentals of binomial statistics with two principal aims: (i) to reveal the ease with which (Bayesian) binomial confidence intervals with more satisfactory behaviour may be estimated from the quantiles of the beta distribution using modern mathematical software packages (e.g. R, matlab, mathematica, IDL, python); and (ii) to demonstrate convincingly the major flaws of both the normal approximation and the Clopper & Pearson approach for error estimation.
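The beta-quantile recipe this abstract recommends is short enough to show in full. This sketch uses scipy; the prior hyperparameters are left as arguments since the prior choice (uniform, Jeffreys, ...) is the analyst's, and the function name is illustrative.

```python
from scipy.stats import beta

def binomial_credible_interval(k, n, conf=0.683, a_prior=1.0, b_prior=1.0):
    """Equal-tailed Bayesian interval for a binomial proportion.

    With a Beta(a_prior, b_prior) prior (uniform: a = b = 1;
    Jeffreys: a = b = 0.5) and k successes in n trials, the
    posterior for the population proportion is
    Beta(k + a_prior, n - k + b_prior).
    """
    tail = (1.0 - conf) / 2.0
    posterior = beta(k + a_prior, n - k + b_prior)
    return posterior.ppf(tail), posterior.ppf(1.0 - tail)

# e.g. 3 barred galaxies in a sample of 10:
# binomial_credible_interval(3, 10) gives the 68.3% interval
```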
Lijun Peng & Xiao-Ming Lu, 2020
The basic idea behind Rayleigh's criterion on resolving two incoherent optical point sources is that the overlap between the spatial modes from different sources reduces the estimation precision for the locations of the sources, an effect dubbed Rayleigh's curse. We generalize the concept of Rayleigh's curse to the abstract problems of quantum parameter estimation with incoherent sources. To manifest the effect of Rayleigh's curse on quantum parameter estimation, we define the curse matrix in terms of quantum Fisher information and introduce global and local immunity to the curse accordingly. We further derive the expression for the curse matrix and give the necessary and sufficient condition for immunity to Rayleigh's curse. For estimating one-dimensional location parameters with a common initial state, we demonstrate that global immunity to the curse on quantum Fisher information is impossible for more than two sources.
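As assumed background (not a result of this paper), the canonical single-parameter instance of Rayleigh's curse from Tsang, Nair & Lu (2016): for two equally bright incoherent point sources imaged with a Gaussian point-spread function of width sigma, the per-photon Fisher information of direct imaging for the separation s vanishes as s tends to zero, whereas the quantum Fisher information does not.

```latex
% Per-photon quantum Fisher information for the source separation s
% (Gaussian PSF of width sigma) is constant in s, so there is no
% quantum-level curse for this single parameter:
\[
  F_Q(s) = \frac{1}{4\sigma^2} \quad \text{for all } s > 0,
  \qquad \text{while } F_{\mathrm{direct}}(s) \xrightarrow{\,s \to 0\,} 0 .
\]
```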