
Bayesian ACRONYM Tuning

Posted by Nathan Wiebe
Publication date: 2019
Research field: Physics
Paper language: English





We provide an algorithm that uses Bayesian randomized benchmarking in concert with a local optimizer, such as SPSA, to find a set of controls that optimizes the average gate fidelity. We call this method Bayesian ACRONYM tuning as a reference to the analogous ACRONYM tuning algorithm. Bayesian ACRONYM distinguishes itself in its ability to retain prior information from experiments that use nearby control parameters, whereas traditional ACRONYM tuning does not use such information and can require many more measurements as a result. We prove that such information reuse is possible under the relatively weak assumption that the true model parameters are Lipschitz-continuous functions of the control parameters. We also perform numerical experiments demonstrating that over-rotation errors in single-qubit gates can be automatically tuned from 88% to 99.95% average gate fidelity using less than 1 kB of data and fewer than 20 steps of the optimizer.
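As a rough sketch of the tuning loop described above, the following Python snippet pairs an SPSA optimizer with a toy noisy infidelity estimator; the one-parameter over-rotation model, gain constants, and estimator are illustrative stand-ins for the paper's Bayesian randomized-benchmarking procedure, not its actual implementation.

```python
import numpy as np

def noisy_infidelity(theta, rng, noise=0.005):
    # Toy stand-in for a Bayesian randomized-benchmarking estimate:
    # infidelity of a single-qubit gate with over-rotation angle theta[0].
    return np.sin(theta[0] / 2) ** 2 + noise * rng.standard_normal()

def spsa_minimize(f, theta0, steps=20, a=0.2, c=0.1, seed=0):
    # SPSA: estimate the gradient from two noisy evaluations per step,
    # independent of the number of control parameters.
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, steps + 1):
        ak, ck = a / k ** 0.602, c / k ** 0.101            # standard gain schedules
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random +/-1 perturbation
        y_plus = f(theta + ck * delta, rng)
        y_minus = f(theta - ck * delta, rng)
        theta -= ak * (y_plus - y_minus) / (2 * ck * delta)
    return theta

print(spsa_minimize(noisy_infidelity, theta0=[0.7]))  # starts ~0.7 rad over-rotated
```

The property SPSA contributes is that each step needs only two noisy objective evaluations regardless of the number of control parameters, which keeps the measurement budget small.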




Read also

Particle accelerators require constant tuning during operation to meet beam quality, total charge and particle energy requirements for use in a wide variety of physics, chemistry and biology experiments. Maximizing the performance of an accelerator facility often necessitates multi-objective optimization, where operators must balance trade-offs between multiple objectives simultaneously, often using limited, temporally expensive beam observations. Usually, accelerator optimization problems are solved offline, prior to actual operation, with advanced beamline simulations and parallelized optimization methods (NSGA-II, swarm optimization). Unfortunately, it is not feasible to use these methods for online multi-objective optimization, since beam measurements can only be done in a serial fashion, and these optimization methods require a large number of measurements to converge to a useful solution. Here, we introduce a multi-objective Bayesian optimization scheme, which finds the full Pareto front of an accelerator optimization problem efficiently in a serialized manner and is thus a critical step towards practical online multi-objective optimization in accelerators. This method uses a set of Gaussian process surrogate models, along with a multi-objective acquisition function, which reduces the number of observations needed to converge by at least an order of magnitude over current methods. We demonstrate how this method can be modified to specifically solve optimization challenges posed by the tuning of accelerators. This includes the addition of optimization constraints, objective preferences and costs related to changing accelerator parameters.
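A minimal sketch of such a serialized loop, assuming one Gaussian process surrogate per objective and a random-scalarization lower-confidence-bound acquisition (a simple stand-in for the acquisition function the paper uses); the toy objectives, bounds, and constants are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Toy bi-objective problem standing in for two competing beam-quality metrics.
def objectives(x):
    return np.array([np.sum((x - 0.2) ** 2), np.sum((x + 0.3) ** 2)])

rng = np.random.default_rng(1)
dim = 2
X = rng.uniform(-1, 1, size=(5, dim))                   # initial measurements
Y = np.array([objectives(x) for x in X])

for _ in range(20):                                     # one measurement per iteration
    gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                    normalize_y=True).fit(X, Y[:, j])
           for j in range(Y.shape[1])]                  # one surrogate per objective
    cand = rng.uniform(-1, 1, size=(256, dim))          # random candidate pool
    w = rng.dirichlet(np.ones(Y.shape[1]))              # random scalarization weights
    score = np.zeros(len(cand))
    for wj, gp in zip(w, gps):
        mu, sd = gp.predict(cand, return_std=True)
        score += wj * (mu - 2.0 * sd)                   # lower confidence bound (minimize)
    x_next = cand[np.argmin(score)]
    X = np.vstack([X, x_next])
    Y = np.vstack([Y, objectives(x_next)])

# Non-dominated filter recovers the sampled Pareto front.
pareto = [i for i, yi in enumerate(Y)
          if not any(np.all(yj <= yi) and np.any(yj < yi) for yj in Y)]
print(Y[pareto])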
Luke A. Barnes, 2017
Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are unnaturally small (in various technical senses), which has driven much of the search for physics beyond the standard model. Of particular interest is the fine-tuning of the universe for life, which suggests that our universe's ability to create physical life forms is improbable and in need of explanation, perhaps by a multiverse. This claim has been challenged on the grounds that the relevant probability measure cannot be justified because it cannot be normalized, and so small probabilities cannot be inferred. We show how fine-tuning can be formulated within the context of Bayesian theory testing (or model selection) in the physical sciences. The normalizability problem is seen to be a general problem for testing any theory with free parameters, and not a unique problem for fine-tuning. Physical theories in fact avoid such problems in two ways: dimensional parameters are bounded by the Planck scale, avoiding troublesome infinities, and we are not compelled to assume that dimensionless parameters are distributed uniformly, which avoids non-normalizability.
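To make the normalizability point concrete, a schematic Bayes-factor comparison (our notation, not necessarily the authors'): for a theory $T_0$ that fixes a parameter at $\alpha_0$ and a theory $T_f$ that leaves $\alpha$ free with prior $p(\alpha \mid T_f)$ supported on $[0, \Lambda]$,

$$B = \frac{p(D \mid T_0)}{p(D \mid T_f)} = \frac{\mathcal{L}(\alpha_0)}{\int_0^{\Lambda} \mathcal{L}(\alpha)\, p(\alpha \mid T_f)\, \mathrm{d}\alpha},$$

where $\mathcal{L}$ is the likelihood of the data $D$ and $\Lambda$ is a Planck-scale cutoff. With bounded support the evidence integral is finite and $B$ is well defined; with an improper uniform prior on $[0, \infty)$ the denominator cannot be normalized and no Bayes factor exists.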
Many state estimation algorithms must be tuned: given the state space process and observation models, the process and observation noise parameters must be chosen. Conventional tuning approaches rely on heuristic hand-tuning or gradient-based optimization techniques to minimize a performance cost function. However, the relationship between tuned noise values and estimator performance is highly nonlinear and stochastic. Therefore, tuning solutions can easily get trapped in local minima, which can lead to poor choices of noise parameters and suboptimal estimator performance. This paper describes how Bayesian Optimization (BO) can overcome these issues. BO poses optimization as a Bayesian search problem for a stochastic "black box" cost function, where the goal is to search the solution space to maximize the probability of improving the current best solution. As such, BO offers a principled approach to optimization-based estimator tuning in the presence of local minima and performance stochasticity. While extended Kalman filters (EKFs) are the main focus of this work, BO can be similarly used to tune other related state space filters. The method presented here uses performance metrics derived from normalized innovation squared (NIS) filter residuals obtained via sensor data, which renders knowledge of ground-truth states unnecessary. The robustness, accuracy, and reliability of BO-based tuning is illustrated on practical nonlinear state estimation problems, including closed-loop aero-robotic control.
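A minimal sketch of NIS-based tuning on a scalar random-walk filter, using a Gaussian process surrogate with expected improvement as the BO step; the filter model, cost, and search bounds are illustrative, not the paper's setup. For a 1-D measurement, a consistent filter has NIS values with expectation 1, so the cost below penalizes deviation from that.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

# Simulated 1-D random-walk data; ground truth is used only to generate measurements.
T, q_true, r_true = 200, 0.1, 0.5
x_true = np.cumsum(np.sqrt(q_true) * rng.standard_normal(T))
z = x_true + np.sqrt(r_true) * rng.standard_normal(T)

def nis_cost(log_q):
    # Run a scalar Kalman filter with candidate process noise q and return
    # |mean NIS - 1|: a consistent filter with 1-D measurements has E[NIS] = 1.
    q, r = np.exp(log_q), r_true      # tune q only, for brevity
    m, P, nis = 0.0, 1.0, []
    for zk in z:
        P += q                        # predict (random-walk model)
        S = P + r                     # innovation variance
        nis.append((zk - m) ** 2 / S)
        K = P / S                     # Kalman gain and update
        m += K * (zk - m)
        P *= 1 - K
    return abs(np.mean(nis) - 1.0)

# BO loop: GP surrogate over log q, expected-improvement acquisition.
L = rng.uniform(-5, 2, size=(4, 1))
y = np.array([nis_cost(v[0]) for v in L])
for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6,
                                  normalize_y=True).fit(L, y)
    cand = np.linspace(-5, 2, 200).reshape(-1, 1)
    mu, sd = gp.predict(cand, return_std=True)
    imp = y.min() - mu
    ei = imp * norm.cdf(imp / (sd + 1e-9)) + sd * norm.pdf(imp / (sd + 1e-9))
    nxt = cand[np.argmax(ei)]
    L = np.vstack([L, nxt])
    y = np.append(y, nis_cost(nxt[0]))

print("tuned q:", np.exp(L[np.argmin(y), 0]), "true q:", q_true)
```

Because the cost uses only innovations computed from sensor data, no ground-truth states are needed, which is the practical appeal of the NIS criterion.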
Acronyms are short forms of phrases that make it easier to convey lengthy expressions in documents, and they serve as one of the mainstays of writing. Due to their importance, identifying acronyms and their corresponding phrases (i.e., acronym identification (AI)) and finding the correct meaning of each acronym (i.e., acronym disambiguation (AD)) are crucial for text understanding. Despite recent progress on these tasks, the existing datasets have limitations that hinder further improvement. More specifically, the limited size of manually annotated AI datasets and the noise in automatically created acronym identification datasets obstruct the design of advanced high-performing acronym identification models. Moreover, the existing datasets are mostly limited to the medical domain and ignore other domains. To address these two limitations, we first create a manually annotated large AI dataset for the scientific domain. This dataset contains 17,506 sentences and is substantially larger than previous scientific AI datasets. Next, we prepare an AD dataset for the scientific domain with 62,441 samples, significantly larger than the previous scientific AD dataset. Our experiments show that existing state-of-the-art models fall far behind human-level performance on both datasets proposed in this work. In addition, we propose a new deep learning model that utilizes the syntactic structure of the sentence to expand an ambiguous acronym. The proposed model outperforms the state-of-the-art models on the new AD dataset, providing a strong baseline for future research on this dataset.
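For orientation, here is a deliberately naive heuristic baseline for the AI task (matching a parenthesized short form against the initials of the preceding words); it is a toy illustration of the task definition, far from the learned models the paper evaluates.

```python
import re

def naive_acronym_pairs(sentence):
    # Match a parenthesized all-caps short form against the initials of the
    # words immediately preceding it; a toy baseline for the AI task.
    pairs = []
    for m in re.finditer(r"\(([A-Z]{2,})\)", sentence):
        acro = m.group(1)
        words = sentence[:m.start()].split()
        cand = words[-len(acro):]
        if len(cand) == len(acro) and all(w[0].upper() == a for w, a in zip(cand, acro)):
            pairs.append((" ".join(cand), acro))
    return pairs

print(naive_acronym_pairs("We study acronym identification (AI) at scale."))
# -> [('acronym identification', 'AI')]
```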
This paper proposes various new analysis techniques for Bayes networks in which conditional probability tables (CPTs) may contain symbolic variables. The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Markov chains. Our techniques are applicable to arbitrarily many, possibly dependent parameters that may occur in various CPTs. This lifts the severe restrictions on parameters in existing works for parametric Bayes networks (pBNs), e.g., restricting the number of parametrized CPTs to one or two, or disallowing parameter dependencies between several CPTs. We describe how our techniques can be used for various pBN synthesis problems studied in the literature such as computing sensitivity functions (and values), simple and difference parameter tuning, ratio parameter tuning, and minimal change tuning. Experiments on several benchmarks show that our prototypical tool built on top of the probabilistic model checker Storm can handle several hundred parameters.
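As a small illustration of what a sensitivity function is (not the Storm-based pipeline the paper builds on), the following sympy snippet treats one CPT entry of a two-node network as a symbolic parameter, derives a posterior as a rational function of that parameter, and then solves a simple parameter-tuning query.

```python
import sympy as sp

# Tiny parametric Bayes network A -> B with one symbolic CPT entry.
p = sp.symbols("p", positive=True)
P_A = {1: sp.Rational(3, 10), 0: sp.Rational(7, 10)}       # P(A)
P_B1_given_A = {1: p, 0: sp.Rational(1, 5)}                # P(B=1 | A), symbolic for A=1

# Sensitivity function: the posterior P(A=1 | B=1) as a rational function of p.
joint = {a: P_A[a] * P_B1_given_A[a] for a in (0, 1)}
posterior = sp.simplify(joint[1] / (joint[0] + joint[1]))
print(posterior)                                           # 15*p/(15*p + 7)

# Simple parameter tuning: choose p so that P(A=1 | B=1) hits a target of 1/2.
print(sp.solve(sp.Eq(posterior, sp.Rational(1, 2)), p))    # [7/15]
```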
