This paper proposes various new analysis techniques for Bayes networks in which conditional probability tables (CPTs) may contain symbolic variables. The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Markov chains. Our techniques are applicable to arbitrarily many, possibly dependent parameters that may occur in various CPTs. This lifts the severe restrictions on parameters imposed in existing work on parametric Bayes networks (pBNs), e.g., limiting the number of parametrized CPTs to one or two, or forbidding parameter dependencies between several CPTs. We describe how our techniques can be used for various pBN synthesis problems studied in the literature, such as computing sensitivity functions (and values), simple and difference parameter tuning, ratio parameter tuning, and minimal change tuning. Experiments on several benchmarks show that our prototypical tool, built on top of the probabilistic model checker Storm, can handle several hundred parameters.
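As a toy illustration of the setting (not the paper's tool), consider a two-node network A -> B in which one CPT entry is a symbolic parameter p. The sensitivity function Pr(B=1) is then a rational function of p, and simple parameter tuning amounts to solving an equation over it. A minimal sketch with sympy, where the network, the parameter, and the target value 1/4 are all illustrative assumptions:

```python
# Minimal sketch of a parametric Bayes network: two nodes A -> B,
# with one symbolic CPT entry p. Not the paper's implementation.
import sympy as sp

p = sp.Symbol("p")                      # symbolic CPT entry (assumed single parameter)

pr_a1 = sp.Rational(3, 10)              # CPT for A: Pr(A=1) = 3/10

# Parametric CPT for B given A: Pr(B=1 | A=1) = p, Pr(B=1 | A=0) = 1/5
pr_b1_given_a1 = p
pr_b1_given_a0 = sp.Rational(1, 5)

# Sensitivity function: marginal Pr(B=1) as a function of p (sum out A)
f = sp.simplify(pr_a1 * pr_b1_given_a1 + (1 - pr_a1) * pr_b1_given_a0)
print(f)                                # 3*p/10 + 7/50

# Simple parameter tuning: find p such that Pr(B=1) = 1/4
print(sp.solve(sp.Eq(f, sp.Rational(1, 4)), p))   # [11/30]
```

With a single parameter the sensitivity function is linear; with multiple, possibly dependent parameters across several CPTs (the case the paper targets), it becomes a multivariate rational function, which is where parametric Markov chain machinery pays off.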
Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are unnaturally small (in various technical senses), which has driven much of the search …
When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. If the target domain covers a smaller visual space than …
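A common concrete form of this strategy, sketched below under illustrative assumptions (the backbone choice, number of classes, and hyperparameters are not from the paper), is to load an ImageNet-pretrained backbone, freeze its weights, and retrain only a new classification head on the specialized domain:

```python
# Minimal fine-tuning sketch with PyTorch/torchvision; resnet18,
# num_classes, and the optimizer settings are illustrative assumptions.
import torch.nn as nn
import torch.optim as optim
from torchvision.models import resnet18, ResNet18_Weights

num_classes = 10                                    # size of the specialized label set (assumed)
model = resnet18(weights=ResNet18_Weights.DEFAULT)  # ImageNet-pretrained backbone

for param in model.parameters():                    # freeze pre-trained features
    param.requires_grad = False

# Replace the final layer; only its parameters are trained
model.fc = nn.Linear(model.fc.in_features, num_classes)
optimizer = optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)
```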
We study the performance of federated learning algorithms and their variants in an asymptotic framework. Our starting point is the formulation of federated learning as a multi-criterion objective, where the goal is to minimize each client's loss using …
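To make the multi-criterion formulation concrete, one common scalarization minimizes a weighted sum of the per-client losses. The sketch below runs FedAvg-style rounds on synthetic least-squares clients; all names and numbers are illustrative, not taken from the paper:

```python
# Scalarized multi-criterion objective: minimize sum_i alpha_i * L_i(w)
# via FedAvg-style rounds. Synthetic data; illustrative only.
import numpy as np

def local_loss_grad(w, X, y):
    """Gradient of a least-squares client loss L_i(w) = ||Xw - y||^2 / n."""
    return 2 * X.T @ (X @ w - y) / len(y)

def fedavg_round(w, clients, alphas, lr=0.01, local_steps=5):
    """One communication round: each client runs local SGD, server averages."""
    updates = []
    for X, y in clients:
        w_local = w.copy()
        for _ in range(local_steps):
            w_local -= lr * local_loss_grad(w_local, X, y)
        updates.append(w_local)
    # Weighted average of client models (weights alpha_i sum to 1)
    return sum(a * u for a, u in zip(alphas, updates))

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
alphas = np.ones(4) / 4                  # uniform client weights (assumed)
w = np.zeros(3)
for _ in range(50):
    w = fedavg_round(w, clients, alphas)
print(w)
```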
This paper deals with the following problem: modify a Bayesian network to satisfy a given set of probability constraints by changing only its conditional probability tables, such that the probability distribution of the resulting network remains as close as possible to that of the original network.
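A small sketch of this problem under illustrative assumptions: a two-node network A -> B, a single constraint Pr(B=1) >= 0.4, and KL divergence from the original joint distribution as the closeness measure (the paper's actual distance measure may differ):

```python
# Constrained minimal-change tuning on a toy network A -> B.
# The CPT entries Pr(B=1 | A=a) are tuned; numbers are illustrative.
import numpy as np
from scipy.optimize import minimize

pr_a1 = 0.3
orig = np.array([0.7, 0.2])              # original Pr(B=1 | A=1), Pr(B=1 | A=0)

def joint(theta):
    """Joint distribution over (A, B) induced by CPT entries theta."""
    t1, t0 = theta
    return np.array([
        pr_a1 * t1, pr_a1 * (1 - t1),              # (A=1, B=1), (A=1, B=0)
        (1 - pr_a1) * t0, (1 - pr_a1) * (1 - t0),  # (A=0, B=1), (A=0, B=0)
    ])

def kl_from_original(theta):
    p, q = joint(orig), joint(theta)
    return np.sum(p * np.log(p / q))

# Probability constraint: Pr(B=1) = pr_a1*t1 + (1-pr_a1)*t0 >= 0.4
constraint = {"type": "ineq",
              "fun": lambda th: pr_a1 * th[0] + (1 - pr_a1) * th[1] - 0.4}
res = minimize(kl_from_original, orig, bounds=[(1e-6, 1 - 1e-6)] * 2,
               constraints=[constraint])
print(res.x)                             # tuned CPT entries closest to orig
```

The original network has Pr(B=1) = 0.35, so the constraint is active and the optimizer must move the CPT entries, but only as little as the divergence objective allows.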
We evaluate the amount of fine-tuning in constrained …