
Fine-Tuning the Odds in Bayesian Networks

Posted by: Bahare Salmani
Publication date: 2021
Research field: Informatics Engineering
Paper language: English
This paper proposes various new analysis techniques for Bayesian networks in which conditional probability tables (CPTs) may contain symbolic variables. The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Markov chains. Our techniques are applicable to arbitrarily many, possibly dependent parameters that may occur in various CPTs. This lifts the severe restrictions on parameters in existing works on parametric Bayesian networks (pBNs), e.g., limiting the number of parametrized CPTs to one or two, or disallowing parameter dependencies between several CPTs. We describe how our techniques can be used for various pBN synthesis problems studied in the literature, such as computing sensitivity functions (and values), simple and difference parameter tuning, ratio parameter tuning, and minimal change tuning. Experiments on several benchmarks show that our prototypical tool, built on top of the probabilistic model checker Storm, can handle several hundred parameters.
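
The sensitivity functions mentioned above are rational functions of the CPT parameters. The sketch below (a toy illustration, not the authors' Storm-based prototype) shows the idea on a hypothetical two-node network S -> C with a single symbolic parameter p, using sympy to derive the sensitivity function of a posterior and answer a simple parameter-tuning query.

```python
# A toy illustration (not the authors' Storm-based tool) of a parametric BN:
# a two-node network S -> C whose CPT for C contains the symbolic parameter p.
# All numbers below are assumptions chosen for the example.
from sympy import Rational, symbols, simplify, solve

p = symbols('p', positive=True)

P_S1 = Rational(3, 10)            # prior P(S=1)
P_C1_given_S1 = p                 # parametric CPT entry P(C=1 | S=1)
P_C1_given_S0 = Rational(1, 10)   # fixed CPT entry P(C=1 | S=0)

# Marginal P(C=1) and the posterior P(S=1 | C=1), both rational functions of p
P_C1 = P_S1 * P_C1_given_S1 + (1 - P_S1) * P_C1_given_S0
sensitivity = simplify(P_S1 * P_C1_given_S1 / P_C1)

print(sensitivity)                               # 30*p/(30*p + 7)
# Simple parameter tuning: find p such that P(S=1 | C=1) = 0.8
print(solve(sensitivity - Rational(8, 10), p))   # [14/15]
```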


Read also

Luke A. Barnes (2017)
Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are unnaturally small (in various technical senses), which has driven much of the search for physics beyond the standard model. Of particular interest is the fine-tuning of the universe for life, which suggests that our universe's ability to create physical life forms is improbable and in need of explanation, perhaps by a multiverse. This claim has been challenged on the grounds that the relevant probability measure cannot be justified because it cannot be normalized, and so small probabilities cannot be inferred. We show how fine-tuning can be formulated within the context of Bayesian theory testing (or model selection) in the physical sciences. The normalizability problem is seen to be a general problem for testing any theory with free parameters, and not a unique problem for fine-tuning. Physical theories in fact avoid such problems in one of two ways. Dimensional parameters are bounded by the Planck scale, avoiding troublesome infinities, and we are not compelled to assume that dimensionless parameters are distributed uniformly, which avoids non-normalizability.
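
As a concrete illustration of this Bayesian model-selection framing (the numbers below are invented, not taken from the paper), the evidence for a theory with a free parameter is its likelihood averaged over a normalized prior; bounding the parameter range keeps that prior normalizable, and a theory that must spread its prior over a wide range is penalized relative to one that pins the parameter down:

```python
# Toy numbers, not from the paper: the evidence for a theory with a free
# parameter is its likelihood averaged over a *normalized* prior. Bounding the
# parameter range keeps the prior normalizable and the Bayes factor finite.
import numpy as np

def likelihood(theta, observed=1.0, sigma=0.1):
    # Hypothetical Gaussian likelihood of the observation given parameter theta
    return np.exp(-0.5 * ((observed - theta) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Theory T1: free parameter theta, flat prior on the bounded range [0, 100]
theta = np.linspace(0.0, 100.0, 200_001)
prior = np.full_like(theta, 1.0 / 100.0)
evidence_T1 = np.sum(likelihood(theta) * prior) * (theta[1] - theta[0])

# Theory T2: no free parameter, predicts theta = 1 exactly
evidence_T2 = likelihood(1.0)

# Large Bayes factor in favour of T2: T1 pays for spreading its prior thinly
print("Bayes factor T2/T1 ~", evidence_T2 / evidence_T1)
```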
When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. If the target domain covers a smaller visual space than the source domain used for pre-training (e.g. ImageNet), the fine-tuned network is likely to be over-parameterized. However, applying network pruning as a post-processing step to reduce the memory requirements has drawbacks: fine-tuning and pruning are performed independently; pruning parameters are set once and cannot adapt over time; and the highly parameterized nature of state-of-the-art pruning methods makes it prohibitive to manually search the pruning parameter space for deep networks, leading to coarse approximations. We propose a principled method for jointly fine-tuning and compressing a pre-trained convolutional network that overcomes these limitations. Experiments on two specialized image domains (remote sensing images and describable textures) demonstrate the validity of the proposed approach.
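
A rough sketch of the general idea of interleaving fine-tuning with pruning is given below; it uses plain iterative L1 magnitude pruning from torch.nn.utils.prune rather than the paper's joint optimization, and the model, data loader, and hyperparameters are placeholders.

```python
# A hedged sketch of interleaving fine-tuning and pruning on a pre-trained CNN.
# This uses simple iterative L1 magnitude pruning, NOT the paper's joint
# optimization; the model choice, target_domain_loader, and all hyperparameters
# below are assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")       # pre-trained source-domain network
model.fc = nn.Linear(model.fc.in_features, 10)         # new head for the specialized domain
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

conv_layers = [m for m in model.modules() if isinstance(m, nn.Conv2d)]

for epoch in range(10):
    for images, labels in target_domain_loader:        # assumed DataLoader for the target domain
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    # After each epoch, prune an additional 10% of the remaining conv weights,
    # so compression proceeds while fine-tuning continues.
    for layer in conv_layers:
        prune.l1_unstructured(layer, name="weight", amount=0.1)
```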
We study the performance of federated learning algorithms and their variants in an asymptotic framework. Our starting point is the formulation of federated learning as a multi-criterion objective, where the goal is to minimize each client's loss using information from all of the clients. We propose a linear regression model, where, for a given client, we theoretically compare the performance of various algorithms in the high-dimensional asymptotic limit. This asymptotic multi-criterion approach naturally models the high-dimensional, many-device nature of federated learning and suggests that personalization is central to federated learning. Our theory suggests that Fine-tuned Federated Averaging (FTFA), i.e., Federated Averaging followed by local training, and the ridge regularized variant Ridge-tuned Federated Averaging (RTFA) are competitive with more sophisticated meta-learning and proximal-regularized approaches. In addition to being conceptually simpler, FTFA and RTFA are computationally more efficient than their competitors. We corroborate our theoretical claims with extensive experiments on federated benchmarks.
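
The following numpy sketch (synthetic data, invented dimensions and step sizes) mimics the FTFA/RTFA recipe in the linear-regression setting: average locally fitted models, then fine-tune each client locally, optionally with a ridge penalty pulling the solution back toward the average.

```python
# A small numpy sketch of FTFA/RTFA for per-client linear regression on
# synthetic data; client counts, sizes, and learning rates are made up.
import numpy as np

rng = np.random.default_rng(0)
n_clients, n, d = 20, 50, 10
clients = []
for _ in range(n_clients):
    theta = rng.normal(size=d)                         # each client has its own parameter
    X = rng.normal(size=(n, d))
    clients.append((X, X @ theta + 0.1 * rng.normal(size=n)))

def exact_fit(X, y):
    # Local least-squares solution for one client
    return np.linalg.lstsq(X, y, rcond=None)[0]

def local_finetune(X, y, w0, lr=0.05, steps=100, ridge=0.0):
    # Local gradient descent starting from the federated average w0;
    # ridge > 0 pulls the iterates back toward w0 (the RTFA variant).
    w = w0.copy()
    for _ in range(steps):
        w -= lr * (X.T @ (X @ w - y) / len(y) + ridge * (w - w0))
    return w

w_avg = np.mean([exact_fit(X, y) for X, y in clients], axis=0)   # idealized "FedAvg"
w_ftfa = [local_finetune(X, y, w_avg) for X, y in clients]       # FTFA: average, then fine-tune
w_rtfa = [local_finetune(X, y, w_avg, ridge=1.0) for X, y in clients]  # RTFA: ridge toward average
```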
Yun Peng, Zhongli Ding (2012)
This paper deals with the following problem: modify a Bayesian network to satisfy a given set of probability constraints by only changing its conditional probability tables, such that the probability distribution of the resulting network is as close as possible to that of the original network. We propose to solve this problem by extending IPFP (iterative proportional fitting procedure) to probability distributions represented by Bayesian networks. The resulting algorithm E-IPFP is further developed to D-IPFP, which reduces the computational cost by decomposing a global E-IPFP into a set of smaller local E-IPFP problems. Limited analysis is provided, including convergence proofs of the two algorithms. Computer experiments were conducted to validate the algorithms. The results are consistent with the theoretical analysis.
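
For reference, the sketch below runs classical IPFP on an explicit joint table over two binary variables with illustrative numbers; E-IPFP applies the same proportional-fitting step to distributions represented by a Bayesian network rather than by an explicit table.

```python
# Plain IPFP on an explicit joint table over two binary variables; E-IPFP
# extends this idea to distributions factored as a Bayesian network. The joint
# and the target marginals below are illustrative numbers only.
import numpy as np

joint = np.array([[0.3, 0.2],
                  [0.1, 0.4]])           # P(A, B): rows index A, columns index B

constraints = [
    (0, np.array([0.6, 0.4])),           # target marginal for A
    (1, np.array([0.5, 0.5])),           # target marginal for B
]

for _ in range(100):                      # iterate until the marginals (approximately) match
    for axis, target in constraints:
        current = joint.sum(axis=1 - axis)            # current marginal of that variable
        scale = target / current
        joint = joint * (scale[:, None] if axis == 0 else scale[None, :])

print(joint, joint.sum(axis=1), joint.sum(axis=0))    # fitted joint and its marginals
```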
