
Fine-Tuning in the Context of Bayesian Theory Testing

Posted by: Luke Barnes
Publication date: 2017
Research field: Physics
Language: English
Author: Luke A. Barnes





Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are unnaturally small (in various technical senses), which has driven much of the search for physics beyond the standard model. Of particular interest is the fine-tuning of the universe for life, which suggests that our universe's ability to create physical life forms is improbable and in need of explanation, perhaps by a multiverse. This claim has been challenged on the grounds that the relevant probability measure cannot be justified because it cannot be normalized, and so small probabilities cannot be inferred. We show how fine-tuning can be formulated within the context of Bayesian theory testing (or model selection) in the physical sciences. The normalizability problem is seen to be a general problem for testing any theory with free parameters, not a problem unique to fine-tuning. Physical theories in fact avoid such problems in one of two ways: dimensional parameters are bounded by the Planck scale, avoiding troublesome infinities, and we are not compelled to assume that dimensionless parameters are distributed uniformly, which avoids non-normalizability.
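To make the normalizability point concrete, here is a minimal worked version of the evidence integral at stake; the log-uniform prior and the bounds are illustrative choices, not prescriptions taken from the paper.

```latex
% Evidence (marginal likelihood) for a theory T with free parameter \theta:
\[
  p(D \mid T) = \int p(D \mid \theta, T)\, p(\theta \mid T)\, \mathrm{d}\theta .
\]
% A flat prior on an unbounded range cannot be normalized,
% \int_0^\infty c \,\mathrm{d}\theta = \infty, so the evidence above is
% ill-defined. A log-uniform prior on a bounded range, e.g.
\[
  p(\theta \mid T) = \frac{1}{\theta \, \ln(\theta_{\max}/\theta_{\min})},
  \qquad \theta \in [\theta_{\min}, \theta_{\max}],
\]
% integrates to one, so the evidence and any Bayes factors built from it
% are finite. Bounding dimensional parameters by the Planck scale plays
% the same role for them.
```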


Read also

The physical processes that determine the properties of our everyday world, and of the wider cosmos, are set by some key numbers: the constants of micro-physics and the parameters that describe the expanding universe in which we have emerged. We identify various steps in the emergence of stars, planets and life that are dependent on these fundamental numbers, and explore how these steps might have been changed, or completely prevented, if the numbers were different. We then outline some cosmological models where physical reality is vastly more extensive than the universe that astronomers observe (perhaps even involving many big bangs), which could perhaps encompass domains governed by different physics. Although the concept of a multiverse is still speculative, we argue that attempts to determine whether it exists constitute a genuinely scientific endeavor. If we indeed inhabit a multiverse, then we may have to accept that there can be no explanation other than anthropic reasoning for some features of our world.
This paper proposes various new analysis techniques for Bayes networks in which conditional probability tables (CPTs) may contain symbolic variables. The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Markov chains. Our techniques are applicable to arbitrarily many, possibly dependent parameters that may occur in various CPTs. This lifts the severe restrictions imposed by existing works on parametric Bayes networks (pBNs), which, e.g., limit the number of parametrized CPTs to one or two, or forbid parameter dependencies between several CPTs. We describe how our techniques can be used for various pBN synthesis problems studied in the literature, such as computing sensitivity functions (and values), simple and difference parameter tuning, ratio parameter tuning, and minimal change tuning. Experiments on several benchmarks show that our prototypical tool, built on top of the probabilistic model checker Storm, can handle several hundred parameters.
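The tool and benchmarks above are the paper's; the following is only a toy illustration of what a sensitivity function and parameter tuning mean, using a hypothetical two-node network and sympy rather than the Storm-based machinery the abstract describes.

```python
import sympy as sp

# Hypothetical two-node network X -> Y with one symbolic CPT entry
# p = P(X=1); the remaining CPT entries are fixed constants.
p = sp.symbols("p", positive=True)
p_y1_given_x1 = sp.Rational(9, 10)
p_y1_given_x0 = sp.Rational(2, 10)

# Sensitivity function: the query probability as a rational function of p.
p_y1 = p_y1_given_x1 * p + p_y1_given_x0 * (1 - p)      # P(Y=1)
p_x1_given_y1 = sp.simplify(p_y1_given_x1 * p / p_y1)   # P(X=1 | Y=1), by Bayes

# Simple parameter tuning: choose p so that P(Y=1) hits a target value.
target = sp.Rational(1, 2)
print(sp.solve(sp.Eq(p_y1, target), p))                 # -> [3/7]
```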
Luke A. Barnes (2017)
Theory testing in the physical sciences has been revolutionized in recent decades by Bayesian approaches to probability theory. Here, I will consider Bayesian approaches to theory extensions, that is, theories like inflation which aim to provide a deeper explanation for some aspect of our models (in this case, the standard model of cosmology) that seems unnatural or fine-tuned. In particular, I will consider how cosmologists can test the multiverse using observations of this universe.
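As a numerical companion to the Bayesian testing idea, here is a toy Bayes-factor calculation between a base theory and an extension with one free parameter; the data point, uncertainty, and prior width are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

data = 1.2    # a single hypothetical measurement
sigma = 0.5   # assumed measurement uncertainty

# T0 predicts theta = 0 exactly; its evidence is the likelihood there.
evidence_t0 = norm.pdf(data, loc=0.0, scale=sigma)

# T1 has a free theta with a normalizable prior, here N(0, 2); its
# evidence marginalizes the likelihood over that prior.
prior = norm(loc=0.0, scale=2.0)
evidence_t1, _ = quad(
    lambda t: norm.pdf(data, loc=t, scale=sigma) * prior.pdf(t),
    -np.inf, np.inf,
)

# Bayes factor > 1 favors the extension; the prior width automatically
# penalizes T1's extra freedom (the Occam factor).
print(evidence_t1 / evidence_t0)
```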
When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. If the target domain covers a smaller visual space than the source domain used for pre-training (e.g. ImageNet), the fine-tuned network is likely to be over-parameterized. However, applying network pruning as a post-processing step to reduce the memory requirements has drawbacks: fine-tuning and pruning are performed independently; pruning parameters are set once and cannot adapt over time; and the highly parameterized nature of state-of-the-art pruning methods makes it prohibitive to manually search the pruning parameter space for deep networks, leading to coarse approximations. We propose a principled method for jointly fine-tuning and compressing a pre-trained convolutional network that overcomes these limitations. Experiments on two specialized image domains (remote sensing images and describable textures) demonstrate the validity of the proposed approach.
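The paper's joint method is its own; the sketch below only shows the general pattern of interleaving magnitude pruning with fine-tuning via torch.nn.utils.prune, with a tiny placeholder network and a random batch standing in for a pre-trained backbone and target-domain data.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Placeholder backbone; in practice this would be an ImageNet-pretrained model.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    # Fine-tune on the specialized domain (random batch as a stand-in).
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
    # Interleave compression with fine-tuning: each epoch, mask another 20%
    # of the smallest-magnitude remaining conv weights, rather than pruning
    # once after training finishes.
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            prune.l1_unstructured(m, name="weight", amount=0.2)
```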