Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are unnaturally small (in various technical senses), which has driven much of the search for physics beyond the standard model. Of particular interest is the fine-tuning of the universe for life, which suggests that our universe's ability to create physical life forms is improbable and in need of explanation, perhaps by a multiverse. This claim has been challenged on the grounds that the relevant probability measure cannot be justified because it cannot be normalized, and so small probabilities cannot be inferred. We show how fine-tuning can be formulated within the context of Bayesian theory testing (or "model selection") in the physical sciences. The normalizability problem is seen to be a general problem for testing any theory with free parameters, not a problem unique to fine-tuning. Physical theories in fact avoid such problems in one of two ways. Dimensional parameters are bounded by the Planck scale, avoiding troublesome infinities, and we are not compelled to assume that dimensionless parameters are distributed uniformly, which avoids non-normalizability.
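The abstract's central move, that a prior bounded by the Planck scale is normalizable and so yields finite evidences and well-defined (possibly small) probabilities, can be sketched numerically. The following is a hedged toy illustration, not the paper's actual calculation: the model, the Gaussian likelihood, and the "life-permitting" window are all invented for demonstration.

```python
import math

# Toy Bayesian evidence calculation for a model with one free dimensional
# parameter m, expressed in Planck units so that 0 < m < 1 (the bounded,
# normalizable prior the abstract describes).  With a uniform "prior" on
# (0, inf) instead, the normalization integral diverges and no probability
# could be assigned -- the normalizability objection in a nutshell.

def likelihood(m, observed=0.1, sigma=0.01):
    # Hypothetical Gaussian likelihood: the data prefer m near 0.1.
    return (math.exp(-0.5 * ((m - observed) / sigma) ** 2)
            / (sigma * math.sqrt(2.0 * math.pi)))

def evidence_bounded(n=100_000):
    # Midpoint-rule quadrature for Z = \int_0^1 L(m) p(m) dm with the
    # normalized flat prior p(m) = 1 on (0, 1).
    h = 1.0 / n
    return sum(likelihood((i + 0.5) * h) * h for i in range(n))

Z = evidence_bounded()

# Prior probability of an (invented) narrow life-permitting window: with a
# normalized prior this is a finite, small number that CAN be inferred.
window_lo, window_hi = 0.09, 0.11
prior_prob_window = window_hi - window_lo  # flat prior on (0, 1)
```

Because the prior is normalized, `Z` is finite (here it is close to 1, since the toy likelihood's support lies well inside the unit interval) and the prior mass of the narrow window is a well-defined small probability, which is the kind of quantity the fine-tuning argument needs.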
The physical processes that determine the properties of our everyday world, and of the wider cosmos, are determined by some key numbers: the constants of micro-physics and the parameters that describe the expanding universe in which we have emerged.
This paper proposes various new analysis techniques for Bayes networks in which conditional probability tables (CPTs) may contain symbolic variables. The key idea is to exploit scalable and powerful techniques for synthesis problems in parametric Mar
Theory testing in the physical sciences has been revolutionized in recent decades by Bayesian approaches to probability theory. Here, I will consider Bayesian approaches to theory extensions, that is, theories like inflation which aim to provide a de
We evaluate the amount of fine-tuning in constrain
When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain. If the target domain covers a smaller visual space tha