Typicality arguments attempt to use the Copernican Principle to draw conclusions about the cosmos and presently unknown conscious beings within it. The most notorious is the Doomsday Argument, which purports to constrain humanity's future from its current lifespan alone. These arguments rest on a likelihood calculation that penalizes models in proportion to the number of distinguishable observers. I argue that such reasoning leads to solipsism, the belief that one is the only being in the world, and is therefore unacceptable. Using variants of the Sleeping Beauty thought experiment as a guide, I present a framework for evaluating observations in a large cosmos: Fine Graining with Auxiliary Indexicals (FGAI). FGAI requires the construction of specific models of physical outcomes and observations. Valid typicality arguments then emerge from the combinatorial properties of third-person physical microhypotheses. Indexical (observer-relative) facts do not directly constrain physical theories. Instead they serve to weight different provisional evaluations of credence. These weights define a probabilistic reference class of locations. As indexical knowledge changes, the weights shift. I show that the self-applied Doomsday Argument fails in FGAI, even though it can work for an external observer. I also discuss how FGAI could handle observations in large universes with Boltzmann brains.
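To illustrate the likelihood penalty at issue in a hedged, generic form (the symbols $r$ and $N$ are illustrative placeholders, not notation from the paper), the standard Bayesian rendering of the Doomsday Argument has an observer who learns only their birth rank $r$ among all humans who will ever exist assign a hypothesis positing $N$ total humans the likelihood
\[
P(r \mid N) = \frac{1}{N}, \qquad 1 \le r \le N,
\]
so hypotheses with more distinguishable observers are penalized in direct proportion to $N$, shifting the posterior toward small total populations. The argument above is that applying this penalty without restriction ultimately favors hypotheses with a single observer, i.e. solipsism.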
The physical processes that determine the properties of our everyday world, and of the wider cosmos, are set by some key numbers: the constants of micro-physics and the parameters that describe the expanding universe in which we have emerged. We identify various steps in the emergence of stars, planets and life that are dependent on these fundamental numbers, and explore how these steps might have been changed, or completely prevented, if the numbers were different. We then outline some cosmological models where physical reality is vastly more extensive than the universe that astronomers observe (perhaps even involving many big bangs), which could perhaps encompass domains governed by different physics. Although the concept of a multiverse is still speculative, we argue that attempts to determine whether it exists constitute a genuinely scientific endeavor. If we indeed inhabit a multiverse, then we may have to accept that there can be no explanation other than anthropic reasoning for some features of our world.
We discuss the reception of Copernican astronomy by the Provençal humanists of the XVIth-XVIIth centuries, beginning with Michel de Montaigne, who was the first to recognize the potential scientific and philosophical revolution represented by heliocentrism. Then we describe how, after Kepler's Astronomia Nova of 1609 and the first telescopic observations by Galileo, it was in the south of France that the New Astronomy found its main promoters in the humanists and amateurs éclairés, Nicolas-Claude Fabri de Peiresc and Pierre Gassendi. The professional astronomer Jean-Dominique Cassini, also from Provence, would later elevate the field to new heights in Paris.
We address a recent proposal concerning surplus structure due to Nguyen et al. [Why Surplus Structure is Not Superfluous. Br. J. Phil. Sci. Forthcoming.] We argue that the sense of surplus structure captured by their formal criterion is importantly different from, and in a sense opposite to, another sense of surplus structure used by philosophers. We argue that minimizing structure in one sense is generally incompatible with minimizing structure in the other sense. We then show how these distinctions bear on Nguyen et al.'s arguments about Yang-Mills theory and on the hole argument.
The locus classicus of the philosophical literature on the hole argument is the 1987 paper by Earman & Norton [What Price Space-time Substantivalism? The Hole Story. Br. J. Phil. Sci.]. This paper has a well-known back-story, concerning work by Stachel and Norton on Einstein's thinking in the years 1913-15. Less well known is a connection between the hole argument and Earman's work on Leibniz in the 1970s and 1980s, which in turn can be traced to an argument first presented in 1975 by Howard Stein. Remarkably, this thread originates with a misattribution: the argument Earman attributes to Stein, which ultimately morphs into the hole argument, was not the argument Stein gave. The present paper explores this episode and presents some reflections on how it bears on the subsequent literature.
Fine-tuning in physics and cosmology is often used as evidence that a theory is incomplete. For example, the parameters of the standard model of particle physics are unnaturally small (in various technical senses), which has driven much of the search for physics beyond the standard model. Of particular interest is the fine-tuning of the universe for life, which suggests that our universe's ability to create physical life forms is improbable and in need of explanation, perhaps by a multiverse. This claim has been challenged on the grounds that the relevant probability measure cannot be justified because it cannot be normalized, and so small probabilities cannot be inferred. We show how fine-tuning can be formulated within the context of Bayesian theory testing (or model selection) in the physical sciences. The normalizability problem is seen to be a general problem for testing any theory with free parameters, and not a unique problem for fine-tuning. Physical theories in fact avoid such problems in one of two ways. Dimensional parameters are bounded by the Planck scale, avoiding troublesome infinities, and we are not compelled to assume that dimensionless parameters are distributed uniformly, which avoids non-normalizability.
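As a minimal sketch of the Bayesian machinery being invoked (the symbols $T$, $D$ and $\theta$ are generic placeholders, not the paper's own notation), testing a theory $T$ with a free parameter $\theta$ against data $D$ requires marginalizing the likelihood over a prior for that parameter:
\[
P(D \mid T) = \int P(D \mid \theta, T)\, p(\theta \mid T)\, d\theta .
\]
The normalizability worry arises when $\theta$ ranges over an unbounded interval and $p(\theta \mid T)$ is taken to be uniform, since no proper prior of that form exists; bounding dimensional parameters by the Planck scale, or choosing non-uniform priors for dimensionless ones, restores a well-defined evidence.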