107 - Ari Belenkiy 2016
In February 1700, Isaac Newton needed a precise tropical year to design a new universal calendar that would supersede the Gregorian one. However, 17th-century astronomers were uncertain of the long-term variation in the inclination of the Earth's axis and were suspicious of Ptolemy's equinox observations. As a result, they produced a wide range of tropical years. Facing this problem, Newton attempted to compute the length of the year on his own, using the ten ancient equinox observations reported by the famous Greek astronomer Hipparchus of Rhodes. Though Newton had a very thin sample of data, he obtained a tropical year only a few seconds longer than the correct length. The reason lies in Newton's application of a technique similar to modern regression analysis. Newton wrote down the first of the two so-called normal equations known from the ordinary least-squares (OLS) method. In that procedure, Newton seems to have been the first to employ the mean (average) value of the data set, while the other leading astronomers of the era (Tycho Brahe, Galileo, and Kepler) used the median. Fifty years after Newton, in 1750, Newton's method was rediscovered and enhanced by Tobias Mayer. Remarkably, the same regression method served with distinction in the late 1920s when the founding fathers of modern cosmology, Georges Lemaitre (1927), Edwin Hubble (1929), and Willem de Sitter (1930), employed it to derive the Hubble constant.
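For orientation, the "first normal equation" mentioned above can be sketched from standard OLS theory (this is textbook material, not a formula taken from Belenkiy's paper). Fitting a line y = a + bx to data points (x_i, y_i) by least squares yields two conditions on the residuals:

\[
\sum_{i=1}^{n}\bigl(y_i - a - b\,x_i\bigr) = 0
\quad\Longrightarrow\quad
a = \bar{y} - b\,\bar{x},
\qquad\qquad
\sum_{i=1}^{n} x_i\bigl(y_i - a - b\,x_i\bigr) = 0 .
\]

The first equation forces the fitted line through the point of sample means (x̄, ȳ), which is why writing it down is tantamount to working with the mean of the data set rather than the median.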
Logicians and philosophers of science have proposed various formal criteria for theoretical equivalence. In this paper, we examine two such proposals: definitional equivalence and categorical equivalence. In order to show precisely how these two well-known criteria are related to one another, we investigate an intermediate criterion called Morita equivalence.
The essay is devoted to the personality of the prominent theorist D.V. Volkov and his pioneering work in quantum field theory and elementary particle physics.
In this paper, we examine the relationship between general relativity and the theory of Einstein algebras. We show that according to a formal criterion for theoretical equivalence recently proposed by Halvorson (2012, 2015) and Weatherall (2015), the two are equivalent theories.
390 - G. R. Stewart 2015
The editors have dedicated this special issue on superconducting materials to Ted Geballe, in honor of his numerous seminal contributions to the field over more than 60 years, in the year of his 95th birthday. Here, as an executive summary, are just a few highlights of his research in superconductivity, leavened with some anecdotes and ending with some of Ted's general insights and words of wisdom.
98 - Sara Imari Walker 2015
A perplexing problem in understanding physical reality is why the universe seems comprehensible, and correspondingly why there should exist physical systems capable of comprehending it. In this essay I explore the possibility that, rather than being an odd coincidence arising from our strange position as passive (and, even more strangely, conscious) observers in the cosmos, these two problems might be related and could be explainable in terms of fundamental physics. The perspective presented suggests a potential unified framework where, taken together, comprehenders and comprehensibility are part of the causal structure of physical reality, which is treated as a causal graph (network) connecting states that are physically possible. I argue that in some local regions, the most probable states are those that include physical systems which contain information encodings - such as mathematics, language, and art - because these are the most highly connected to other possible states in this causal graph. Such physical systems include life and - of particular interest for the discussion of the place of math in physical reality - comprehenders capable of making mathematical sense of the world. Within this framework, the descent of math is an undirected outcome of the evolution of the universe, which will tend toward states that are increasingly connected to other possible states of the universe, a process greatly facilitated if some physical systems know the rules of the game. I therefore conclude that our ability to use mathematics to describe, and more importantly manipulate, the natural world may not be an anomaly or a trick, but instead could provide clues to the underlying causal structure of physical reality.
195 - Moses Fayngold 2015
Some known relativistic paradoxes are reconsidered for closed spaces, using a simple geometric model. For two twins in a closed space, a real paradox seems to emerge when the traveling twin moves uniformly along a geodesic and returns to the starting point without turning back. Accordingly, the reference frames (RFs) of the two twins seem to be equivalent, which makes the twin paradox irresolvable: each twin can claim to be at rest and therefore to have aged more than the partner upon their reunion. In reality, the paradox has a resolution in this case as well. Apart from the distinction between the two RFs with respect to the actual forces in play, they can be distinguished by clock synchronization. A closed space singles out a truly stationary RF with single-valued global time; in all other frames, time is not a single-valued parameter. This implies that even uniform motion along a spatial geodesic in a compact space is not truly inertial, and there is an effective force on an object in such motion. Therefore, the traveling twin will age less upon circumnavigation than the stationary one, just as in flat space-time. Ironically, Relativity in this case emerges free of paradoxes at the price of bringing back the pre-Galilean concept of absolute rest. An example showing the absence of paradoxes is also considered for the more realistic case of a time-evolving closed space.
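The quantitative statement behind the last claim is the textbook time-dilation relation (a standard formula, not drawn from the abstract itself): measured in the preferred stationary frame singled out by the closed space, a twin circumnavigating at constant speed v accumulates proper time

\[
\Delta\tau = \Delta t\,\sqrt{1 - v^2/c^2} < \Delta t ,
\]

so the circumnavigating twin returns younger, just as in the flat-space version of the paradox.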
I give an epistemological analysis of the development of relativistic cosmology from 1917 to 1966, based on the seminal articles by Einstein, de Sitter, Friedmann, Lemaitre, Hubble, Gamow, and other historical figures of the field. It appears that most of the ingredients of the present-day standard cosmological model - including the acceleration of the expansion due to a repulsive dark energy, the interpretation of the cosmological constant as vacuum energy, and the possible non-trivial topology of space - had been anticipated by Georges Lemaitre, although his articles remain largely uncited.
Yes. That is my polemical reply to the titular question in Travis Norsen's self-styled "polemical response" to Howard Wiseman's recent paper. Less polemically, I am pleased to see that on two of my positions --- that Bell's 1964 theorem is different from Bell's 1976 theorem, and that the former does not include Bell's one-paragraph heuristic presentation of the EPR argument --- Norsen has made significant concessions. In his response, Norsen admits that Bell's recapitulation of the EPR argument in [the relevant] paragraph "leaves something to be desired", that it "disappoints" and "is problematic". Moreover, Norsen makes other statements that imply, on the face of it, that he should have no objections to the title of my recent paper ("The Two Bell's Theorems of John Bell"). My principal aim in writing that paper was to try to bridge the gap between two interpretational camps, whom I call "operationalists" and "realists", by pointing out that they use the phrase "Bell's theorem" to mean different things: his 1964 theorem (assuming locality and determinism) and his 1976 theorem (assuming local causality), respectively. Thus, it is heartening that at least one person from one side has taken one step on my bridge. That said, there are several issues of contention with Norsen, which we (the two authors) address after discussing the extent of our agreement with Norsen. The most significant issues are: the indefiniteness of the word "locality" prior to 1964; and the assumptions Einstein made in the paper quoted by Bell in 1964, and their relation to Bell's theorem.
Bell's theorem can refer to two different theorems that John Bell proved, the first in 1964 and the second in 1976. His 1964 theorem is the incompatibility of quantum phenomena with the joint assumptions of Locality and Predetermination. His 1976 theorem is their incompatibility with the single property of Local Causality. This is contrary to Bell's own later assertions that his 1964 theorem began with the assumption of Local Causality, even if not by that name. Although the two Bell's theorems are logically equivalent, their assumptions are not. Hence, the earlier and later theorems suggest quite different conclusions, embraced by operationalists and realists, respectively. The key issue is whether Locality or Local Causality is the appropriate notion emanating from Relativistic Causality, and this rests on one's basic notion of causation. For operationalists the appropriate notion is what is here called the Principle of Agent-Causation, while for realists it is Reichenbach's Principle of common cause. By breaking down the latter into even more basic Postulates, it is possible to obtain a version of Bell's theorem in which each camp could reject one assumption, happy that the remaining assumptions reflect its weltanschauung. Formulating Bell's theorem in terms of causation is fruitful not just for attempting to reconcile the two camps, but also for better describing the ontology of different quantum interpretations and for more deeply understanding the implications of Bell's marvellous work.
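For reference, the two sets of assumptions admit a compact textbook formulation (a standard sketch, not drawn from the abstract itself). Writing A, B for the outcomes, a, b for the measurement settings, and λ for any hidden variables, the 1976 theorem assumes Local Causality, the factorization of joint outcome probabilities, while the 1964 theorem assumes Predetermination together with Locality (each outcome is a function of the local setting only):

\[
\text{1976:}\quad P(A,B \mid a,b,\lambda) = P(A \mid a,\lambda)\,P(B \mid b,\lambda);
\qquad
\text{1964:}\quad A = A(a,\lambda),\quad B = B(b,\lambda).
\]

Either set of assumptions yields the same experimentally testable inequalities, which is the sense in which the two theorems are logically equivalent even though their premises differ.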