
A Universe that does not know the time

Added by Joao Magueijo
Publication date: 2018
Field: Physics
Language: English





In this paper we propose that cosmological time is a quantum observable that does not commute with other quantum operators essential for the definition of cosmological states, notably the cosmological constant. This is inspired by properties of a measure of time---the Chern-Simons time---and by the fact that in some theories it appears as a conjugate to the cosmological constant, with the two promoted to non-commuting quantum operators. Thus, the Universe may be delocalised in time: it does not {\it know} the time, a property which opens up new cosmological scenarios, as well as invalidating several paradoxes, such as the timelike tower of turtles associated with an omnipresent time line. Alternatively, a Universe with a sharply defined clock time must have an indeterminate cosmological constant. The challenge then is to explain how islands of localized time may emerge and give rise to localized histories. In some scenarios this is achieved by backward transitions in quantum time, cycling the Universe in something akin to a time machine cycle, with classical flow and quantum ebbing. The emergence of matter in a sea of $\Lambda$ probably provides the ballast behind classical behaviour.
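The trade-off the abstract describes---sharp clock time implying an indeterminate cosmological constant, and vice versa---can be illustrated by a schematic Heisenberg-style relation. The pairing of Chern-Simons time with the cosmological constant is stated in the abstract; the symbol $h_{\mathrm{eff}}$ for the effective conjugation constant is an assumption introduced here for illustration only:

```latex
% Schematic only: a canonical pair (Chern-Simons time, cosmological constant)
% with an effective constant h_eff; the precise constant is model-dependent.
\[
  [\hat{T}_{\mathrm{CS}},\, \hat{\Lambda}] \;=\; i\, h_{\mathrm{eff}}
  \quad\Longrightarrow\quad
  \sigma(T_{\mathrm{CS}})\,\sigma(\Lambda) \;\ge\; \frac{h_{\mathrm{eff}}}{2} .
\]
```

Read this way, a state sharply peaked in $\Lambda$ (small $\sigma(\Lambda)$) must be broadly spread in $T_{\mathrm{CS}}$: the Universe "does not know the time."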



Related research

Recently, the Thakurta metric has been adopted as a model of primordial black holes by several authors. We show that the spacetime described by this metric has neither a black-hole event horizon nor a black-hole trapping horizon, and involves a violation of the null energy condition as a solution of the Einstein equation. Therefore, this metric does not describe a cosmological black hole in the early universe.
Luke M. Butcher (2016)
Debate persists as to whether the cosmological constant $\Lambda$ can directly modify the power of a gravitational lens. With the aim of reestablishing a consensus on this issue, I conduct a comprehensive analysis of gravitational lensing in the Schwarzschild--de Sitter spacetime, wherein the effects of $\Lambda$ should be most apparent. The effective lensing law is found to be in precise agreement with the $\Lambda=0$ result: $\alpha_\mathrm{eff} = 4m/b_\mathrm{eff} + 15\pi m^2/4b_\mathrm{eff}^2 + O(m^3/b_\mathrm{eff}^3)$, where the effective bending angle $\alpha_\mathrm{eff}$ and impact parameter $b_\mathrm{eff}$ are defined by the angles and angular diameter distances measured by a comoving cosmological observer. [These observers follow the timelike geodesic congruence which (i) respects the continuous symmetries of the spacetime and (ii) approaches local isotropy most rapidly at large distance from the lens.] The effective lensing law can be derived using lensed or unlensed angular diameter distances, although the inherent ambiguity of unlensed distances generates an additional uncertainty $O(m^5/\Lambda b_\mathrm{eff}^7)$. I conclude that the cosmological constant does not interfere with the standard gravitational lensing formalism.
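The quoted lensing law is easy to evaluate numerically. The sketch below truncates the series at second order in $m/b_\mathrm{eff}$ and plugs in solar values (the function name and the choice of a solar-limb example are mine, not the paper's); in geometrized units $G=c=1$, masses and impact parameters are both lengths:

```python
import math

def bending_angle_eff(m, b_eff):
    """Effective bending angle alpha_eff = 4m/b + 15*pi*m^2/(4 b^2),
    i.e. the Schwarzschild--de Sitter lensing law truncated at O(m^3/b^3).
    Geometrized units (G = c = 1): m and b_eff are lengths in metres."""
    return 4.0 * m / b_eff + 15.0 * math.pi * m**2 / (4.0 * b_eff**2)

# Illustrative example: a solar-mass lens with a light ray grazing the limb.
m_sun = 1.48e3        # GM_sun / c^2 in metres
b = 6.96e8            # solar radius in metres
alpha = bending_angle_eff(m_sun, b)          # radians
alpha_arcsec = alpha * 206264.8              # ~1.75 arcsec, the classic value
```

The second-order term is some five orders of magnitude below the leading $4m/b$ term here, which is why the Eddington-era measurements only probed the first term.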
Angelo Tartaglia (2012)
This talk discusses various aspects of the structure of space-time, presenting mechanisms leading to the explanation of the rigidity of the manifold and to the emergence of time, i.e. of the Lorentzian signature. The proposed ingredient is the analog, in four dimensions, of the deformation energy associated with the common three-dimensional elasticity theory. The inclusion of this additional term in the Lagrangian of empty space-time accounts for gravity as an emergent feature from the microscopic structure of space-time. Once time has legitimately been introduced, a global positioning method based on local measurements of proper times between the arrivals of electromagnetic pulses from independent distant sources is presented. The method considers both pulsars and artificial emitters located on celestial bodies of the solar system as pulsating beacons to be used for navigation and positioning.
Models based on the Transformer architecture have achieved better accuracy than the ones based on competing architectures for a large set of tasks. A unique feature of the Transformer is its universal application of a self-attention mechanism, which allows for free information flow at arbitrary distances. Following a probabilistic view of the attention via the Gaussian mixture model, we find empirical evidence that the Transformer attention tends to explain away certain input neurons. To compensate for this, we propose a doubly-normalized attention scheme that is simple to implement and provides theoretical guarantees for avoiding the explaining away effect without introducing significant computational or memory cost. Empirically, we show that the new attention schemes result in improved performance on several well-known benchmarks.
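The abstract does not spell out the doubly-normalized scheme, so the sketch below is one plausible reading (the function names and the order of the two normalization steps are assumptions): first apply the softmax over the query axis, so every key receives some total attention mass and cannot be entirely "explained away", then renormalize each query's weights to sum to one. Plain Python is used to keep the sketch dependency-free:

```python
import math

def softmax(row):
    """Numerically stable softmax of a list of floats."""
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

def matmul(A, B):
    """Product of two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def doubly_normalized_attention(Q, K, V):
    """Hypothetical sketch of doubly-normalized attention.
    Q, K, V: lists of row vectors (queries, keys, values)."""
    d = len(Q[0])
    # Scaled dot-product scores: scores[i][j] = q_i . k_j / sqrt(d)
    scores = [[sum(q * k for q, k in zip(qi, kj)) / math.sqrt(d) for kj in K]
              for qi in Q]
    # Step 1: softmax over the *query* axis (columns), so keys compete
    # for queries and no key's total weight collapses to zero.
    cols = [softmax(list(c)) for c in zip(*scores)]
    w = [list(r) for r in zip(*cols)]          # back to [query][key] layout
    # Step 2: renormalize each query's row so its weights sum to one.
    w = [[x / sum(row) for x in row] for row in w]
    return matmul(w, V)
```

With standard softmax attention, only step 2 would be applied directly to the raw scores; the extra column normalization in step 1 is what (on this reading) guarantees each input neuron retains influence.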
I give a critical review of the holographic hypothesis, which posits that a universe with gravity can be described by a quantum field theory in fewer dimensions. I first recall how the idea originated from considerations on black hole thermodynamics and the so-called information paradox that arises when Hawking radiation is taken into account. String quantum gravity tried to solve the puzzle using the AdS/CFT correspondence, according to which a black hole in a 5-D anti-de Sitter space is like a flat 4-D field of particles and radiation. Although such an interesting holographic property, also called gauge/gravity duality, has never been proved rigorously, it has spurred a number of research programs in fields as diverse as nuclear physics, condensed matter physics, general relativity and cosmology. I finally discuss the pros and cons of the holographic conjecture, and emphasize the key role played by black holes in understanding quantum gravity and possible dualities between distant fields of theoretical physics.