
Physico-mathematical foundations of relativistic cosmology

Added by Domingos Soares
Publication date: 2013
Fields: Physics
Language: English





I briefly present the foundations of relativistic cosmology, which are the General Theory of Relativity and the Cosmological Principle. I discuss some relativistic models, namely, the Einstein static universe and the Friedmann universes. The classical bibliographic references for the relevant tensorial demonstrations are indicated whenever necessary, although the calculations themselves are not shown.
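For reference, the two models named in the abstract are governed by the Friedmann equations, given here in their standard textbook form (the notation is mine and need not match the paper's):

$$
\left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{kc^2}{a^2} + \frac{\Lambda c^2}{3},
\qquad
\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}.
$$

The Einstein static universe is the special case $\dot a = \ddot a = 0$, which requires positive curvature ($k = +1$) and a cosmological constant fine-tuned against the matter density; the Friedmann universes are the dynamical solutions obtained for $\Lambda = 0$ with $k = -1, 0, +1$.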



Related research

One-shot anonymous unselfishness in economic games is commonly explained by social preferences, which assume that people care about the monetary payoffs of others. However, during the last ten years, research has shown that different types of unselfish behaviour, including cooperation, altruism, truth-telling, altruistic punishment, and trustworthiness, are in fact better explained by preferences for following one's own personal norms - internal standards about what is right or wrong in a given situation. Beyond better organising various forms of unselfish behaviour, this moral preference hypothesis has recently also been used to increase charitable donations, simply by means of interventions that make the morality of an action salient. Here we review experimental and theoretical work dedicated to this rapidly growing field of research, and in doing so we outline mathematical foundations for moral preferences that can be used in future models to better understand selfless human actions and to adjust policies accordingly. These foundations can also be used by artificial intelligence to better navigate the complex landscape of human morality.
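A minimal sketch of how such moral preferences are typically formalised (the notation here is illustrative, not necessarily the authors'): a decision-maker trades material payoff against adherence to a personal norm,

$$
u_i(a) = \pi_i(a) + \mu_i\, m_i(a),
$$

where $\pi_i(a)$ is the monetary payoff of action $a$, $m_i(a)$ measures how well $a$ complies with player $i$'s personal norm, and $\mu_i \ge 0$ weights morality against money. Purely selfish behaviour is the special case $\mu_i = 0$, and interventions that make morality salient can be modelled as increasing $\mu_i$.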
A geometric setup for control theory is presented. The argument is developed through the study of the extremals of action functionals defined on piecewise differentiable curves, in the presence of differentiable non-holonomic constraints. Special emphasis is put on the tensorial aspects of the theory. To start with, the kinematical foundations, culminating in the so-called variational equation, are put on geometrical grounds via the introduction of the concept of infinitesimal control. On the same basis, the usual classification of the extremals of a variational problem into normal and abnormal ones is also rationalized, showing the existence of a purely kinematical algorithm assigning to each admissible curve a corresponding abnormality index, defined in terms of a suitable linear map. The whole machinery is then applied to constrained variational calculus. The argument provides an interesting revisitation of the Pontryagin maximum principle and of the Erdmann-Weierstrass corner conditions, as well as a proof of the classical Lagrange multipliers method and a local interpretation of Pontryagin's equations as dynamical equations for a free (singular) Hamiltonian system. As a final, highly non-trivial topic, a sufficient condition for the existence of finite deformations with fixed endpoints is explicitly stated and proved.
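For orientation, the Pontryagin maximum principle revisited here can be stated schematically as follows (conventional control-theoretic notation rather than the paper's geometric formalism): for dynamics $\dot x = f(x,u)$ and cost $\int L(x,u)\,dt$, one introduces the Hamiltonian

$$
H(x,p,u) = \langle p, f(x,u)\rangle - p_0\,L(x,u),
$$

and along an extremal the state and costate obey $\dot x = \partial H/\partial p$, $\dot p = -\partial H/\partial x$, with the control maximising $H$ over the admissible set at each instant. Normal extremals correspond to $p_0 = 1$ and abnormal ones to the degenerate case $p_0 = 0$, which is the distinction the abnormality index above quantifies.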
Tony Lelièvre 2015
We present a review of recent works on the mathematical analysis of algorithms which were proposed by A.F. Voter and co-workers in the late nineties in order to efficiently generate long trajectories of metastable processes. These techniques have been successfully applied in many contexts, in particular in the field of materials science. The mathematical analysis we propose relies on the notion of quasi-stationary distribution.
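For context, a quasi-stationary distribution admits a standard definition (in notation of my choosing): a probability measure $\nu$ on a metastable domain $D$ is a QSD for the process $(X_t)$ if the law of the process started from $\nu$ and conditioned on not yet having left $D$ remains $\nu$,

$$
\mathbb{P}_\nu\!\left(X_t \in A \mid \tau_D > t\right) = \nu(A) \quad \text{for all measurable } A \subseteq D \text{ and all } t > 0,
$$

where $\tau_D$ is the first exit time from $D$. This memoryless property, reached after a decorrelation time inside the metastable state, is what justifies the accelerated-dynamics algorithms analysed in the review.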
A.N. Gorban, I.Y. Tyukin 2018
The concentration of measure phenomena were discovered as the mathematical background of statistical mechanics at the end of the 19th and the beginning of the 20th century, and were then explored in the mathematics of the 20th and 21st centuries. At the beginning of the 21st century, it became clear that the proper utilisation of these phenomena in machine learning might transform the curse of dimensionality into the blessing of dimensionality. This paper summarises recently discovered phenomena of measure concentration which drastically simplify some machine learning problems in high dimension and allow us to correct legacy artificial intelligence systems. The classical concentration of measure theorems state that i.i.d. random points are concentrated in a thin layer near a surface (a sphere or equators of a sphere, an average or median level set of energy or another Lipschitz function, etc.). The new stochastic separation theorems describe the fine structure of these thin layers: the random points are not only concentrated in a thin layer but are all linearly separable from the rest of the set, even for exponentially large random sets. The linear functionals for separation of points can be selected in the form of the linear Fisher's discriminant. All artificial intelligence systems make errors. Non-destructive correction requires separation of the situations (samples) with errors from the samples corresponding to correct behaviour by a simple and robust classifier. The stochastic separation theorems provide us with such classifiers and with a non-iterative (one-shot) procedure for learning them.
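A minimal numerical sketch of the separation phenomenon described above, in Python (the parameters and names are illustrative choices, not taken from the paper): for i.i.d. points on a high-dimensional sphere, a single sample is, with high probability, separable from thousands of others by the one-line linear functional $x \mapsto \langle x_1, x \rangle$.

import numpy as np

# Illustrative sketch: a random point in high dimension is linearly
# separable from a large random sample (stochastic separation).
rng = np.random.default_rng(0)
d, n = 200, 10_000                              # dimension, sample size
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # i.i.d. points on the unit sphere

x, rest = X[0], X[1:]                           # separate the first point from the rest
proj = rest @ x                                 # values of the functional <x, .>
theta = 0.5                                     # threshold between max(proj) and <x, x> = 1

print(f"largest inner product with the other points: {proj.max():.3f}")
print("separated by the hyperplane <x, .> =", theta, ":", bool(np.all(proj < theta)))

For centred, rotationally symmetric data the Fisher discriminant direction between $\{x\}$ and the remaining points reduces, up to scaling, to $x$ itself, which is why this single inner product suffices; the theorems in the paper quantify how the success probability remains close to one even for exponentially large samples.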
M. Ibison 2008
The cosmological scale factor $a(t)$ of the flat-space Robertson-Walker geometry is examined from a Hamiltonian perspective wherein $a(t)$ is interpreted as an independent dynamical coordinate and the curvature density $\sqrt{-g(a)}\,R(a,\dot a,\ddot a)$ is regarded as an action density in Minkowski spacetime. The resulting Hamiltonian for $a(t)$ is just the first Friedmann equation of the traditional approach (i.e. the Robertson-Walker cosmology of General Relativity), as might be expected. The utility of this approach, however, stems from the fact that each of the terms - matter, radiation, and vacuum, including the kinetic/gravitational field term - is formally an energy density, and the equation as a whole becomes a formal statement of energy conservation. An advantage of this approach is that it facilitates an intuitive understanding of energy balance and exchange on the cosmological scale that is otherwise absent in the traditional presentation. Each coordinate system has its own internally consistent explanation for how energy balance is achieved. For example, in the spacetime with line element $ds^2 = dt^2 - a^2(t)\,d\mathbf{x}^2$, cosmological red-shift emerges as due to a post-recombination interaction between the scalar field $a(t)$ and the EM fields, in which the latter lose energy as if propagating through a homogeneous lossy medium, with the energy lost to the scale factor helping drive the cosmological expansion.
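Schematically, the energy-balance reading described above amounts to writing the first Friedmann equation for the flat case with every term on one side (standard notation, not necessarily Ibison's):

$$
\left(\frac{\dot a}{a}\right)^2 - \frac{8\pi G}{3}\left(\rho_{m,0}\,a^{-3} + \rho_{r,0}\,a^{-4} + \rho_\Lambda\right) = 0,
$$

so the kinetic/gravitational term and the matter, radiation, and vacuum densities sum to zero like the entries of a conserved total energy, and the expansion proceeds by exchange between these formal reservoirs; this is the intuition the paper develops for the red-shift energy loss.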
