Massive galaxy clusters are filled with a hot, turbulent and magnetized intra-cluster medium. Still forming under the action of gravitational instability, they grow in mass by accretion of supersonic flows. These flows partially dissipate into heat through a complex network of large-scale shocks [1], while residual transonic flows create giant turbulent eddies and cascades [2,3]. Turbulence heats the intra-cluster medium [4] and also amplifies magnetic energy by way of dynamo action [5-8]. However, the pattern regulating the transformation of gravitational energy into kinetic, thermal, turbulent and magnetic energies remains unknown. Here we report that the energy components of the intra-cluster medium are ordered according to a permanent hierarchy, in which the ratio of thermal to turbulent to magnetic energy densities remains virtually unaltered throughout the cluster's history, despite the evolution of each individual component and the drive towards equipartition of the turbulent dynamo. This result revolves around the approximately constant efficiency of turbulence generation from the gravitational energy that is freed during mass accretion, revealed by our computational model of cosmological structure formation [3,9]. The permanent character of this hierarchy reflects yet another type of self-similarity in cosmology [10-13], while its structure, consistent with current data [14-18], encodes information about the efficiency of turbulent heating and dynamo action.
178 - D. Sornette 2015
Humankind is confronted with a nuclear stewardship curse, facing the prospect of needing to manage nuclear products over long time scales in the face of the short time scales of human polities. I propose a super Manhattan-type effort to rejuvenate the nuclear energy industry, to overcome the current dead-end in which it finds itself and in which, by force of circumstance, humankind has trapped itself. A 1% GDP investment over a decade in the main nuclear countries could boost economic growth with a focus on the real world, epitomised by nuclear physics/chemistry/engineering/economics with well-defined targets. By investing vigorously to obtain scientific and technological breakthroughs, we can create the spring of a world economic rebound based on new ways of exploiting nuclear energy, both more safely and more durably.
196 - Oliver Hahn 2015
N-body simulations are essential for understanding the formation and evolution of structure in the Universe. However, the discrete nature of these simulations affects their accuracy when modelling collisionless systems. We introduce a new approach to simulate the gravitational evolution of cold collisionless fluids by solving the Vlasov-Poisson equations in terms of adaptively refinable Lagrangian phase space elements. These geometrical elements are piecewise smooth maps between Lagrangian space and Eulerian phase space and approximate the continuum structure of the distribution function. They allow for dynamical adaptive splitting to accurately follow the evolution even in regions of very strong mixing. We discuss in detail various one-, two- and three-dimensional test problems to demonstrate the performance of our method. Its advantages compared to N-body algorithms are: i) explicit tracking of the fine-grained distribution function, ii) natural representation of caustics, iii) intrinsically smooth gravitational potential fields, thus iv) eliminating the need for any type of ad-hoc force softening. We show the potential of our method by simulating structure formation in a warm dark matter scenario. We discuss how spurious collisionality and large-scale discreteness noise of N-body methods are both strongly suppressed, which eliminates the artificial fragmentation of filaments. Therefore, we argue that our new approach improves on the N-body method when simulating self-gravitating cold and collisionless fluids, and is the first method that allows one to explicitly follow the fine-grained evolution in six-dimensional phase space.
We use the Matryoshka run to study the time-dependent statistics of structure-formation-driven turbulence in the intracluster medium of a $10^{15}\,M_\odot$ galaxy cluster. We investigate the turbulent cascade in the inner Mpc for both compressional and incompressible velocity components. The flow maintains approximate conditions of fully developed turbulence, with departures therefrom settling in about an eddy-turnover time. The turbulent velocity dispersion remains above $700$ km s$^{-1}$ even at low mass-accretion rates, with the fraction of compressional energy between 10% and 40%. The normalisation and slope of the compressional turbulence are susceptible to large variations on short time scales, unlike the incompressible counterpart. A major merger occurs around redshift $z\simeq 0$ and is accompanied by a long period of enhanced turbulence, ascribed to temporal clustering of mass accretion related to spatial clustering of matter. We test models of stochastic acceleration by compressional modes for the origin of diffuse radio emission in galaxy clusters. The turbulence simulation model constrains an important unknown of this complex problem and brings forth its dependence on the elusive micro-physics of the intracluster plasma. In particular, the specifics of the plasma collisionality and the dissipation physics of weak shocks affect the cascade of compressional modes, with a strong impact on the acceleration rates. In this context, radio halos emerge as complex phenomena in which a hierarchy of processes acting on progressively smaller scales is at work. Stochastic acceleration by compressional modes implies a statistical correlation of radio power and spectral index with merging-core distance, both testable in principle with radio surveys.
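The split into compressional (curl-free) and incompressible (solenoidal) velocity components used in this abstract is a standard Helmholtz decomposition. The following is a minimal sketch of that technique, not the paper's actual analysis pipeline: it projects a periodic, gridded velocity field onto the longitudinal direction in Fourier space and returns the fraction of kinetic energy in the compressive part. The function name, grid conventions and `box_size` parameter are illustrative assumptions.

```python
import numpy as np

def compressive_fraction(vx, vy, vz, box_size=1.0):
    """Helmholtz-decompose a periodic gridded velocity field in Fourier
    space and return the fraction of kinetic energy in the compressive
    (curl-free, longitudinal) component. Illustrative sketch only."""
    n = vx.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid 0/0; the k=0 mode carries no direction

    vkx, vky, vkz = np.fft.fftn(vx), np.fft.fftn(vy), np.fft.fftn(vz)
    # longitudinal (compressive) part: projection of v(k) onto k-hat
    div = (kx * vkx + ky * vky + kz * vkz) / k2
    ckx, cky, ckz = kx * div, ky * div, kz * div

    # Parseval: energy ratios can be formed directly in Fourier space
    e_comp = np.sum(np.abs(ckx)**2 + np.abs(cky)**2 + np.abs(ckz)**2)
    e_tot = np.sum(np.abs(vkx)**2 + np.abs(vky)**2 + np.abs(vkz)**2)
    return e_comp / e_tot
```

A longitudinal plane wave (velocity along its own wavevector) should give a fraction of 1, and a transverse wave a fraction of 0, which provides a quick sanity check of the projection.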
Understanding the velocity field is very important for modern cosmology: it gives insight into structure formation in general, and its properties are crucial ingredients in modelling redshift-space distortions and in interpreting measurements of the kinetic Sunyaev-Zeldovich effect. Unfortunately, characterising the velocity field in cosmological N-body simulations is inherently complicated by two facts: i) The velocity field becomes manifestly multi-valued after shell-crossing and has discontinuities at caustics. This is due to the collisionless nature of dark matter. ii) N-body simulations sample the velocity field only at a set of discrete locations, with poor resolution in low-density regions. In this paper, we discuss how the associated problems can be circumvented by using a phase-space interpolation technique. This method provides extremely accurate estimates of the cosmic velocity field and its derivatives, which can be properly defined without the need for the arbitrary coarse-graining procedure commonly used. We explore in detail the configuration-space properties of the cosmic velocity field on very large scales and in the highly nonlinear regime. In particular, we characterise the divergence and curl of the velocity field, present their one-point statistics, analyse the Fourier-space properties and provide fitting formulae for the velocity divergence bias relative to the non-linear matter power spectrum. We furthermore contrast some of the interesting differences in the velocity fields of warm and cold dark matter models. We anticipate that the high-precision measurements carried out here will help to understand in detail the dynamics of dark matter and the structures it forms.
We define a financial bubble as a period of unsustainable growth, when the price of an asset increases ever more quickly, in a series of accelerating phases of corrections and rebounds. More technically, during a bubble phase, the price follows a faster-than-exponential power law growth process, often accompanied by log-periodic oscillations. This dynamic ends abruptly in a change of regime that may be a crash or a substantial correction. Because they leave such specific traces, bubbles may be recognised in advance, that is, before they burst. In this paper, we will explain the mechanism behind financial bubbles in an intuitive way. We will show how the log-periodic power law emerges spontaneously from the complex system that financial markets are, as a consequence of feedback mechanisms, hierarchical structure and specific trading dynamics and investment styles. We argue that the risk of a major correction, or even a crash, becomes substantial when a bubble develops towards maturity, and that it is therefore very important to find evidence of bubbles and to follow their development from as early a stage as possible. The tools that are explained in this paper actually serve that purpose. They are at the core of the Financial Crisis Observatory at the ETH Zurich, where tens of thousands of assets are monitored on a daily basis. This allows us to have a continuous overview of emerging bubbles in the global financial markets. The companion report available as part of the Notenstein white paper series (2014), with the title ``Financial bubbles: mechanism, diagnostic and state of the World (Feb. 2014)'', presents a practical application of the methodology outlined in this article and describes our view of the status concerning positive and negative bubbles in the financial markets, as of the end of January 2014.
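The log-periodic power law (LPPL) named in this abstract has a well-known closed form for the log-price during a bubble: ln p(t) = A + B(t_c - t)^m + C(t_c - t)^m cos(ω ln(t_c - t) + φ), with t < t_c, B < 0 and 0 < m < 1. The sketch below evaluates this formula for synthetic trajectories; it is a generic illustration of the functional form, not the Financial Crisis Observatory's calibration code, and the function name and parameter defaults are assumptions.

```python
import numpy as np

def lppl_log_price(t, tc, m, omega, A, B, C, phi):
    """Log-periodic power law for the log-price during a bubble:
    ln p(t) = A + B*(tc - t)^m + C*(tc - t)^m * cos(omega*ln(tc - t) + phi).
    Valid only for t < tc; with B < 0 and 0 < m < 1 the log-price
    accelerates faster than exponentially towards the critical time tc,
    decorated by log-periodic oscillations of angular log-frequency omega."""
    dt = tc - np.asarray(t, dtype=float)
    if np.any(dt <= 0):
        raise ValueError("LPPL is only defined for t < tc")
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) + phi)
```

In practice the nonlinear parameters (t_c, m, ω) are fitted to observed prices, and the proximity of the estimated t_c is one diagnostic of bubble maturity.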
152 - D. Sornette 2014
This short review presents a selected history of the mutual fertilization between physics and economics, from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the ``Emerging Market Intelligence hypothesis'' to reconcile the pervasive presence of ``noise traders'' with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets.
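The analogy drawn above between the logit model of random utility and the Boltzmann factor is easy to make concrete: choice probabilities P(i) ∝ exp(U_i / T), where the decision-noise level T plays the role of temperature. The snippet below is a minimal illustration of that correspondence, not code from the review; the function name is an assumption.

```python
import numpy as np

def logit_choice(utilities, noise_level):
    """Discrete-choice (logit) probabilities P(i) ∝ exp(U_i / T).
    The noise level T is the analog of temperature in the Boltzmann
    factor: T -> 0 gives deterministic utility maximisation, while
    large T washes out utility differences towards a uniform choice."""
    u = np.asarray(utilities, dtype=float) / noise_level
    u -= u.max()          # subtract the max for numerical stability
    w = np.exp(u)
    return w / w.sum()
```

The same softmax structure is what lets Ising-type models of interacting agents inherit the phase-transition phenomenology of statistical physics.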
We use N-body simulations of star cluster evolution to explore the hypothesis that short-lived radioactive isotopes found in meteorites, such as 26-Al, were delivered to the Sun's protoplanetary disc from a supernova at the epoch of Solar System formation. We cover a range of star cluster formation parameter space and model both clusters with primordial substructure, and those with smooth profiles. We also adopt different initial virial ratios - from cool, collapsing clusters to warm, expanding associations. In each cluster we place the same stellar population; the clusters each have 2100 stars, and contain one massive 25M_Sun star which is expected to explode as a supernova at about 6.6Myr. We determine the number of Solar (G)-type stars that are within 0.1 - 0.3pc of the 25M_Sun star at the time of the supernova, which is the distance required to enrich the protoplanetary disc with the 26-Al abundances found in meteorites. We then determine how many of these G-dwarfs are unperturbed `singletons': stars which are never in close binaries, nor suffer sub-100au encounters, and which also do not suffer strong dynamical perturbations. The evolution of a suite of twenty initially identical clusters is highly stochastic, with the supernova enriching over 10 G-dwarfs in some clusters, and none at all in others. Typically only ~25 per cent of clusters contain enriched, unperturbed singletons, and usually only 1 - 2 per cluster (from a total of 96 G-dwarfs in each cluster). The initial conditions for star formation do not strongly affect the results, although a higher fraction of supervirial (expanding) clusters would contain enriched G-dwarfs if the supernova occurred earlier than 6.6Myr. If we sum together simulations with identical initial conditions, then ~1 per cent of all G-dwarfs in our simulations are enriched, unperturbed singletons.
237 - Michael Dittmar 2013
The scientific data about the state of our planet, presented at the 2012 (Rio+20) summit, documented that today's human family lives even less sustainably than it did in 1992. The data indicate furthermore that the environmental impacts from our current economic activities are so large that we are approaching situations where potentially controllable regional problems can easily lead to uncontrollable global disasters. Assuming that (1) the majority of the human family, once adequately informed, wants to achieve a sustainable way of life and (2) that the roadmap for development towards sustainability will be based on scientific principles, one must begin with unambiguous and quantifiable definitions of these goals. As will be demonstrated, the well-known scientific method of defining abstract and complex issues by their negation satisfies these requirements. Following this new approach, it also becomes possible to decide if proposed and actual policy changes will make our way of life less unsustainable, and thus move us potentially in the direction of sustainability. Furthermore, if potentially dangerous tipping points are to be avoided, the transition roadmap must include some minimal speed requirements. Combining the negation method and the time evolution of the remaining natural capital in different domains, the transition speed for a development towards sustainability can be quantified at local, regional and global scales. The presented ideas allow us to measure the rate of natural capital depletion and the rate of restoration that will be required if humanity is to avoid reaching a sustainable future by way of a collapse transition.
We investigate the problem of predicting the halo mass function from the properties of the Lagrangian density field. We focus on a perturbation spectrum with a small-scale cut-off (as in warm dark matter cosmologies). This cut-off results in a strong suppression of low mass objects, providing additional leverage to rigorously test which perturbations collapse and to what mass. We find that all haloes are consistent with forming near peaks of the initial density field, with a strong correlation between proto-halo density and ellipticity. We demonstrate that, while standard excursion set theory with correlated steps completely fails to reproduce the mass function, the inclusion of the peaks constraint leads to the correct number of haloes but significantly underpredicts the masses of low-mass objects (with the predicted halo mass function at low masses behaving like dn/dln m ~ m^{2/3}). This prediction is very robust and cannot be easily altered within the framework of a single collapse barrier. The nature of collapse in the presence of a small-scale cut-off thus reveals that excursion set calculations require a more detailed understanding of the collapse-time of a general ellipsoidal perturbation to predict the ultimate collapsed mass of a peak -- a problem that has been hidden in the large abundance of small-scale structure in CDM. We demonstrate how this problem can be resolved within the excursion set framework.