Understanding the universe is hampered by the elusiveness of its most common constituent, cold dark matter. Almost impossible to observe, dark matter can be studied effectively by means of simulation, and there is probably no other research field where simulation has led to so much progress in the last decade. Cosmological N-body simulations are an essential tool for evolving density perturbations in the nonlinear regime. Simulating the formation of large-scale structures in the universe, however, is still a challenge due to the enormous dynamic range in spatial and temporal coordinates, and the enormous computer resources required. The dynamic range is generally dealt with by hybridizing numerical techniques. We address the computational requirements by connecting two supercomputers via an optical network and making them operate as a single machine. This is challenging, if only because the supercomputers of our choice are separated by half the planet: one is located in Amsterdam and the other in Tokyo. The co-scheduling of the two computers and the gridification of the code enable us to achieve 90% efficiency for this distributed intercontinental supercomputer.
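The computational core that such simulations distribute is the gravitational force evaluation. As a minimal sketch (illustrative Python, not the authors' code), here is the O(N^2) direct-summation kernel that tree and particle-mesh hybrids approximate, with Plummer softening and G = 1:

```python
import numpy as np

def accelerations(pos, mass, soft=0.01):
    """Direct-summation gravitational accelerations (G = 1) with Plummer
    softening. Illustrative sketch of the O(N^2) kernel that hybrid
    tree/particle-mesh methods replace at scale."""
    # Pairwise separation vectors d[i, j] = pos[j] - pos[i], shape (N, N, 3)
    d = pos[None, :, :] - pos[:, None, :]
    r2 = (d ** 2).sum(axis=-1) + soft ** 2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)  # exclude self-interaction
    # acc[i] = sum_j m_j * d[i, j] / r_ij^3
    return (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)
```

In practice a hybrid scheme evaluates nearby interactions with a kernel like this and distant ones with a tree or mesh approximation, which is what makes splitting the particle set across two machines feasible.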
We introduce a new set of large N-body runs, the MICE simulations, that provide a unique combination of very large cosmological volumes with good mass resolution. They follow the gravitational evolution of ~8.5 billion particles (2048^3) in volumes covering up to 450 (Gpc/h)^3. Our main goal is to accurately model and calibrate basic cosmological probes that will be used by upcoming astronomical surveys. Here we take advantage of the very large volumes of MICE to obtain a robust sampling of the high-mass tail of the halo mass function (MF). We discuss and avoid possible systematic effects in our study, and perform a detailed analysis of different error estimators. We find that available fits to the local abundance of halos (Warren et al. 2006) match the abundance in MICE well up to M ~ 10^{14} Msun, but deviate significantly at larger masses, underestimating the mass function by 10% (30%) at M = 3.16 x 10^{14} Msun (10^{15} Msun). Similarly, the widely used Sheth & Tormen (1999) fit, if extrapolated to high redshift assuming universality, underestimates the cluster abundance by 30%, 20% and 15% at z = 0, 0.5, 1 for M ~ [7, 2.5, 0.8] x 10^{14} Msun respectively ($\nu = \delta_c/\sigma \sim 3$). We provide a re-calibration of the halo MF valid over 5 orders of magnitude in mass, 10^{10} < M/Msun < 10^{15}, that accurately describes its redshift evolution up to z = 1. We explore the impact of this re-calibration on the determination of dark energy, and conclude that using available fits may systematically bias the estimate of w by as much as 50% for medium-depth (z <= 1) surveys. MICE halo catalogues are publicly available at http://www.ice.cat/mice
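The Sheth & Tormen (1999) fit mentioned above is a closed-form multiplicity function of the peak height nu = delta_c/sigma(M, z). A hedged sketch of its evaluation follows; the parameter values (a = 0.707, p = 0.3, A ~ 0.3222) are the commonly quoted ones and are an assumption here, since the abstract does not state them:

```python
import math

def sheth_tormen(nu, a=0.707, p=0.3, A=0.3222):
    """Sheth & Tormen (1999) multiplicity function f(nu), where
    nu = delta_c / sigma(M, z). Parameter values are the standard
    published ones, assumed here for illustration."""
    anu2 = a * nu * nu
    return (A * math.sqrt(2.0 * anu2 / math.pi)
            * (1.0 + anu2 ** (-p))
            * math.exp(-anu2 / 2.0))
```

At nu ~ 3, the regime the abstract highlights, f(nu) is steeply falling, which is why small calibration errors in the fit translate into large fractional errors in the predicted cluster abundance.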
We present the data and initial results from the first Pilot Survey of the Evolutionary Map of the Universe (EMU), observed at 944 MHz with the Australian Square Kilometre Array Pathfinder (ASKAP) telescope. The survey covers 270 deg^2 of an area covered by the Dark Energy Survey, reaching a depth of 25--30 uJy/beam rms at a spatial resolution of ~11--18 arcsec, resulting in a catalogue of ~220,000 sources, of which ~180,000 are single-component sources. Here we present the catalogue of single-component sources, together with (where available) optical and infrared cross-identifications, classifications, and redshifts. This survey explores a new region of parameter space compared to previous surveys: the EMU Pilot Survey combines a high density of sources with a high sensitivity to low surface-brightness emission. These properties result in the detection of types of sources that were rarely seen in, or absent from, previous surveys. We present some of these new results here.
We introduce the Illustris Project, a series of large-scale hydrodynamical simulations of galaxy formation. The highest resolution simulation, Illustris-1, covers a volume of $(106.5\,{\rm Mpc})^3$, has a dark matter mass resolution of $6.26 \times 10^{6}\,{\rm M}_\odot$, and an initial baryonic matter mass resolution of $1.26 \times 10^{6}\,{\rm M}_\odot$. At $z=0$ gravitational forces are softened on scales of $710\,{\rm pc}$, and the smallest hydrodynamical gas cells have an extent of $48\,{\rm pc}$. We follow the dynamical evolution of $2\times 1820^3$ resolution elements and, in addition, passively evolve $1820^3$ Monte Carlo tracer particles, reaching a total particle count of more than $18$ billion. The galaxy formation model includes: primordial and metal-line cooling with self-shielding corrections, stellar evolution, stellar feedback, gas recycling, chemical enrichment, supermassive black hole growth, and feedback from active galactic nuclei. At $z=0$ our simulation volume contains about $40,000$ well-resolved galaxies covering a diverse range of morphologies and colours, including early-type, late-type and irregular galaxies. The simulation reproduces reasonably well the cosmic star formation rate density, the galaxy luminosity function, and the baryon conversion efficiency at $z=0$. It also qualitatively captures the impact of galaxy environment on the red fractions of galaxies. The internal velocity structure of selected well-resolved disk galaxies obeys the stellar and baryonic Tully-Fisher relations, together with flat circular velocity curves. In the well-resolved regime the simulation reproduces the observed mix of early-type and late-type galaxies. Our model predicts a halo-mass-dependent impact of baryonic effects on the halo mass function and on halo masses, caused by feedback from supernovae and active galactic nuclei.
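The quoted resolution-element counts can be checked with trivial bookkeeping (a sketch of the arithmetic only, not part of the Illustris code):

```python
# Illustris-1 element counts as quoted in the abstract:
# 2 x 1820^3 dynamical elements (dark matter particles + gas cells)
# plus 1820^3 passive Monte Carlo tracer particles.
n_side = 1820
dynamical = 2 * n_side ** 3
tracers = n_side ** 3
total = dynamical + tracers
print(total)  # slightly above 18 billion, matching "more than 18 billion"
```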
We present a newly developed software package that implements a wide range of routines frequently used in Weak Gravitational Lensing (WL). With the continuously increasing size of the WL scientific community, we feel that easy-to-use Application Program Interfaces (APIs) for common calculations are a necessity to ensure efficiency and coordination across different working groups. Coupled with existing open-source codes, such as CAMB and Gadget2, LensTools brings together a cosmic shear simulation pipeline which, complemented with a variety of WL feature measurement tools and parameter sampling routines, provides easy access to the numerics for theoretical studies of WL as well as for experiment forecasts. Being implemented in Python, LensTools takes full advantage of a range of state-of-the-art techniques developed by the large and growing open-source software community (scipy, pandas, astropy, scikit-learn, emcee). We have made the LensTools code available on the Python Package Index and published its documentation at http://lenstools.readthedocs.io
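As an illustration of the kind of WL feature measurement such a pipeline provides, here is a self-contained sketch of an azimuthally binned convergence power spectrum. The function name and signature are hypothetical and deliberately do not attempt to reproduce the actual LensTools API:

```python
import numpy as np

def convergence_power_spectrum(kappa, side_deg, n_bins=10):
    """Flat-sky, azimuthally binned power spectrum C_ell of a square
    convergence map. Illustrative sketch only -- not the LensTools API."""
    n = kappa.shape[0]
    side_rad = np.deg2rad(side_deg)
    # FFT estimator: C_ell ~ (A / N^4) |kappa_tilde|^2, with A the map area
    power2d = np.abs(np.fft.fft2(kappa)) ** 2 * side_rad ** 2 / n ** 4
    # Multipole ell = 2*pi * (spatial frequency in cycles per radian)
    f = np.fft.fftfreq(n, d=side_rad / n)
    ell = 2.0 * np.pi * np.hypot(*np.meshgrid(f, f, indexing="ij"))
    # Average |kappa_tilde|^2 in annular ell bins (ell = 0 mode excluded)
    bins = np.linspace(ell[ell > 0].min(), ell.max() / 2, n_bins + 1)
    idx = np.digitize(ell.ravel(), bins)
    p = power2d.ravel()
    cl = np.array([p[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, n_bins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), cl
```

A feature measurement like this, applied to simulated shear maps and fed into a sampler such as emcee, is the basic loop the abstract describes for parameter forecasts.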
Curiously, our Universe was born in a low-entropy state, with abundant free energy to power stars and life. This free energy is usually thought to be gravitational: the Universe is almost perfectly smooth, and so can produce sources of energy as matter collapses under gravity. It has recently been argued that a more important source of low-entropy energy is nuclear: the Universe expands too fast to remain in nuclear statistical equilibrium (NSE), effectively shutting off nucleosynthesis in the first few minutes and providing leftover hydrogen as fuel for stars. Here, we fill in the astrophysical details of this scenario and seek the conditions under which a Universe will emerge from early nucleosynthesis as almost pure iron. In so doing, we identify a hitherto-overlooked character in the story of the origin of the second law: matter-antimatter asymmetry.