
How well do cosmological simulations reproduce individual-halo properties?

Published by: Michele Trenti
Publication date: 2010
Research field: Physics
Language: English
Author: M. Trenti





Cosmological simulations of galaxy formation often rely on prescriptions for star formation and feedback that depend on halo properties such as halo mass, central over-density, and virial temperature. In this paper we address the convergence of individual halo properties, based on their number of particles N, focusing in particular on the mass of halos near the resolution limit of a simulation. While it has been established that the halo mass function is sampled on average down to N~30 particles, we show that individual halo properties exhibit significant scatter, and some systematic biases, as one approaches the resolution limit. We carry out a series of cosmological simulations using the Gadget2 and Enzo codes with N_p=64^3 to N_p=1024^3 total particles, keeping the same large-scale structure in the simulation box. We consider boxes from l_{box} = 8 Mpc/h to l_{box} = 512 Mpc/h to probe different halo masses and formation redshifts. We cross-identify dark matter halos in boxes at different resolutions and measure the scatter in their properties. The uncertainty in the mass of a single halo depends on the number of particles (scaling approximately as N^{-1/3}), but the rarer the density peak, the more robust its identification. The virial radius of halos is very stable and can be measured without bias for halos with N>30. In contrast, the average density within a sphere containing 25% of the total halo mass is severely underestimated (by more than a factor of 2) and the halo spin is moderately overestimated for N<100. When sub-grid physics is implemented on top of a cosmological simulation, we recommend that rare halos (~3sigma peaks) be resolved with N>100 particles and common halos (~1sigma peaks) with N>400 particles to avoid excessive numerical noise and possible systematic biases in the results.
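The recommendation above translates directly into a minimum well-resolved halo mass for a given box size and particle count. The sketch below illustrates this arithmetic; the function names and the cosmological parameter values (Omega_m = 0.3) are illustrative assumptions, while the N > 100 and N > 400 thresholds are taken from the abstract.

```python
# Illustrative sketch of the paper's resolution recommendation.
# Assumes a flat cosmology with Omega_m = 0.3; masses are in Msun/h,
# lengths in Mpc/h. Function names are hypothetical, not from the paper.

RHO_CRIT = 2.775e11  # critical density of the universe, h^2 Msun / Mpc^3

def particle_mass(l_box, n_particles, omega_m=0.3):
    """Dark matter particle mass (Msun/h) for a periodic box of side
    l_box (Mpc/h) sampled with n_particles equal-mass particles."""
    rho_m = omega_m * RHO_CRIT              # mean matter density
    return rho_m * l_box**3 / n_particles

def min_resolved_halo_mass(l_box, n_particles, rare_peak=True):
    """Smallest halo mass (Msun/h) meeting the recommended particle
    threshold: N > 100 for rare (~3-sigma) peaks, N > 400 for common
    (~1-sigma) peaks."""
    n_min = 100 if rare_peak else 400
    return n_min * particle_mass(l_box, n_particles)

# Example: the highest-resolution small box quoted in the abstract.
m_p = particle_mass(8.0, 1024**3)
print(f"particle mass: {m_p:.2e} Msun/h")
print(f"min halo mass (rare):   {min_resolved_halo_mass(8.0, 1024**3):.2e}")
print(f"min halo mass (common): {min_resolved_halo_mass(8.0, 1024**3, rare_peak=False):.2e}")
```

For a fixed box, the minimum resolvable halo mass scales inversely with the total particle number, which is why the abstract's box-size range (8 to 512 Mpc/h) is needed to probe different halo masses.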




Read also

Cosmological N-body simulations have been a major tool of theorists for decades, yet many of the numerical issues that these simulations face are still unexplored. This paper measures numerical biases in these large, dark matter-only simulations that affect the properties of their dark matter haloes. We compare many simulation suites in order to provide several tools for simulators and analysts which help mitigate these biases. We summarise our comparisons with practical "convergence limits" that can be applied to a wide range of halo properties, including halo properties which are traditionally overlooked by the testing literature. We also find that the halo properties predicted by different simulations can diverge from one another at unexpectedly high resolutions. We demonstrate that many halo properties depend strongly on force softening scale and that this dependence leads to much of the measured divergence between simulations. We offer an empirical model to estimate the impact of such effects on the rotation curves of a halo population. This model can serve as a template for future empirical models of the biases in other halo properties.
State-of-the-art summarization systems are trained and evaluated on massive datasets scraped from the web. Despite their prevalence, we know very little about the underlying characteristics (data noise, summarization complexity, etc.) of these datasets, and how these affect system performance and the reliability of automatic metrics like ROUGE. In this study, we manually analyze 600 samples from three popular summarization datasets. Our study is driven by a six-class typology which captures different noise types (missing facts, entities) and degrees of summarization difficulty (extractive, abstractive). We follow with a thorough analysis of 27 state-of-the-art summarization models and 5 popular metrics, and report our key insights: (1) Datasets have distinct data quality and complexity distributions, which can be traced back to their collection process. (2) The performance of models and reliability of metrics is dependent on sample complexity. (3) Faithful summaries often receive low scores because of the poor diversity of references. We release the code, annotated data and model outputs.
P. Anders 2009
N-body simulations are widely used to simulate the dynamical evolution of a variety of systems, among them star clusters. Much of our understanding of their evolution rests on the results of such direct N-body simulations. They provide insight into the structural evolution of star clusters, as well as into the occurrence of stellar exotica. Although the major pure N-body codes STARLAB/KIRA and NBODY4 are widely used for a range of applications, there is no thorough comparison study yet. Here we thoroughly compare basic quantities as derived from simulations performed either with STARLAB/KIRA or NBODY4. We construct a large number of star cluster models for various stellar mass function settings (but without stellar/binary evolution, primordial binaries, external tidal fields etc), evolve them in parallel with STARLAB/KIRA and NBODY4, analyse them in a consistent way and compare the averaged results quantitatively. For this quantitative comparison we develop a bootstrap algorithm for functional dependencies. We find an overall excellent agreement between the codes, both for the clusters' structural and energy parameters as well as for the properties of the dynamically created binaries. However, we identify small differences, like in the energy conservation before core collapse and the energies of escaping stars, which deserve further studies. Our results confirm the comparability of, and the possibility of combining, results from these two major N-body codes, at least for the purely dynamical models (i.e. without stellar/binary evolution) we performed. (abridged)
We perform a suite of multimass cosmological zoom simulations of individual dark matter halos and explore how to best select Lagrangian regions for resimulation without contaminating the halo of interest with low-resolution particles. Such contamination can lead to significant errors in the gas distribution of hydrodynamical simulations, as we show. For a fixed Lagrange volume, we find that the chance of contamination increases systematically with the level of zoom. In order to avoid contamination, the Lagrangian volume selected for resimulation must increase monotonically with the resolution difference between parent box and the zoom region. We provide a simple formula for selecting Lagrangian regions (in units of the halo virial volume) as a function of the level of zoom required. We also explore the degree to which a halo's Lagrangian volume correlates with other halo properties (concentration, spin, formation time, shape, etc.) and find no significant correlation. There is a mild correlation between Lagrange volume and environment, such that halos living in the most clustered regions have larger Lagrangian volumes. Nevertheless, selecting halos to be isolated is not the best way to ensure inexpensive zoom simulations. We explain how one can safely choose halos with the smallest Lagrangian volumes, which are the least expensive to resimulate, without biasing one's sample.
P. Anders 2012
Most recent progress in understanding the dynamical evolution of star clusters relies on direct N-body simulations. Owing to the computational demands, and the desire to model more complex and more massive star clusters, hardware calculational accelerators, such as GRAPE special-purpose hardware or, more recently, GPUs (i.e. graphics cards), are generally utilised. In addition, simulations can be accelerated by adjusting parameters determining the calculation accuracy (i.e. changing the internal simulation time step used for each star). We extend our previous thorough comparison (Anders et al. 2009) of basic quantities as derived from simulations performed either with STARLAB/KIRA or NBODY6. Here we focus on differences arising from using different hardware accelerations (including the increasingly popular graphics card accelerations/GPUs) and different calculation accuracy settings. We use the large number of star cluster models (for a fixed stellar mass function, without stellar/binary evolution, primordial binaries, external tidal fields etc) already used in the previous paper, evolve them with STARLAB/KIRA (and NBODY6, where required), analyse them in a consistent way and compare the averaged results quantitatively. For this quantitative comparison, we apply the bootstrap algorithm for functional dependencies developed in our previous study. In general we find very high comparability of the simulation results, independent of the computer hardware used (including the hardware accelerators) and the N-body code used. For the tested accuracy settings we find that for reduced accuracy (i.e. time step at least a factor of 2.5 larger than the standard setting) most simulation results deviate significantly from the results using standard settings. The remaining deviations are comprehensible and explicable.