This paper describes the open-source code Enzo, which uses block-structured adaptive mesh refinement to provide high spatial and temporal resolution for modeling astrophysical fluid flows. The code is Cartesian, can be run in 1, 2, and 3 dimensions, and supports a wide variety of physics including hydrodynamics, ideal and non-ideal magnetohydrodynamics, N-body dynamics (and, more broadly, self-gravity of fluids and particles), primordial gas chemistry, optically-thin radiative cooling of primordial and metal-enriched plasmas (as well as some optically-thick cooling models), radiation transport, cosmological expansion, and models for star formation and feedback in a cosmological context. In addition to explaining the algorithms implemented, we present solutions for a wide range of test problems, demonstrate the code's parallel performance, and discuss the Enzo collaboration's code development methodology.
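The block-structured refinement strategy mentioned above can be illustrated with a minimal, hypothetical sketch: cells are flagged for refinement when a field (here, density) exceeds a level-dependent threshold, in the spirit of the overdensity criteria commonly used in cosmological AMR codes. The function name and parameter values below are illustrative assumptions, not Enzo's actual interface.

import numpy as np

def flag_cells_for_refinement(density, level, rho_thresh=8.0, refine_factor=2):
    # Illustrative only: a density-threshold refinement criterion of the kind
    # used in block-structured AMR codes; not Enzo's actual API.
    # Each level tightens the threshold by refine_factor**(3*level) so that
    # roughly constant mass per cell is maintained under refinement.
    threshold = rho_thresh * refine_factor ** (3 * level)
    return density > threshold

# Example: a coarse 8^3 grid with one overdense cell that would be refined.
rng = np.random.default_rng(0)
rho = rng.uniform(0.5, 1.5, size=(8, 8, 8))
rho[4, 4, 4] = 100.0
flags = flag_cells_for_refinement(rho, level=0)
print(flags.sum(), "cell(s) flagged for refinement")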
Stephen Skory (2010)
Modern N-body cosmological simulations contain billions ($10^9$) of dark matter particles. These simulations require hundreds to thousands of gigabytes of memory, and employ hundreds to tens of thousands of processing cores on many compute nodes. In order to study the distribution of dark matter in a cosmological simulation, the dark matter halos must be identified using a halo finder, which establishes the halo membership of every particle in the simulation. The resources required for halo finding are similar to the requirements for the simulation itself. In particular, simulations have grown too large for commonly employed serial halo finders, so the computational work of identifying halos must now be spread across multiple nodes and cores. Here we present a scalable, parallel halo-finding method called Parallel HOP for large-scale cosmological simulation data. Based on the halo finder HOP, it utilizes MPI and domain decomposition to distribute the halo finding workload across multiple compute nodes, enabling analysis of much larger datasets than is possible with the strictly serial or previous parallel implementations of HOP. We provide a reference implementation of this method as a part of the toolkit yt, an analysis toolkit for Adaptive Mesh Refinement (AMR) data that includes complementary analysis modules. Additionally, we discuss a suite of benchmarks that demonstrate that this method scales well up to several hundred tasks and datasets in excess of $2000^3$ particles. The Parallel HOP method and our implementation can be readily applied to any kind of N-body simulation data and are therefore widely applicable.
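As a rough illustration of the domain-decomposition idea described above, the sketch below splits a particle distribution into slabs across MPI tasks, estimates a per-particle density from the distance to the k-th nearest neighbor, and applies a HOP-like overdensity cut. The box setup, neighbor count, and threshold are assumptions for illustration; a real Parallel HOP run also exchanges "padding" particles across slab boundaries and links density peaks into halos, which this sketch omits.

# Minimal sketch of MPI domain decomposition for density-based halo finding.
# Run with, e.g.:  mpiexec -n 4 python this_script.py
import numpy as np
from mpi4py import MPI
from scipy.spatial import cKDTree

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Slab decomposition of a unit box along x: each task owns one slab.
x_lo, x_hi = rank / size, (rank + 1) / size
rng = np.random.default_rng(rank)
pos = rng.uniform(0.0, 1.0, size=(100_000, 3))
pos[:, 0] = x_lo + pos[:, 0] * (x_hi - x_lo)

# Local density estimate from the distance to the k-th nearest neighbor.
k = 32
tree = cKDTree(pos)
dist, _ = tree.query(pos, k=k)
rho = k / (4.0 / 3.0 * np.pi * dist[:, -1] ** 3)

# HOP-style cut: keep particles above an overdensity threshold (value assumed).
mean_rho = len(pos) / ((x_hi - x_lo) * 1.0 * 1.0)
local_members = int((rho > 80.0 * mean_rho).sum())

total = comm.reduce(local_members, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{total} particles above the density threshold across {size} tasks")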
The median observed velocity width v_90 of low-ionization species in damped Ly-alpha systems is close to 90 km/s, with approximately 10% of all systems showing v_90 > 210 km/s at z=3. We show that a relative shortage of such high-velocity neutral gas absorbers in state-of-the-art galaxy formation models is a fundamental problem, present in both grid-based and particle-based numerical simulations. Using a series of numerical simulations of varying resolution and box size to cover a wide range of halo masses, we demonstrate that energy from gravitational infall alone is insufficient to produce the velocity dispersion observed in damped Ly-alpha systems, nor does this dispersion arise from our implementation of star formation and feedback in our highest-resolution (~45 pc) models unless galactic winds are added to the models by hand. We argue that these numerical experiments highlight the need to separate the dynamics of the different components of the multiphase interstellar medium at z=3.
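For reference, the v_90 statistic used above is conventionally the velocity interval enclosing the central 90% of a line's integrated optical depth. The short snippet below computes it for an illustrative Gaussian optical-depth profile; the profile itself is an assumption, not data from the paper.

import numpy as np

def v90(velocity, tau):
    # Velocity width containing the central 90% of the integrated optical depth.
    cum = np.cumsum(tau)
    cum /= cum[-1]
    v_lo = velocity[np.searchsorted(cum, 0.05)]
    v_hi = velocity[np.searchsorted(cum, 0.95)]
    return v_hi - v_lo

# Illustrative profile: a single Gaussian component with sigma = 30 km/s.
v = np.linspace(-300.0, 300.0, 2001)       # km/s
tau = np.exp(-0.5 * (v / 30.0) ** 2)
print(f"v_90 = {v90(v, tau):.1f} km/s")    # ~ 2 * 1.645 * 30 ~ 99 km/s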