
Fast Large-Scale Reionization Simulations

Posted by: Rajat Thomas
Publication date: 2008
Research field: Physics
Paper language: English





We present an efficient method to generate large simulations of the Epoch of Reionization (EoR) without the need for a full 3-dimensional radiative transfer code. Large dark-matter-only simulations are post-processed to produce maps of the redshifted 21cm emission from neutral hydrogen. Dark matter haloes are embedded with sources of radiation whose properties are either based on semi-analytical prescriptions or derived from hydrodynamical simulations. These sources could either be stars or power-law sources with varying spectral indices. Assuming spherical symmetry, ionized bubbles are created around these sources, whose radial ionized fraction and temperature profiles are derived from a catalogue of 1-D radiative transfer experiments. In case of overlap of these spheres, photons are conserved by redistributing them around the connected ionized regions corresponding to the spheres. The efficiency with which these maps are created allows us to span the large parameter space typically encountered in reionization simulations. We compare our results with other, more accurate, 3-D radiative transfer simulations and find excellent agreement for the redshifts and the spatial scales of interest to upcoming 21cm experiments. We generate a contiguous observational cube spanning redshift 6 to 12 and use these simulations to study the differences in the reionization histories between stars and quasars. Finally, the signal is convolved with the LOFAR beam response and its effects are analyzed and quantified. Statistics performed on this mock data set shed light on possible observational strategies for LOFAR.
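The photon-conservation step described above can be illustrated with a toy sketch: each source carves out a sphere containing as many ionized atoms as it emitted photons, and when two spheres overlap their photon budgets are pooled over the connected region. This is a hypothetical, heavily simplified 1-source-pair version (no recombinations, uniform density); the paper instead draws radial ionization and temperature profiles from tabulated 1-D radiative transfer runs.

```python
import numpy as np

def stromgren_radius(n_gamma, n_H):
    """Radius of a sphere that ionizes exactly n_gamma / n_H atoms.
    Toy model: uniform hydrogen density n_H, recombinations ignored."""
    return (3.0 * n_gamma / (4.0 * np.pi * n_H)) ** (1.0 / 3.0)

def merge_overlapping(r1, r2, separation):
    """Photon conservation on overlap: if the two ionized spheres
    intersect, redistribute their photons over the connected region
    by returning the radius of a single sphere whose volume equals
    the sum of the two individual volumes. Returns None if the
    spheres do not overlap (nothing to redistribute)."""
    if separation >= r1 + r2:
        return None
    v_total = (4.0 / 3.0) * np.pi * (r1**3 + r2**3)
    return (3.0 * v_total / (4.0 * np.pi)) ** (1.0 / 3.0)
```

In the paper the redistribution is done over arbitrarily shaped connected ionized regions rather than a single effective sphere, but the conserved quantity is the same: total ionized volume per photon emitted.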


Read also

M. G. Santos (2009)
While limited to low spatial resolution, the next generation of low-frequency radio interferometers that target 21 cm observations during and prior to the era of reionization will have instantaneous fields-of-view spanning many tens of square degrees on the sky. Predictions related to various statistical measurements of the 21 cm brightness temperature must then be pursued with numerical simulations of reionization with correspondingly large volume box sizes, of order 1000 Mpc on a side. We pursue a semi-numerical scheme to simulate the 21 cm signal during and prior to reionization by extending a hybrid approach in which simulations are performed by first laying down the linear dark matter density field, accounting for the non-linear evolution of the density field based on second-order linear perturbation theory as specified by the Zeldovich approximation, and then specifying the location and mass of collapsed dark matter halos using the excursion-set formalism. The location of ionizing sources and the time-evolving distribution of the ionization field are also specified using an excursion-set algorithm. We account for the brightness temperature evolution through the coupling between spin and gas temperature due to collisions, radiative coupling in the presence of Lyman-alpha photons, and heating of the intergalactic medium, such as by a background of X-ray photons. The hybrid simulation method we present is capable of producing the required large-volume simulations with adequate resolution in a reasonable time, so a large number of realizations can be obtained with variations in the assumptions related to astrophysics and background cosmology that govern the 21 cm signal.
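The excursion-set step above can be sketched in a few lines: a cell is flagged ionized if the collapse fraction averaged within any smoothing scale exceeds the inverse ionizing efficiency. This is a hypothetical 1-D toy (top-hat filtering via convolution, integer grid radii); real codes work on 3-D fields with FFT-based filtering at a hierarchy of scales.

```python
import numpy as np

def ionization_field(f_coll, zeta, radii):
    """Toy excursion-set ionization map.
    A cell is ionized if the collapse fraction f_coll, averaged over
    a top-hat window of ANY radius R in `radii`, satisfies
    <f_coll>_R >= 1 / zeta  (zeta = ionizing efficiency)."""
    ionized = np.zeros_like(f_coll, dtype=bool)
    for R in radii:
        kernel = np.ones(2 * R + 1) / (2 * R + 1)  # 1-D top-hat filter
        smoothed = np.convolve(f_coll, kernel, mode="same")
        ionized |= smoothed >= 1.0 / zeta
    return ionized
```

Scanning from the largest radius down ensures that a cell inside a large ionized bubble is flagged even if its local collapse fraction is small, which is what produces the characteristic bubble morphology of these semi-numerical maps.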
We present simulations of cosmic reionization and reheating from $z=18$ to $z=5$, investigating the role of stars (emitting soft UV photons), nuclear black holes (BHs, with power-law spectra), X-ray binaries (XRBs, with hard X-ray dominated spectra), and the supernova-associated thermal bremsstrahlung of the diffuse interstellar medium (ISM, with soft X-ray spectra). We post-process the hydrodynamical simulation Massive-Black II (MBII) with multifrequency ionizing radiative transfer. The source properties are directly derived from the physical environment of MBII, and our only real free parameter is the ionizing escape fraction $f_{\rm esc}$. We find that, among the models explored here, the one with an escape fraction that decreases with decreasing redshift yields results most in line with observations, such as of the neutral hydrogen fraction and the Thomson scattering optical depth. Stars are the main driver of hydrogen reionization and consequently of the thermal history of the intergalactic medium (IGM). We obtain $\langle x_{\rm HII} \rangle = 0.99998$ at $z=6$ for all source types, with volume-averaged temperatures $\langle T \rangle \sim 20{,}000~{\rm K}$. BHs are rare and negligible to hydrogen reionization, but conversely they are the only sources which can fully ionize helium, increasing local temperatures by $\sim 10^4~{\rm K}$. The thermal and ionization state of the neutral and lowly ionized hydrogen differs significantly with different source combinations, with the ISM and (to a lesser extent) XRBs playing a significant role and, as a consequence, determining the transition from absorption to emission of the 21 cm signal from neutral hydrogen.
The giant impact (GI) is one of the most important hypotheses in both planetary science and geoscience, since it is related to the origin of the Moon and also to the initial condition of the Earth. A number of numerical simulations have been performed using the smoothed particle hydrodynamics (SPH) method. However, the GI hypothesis is currently in crisis. The canonical GI scenario fails to explain the identical isotope ratios between the Earth and the Moon. On the other hand, little is known about the reliability of the results of GI simulations. In this paper, we discuss the effect of resolution on the results of GI simulations by varying the number of particles from $3 \times 10^3$ to $10^8$. We found that the results do not converge, but instead show oscillatory behaviour. We discuss the origin of this oscillatory behaviour.
To exploit the power of next-generation large-scale structure surveys, ensembles of numerical simulations are necessary to give accurate theoretical predictions of the statistics of observables. High-fidelity simulations come at a towering computational cost. Therefore, approximate but fast simulations, surrogates, are widely used to gain speed at the price of introducing model error. We propose a general method that exploits the correlation between simulations and surrogates to compute fast, reduced-variance statistics of large-scale structure observables without model error at the cost of only a few simulations. We call this approach Convergence Acceleration by Regression and Pooling (CARPool). In numerical experiments with intentionally minimal tuning, we apply CARPool to a handful of GADGET-III $N$-body simulations paired with surrogates computed using COmoving Lagrangian Acceleration (COLA). We find $\sim 100$-fold variance reduction even in the non-linear regime, up to $k_{\rm max} \approx 1.2~h~{\rm Mpc}^{-1}$ for the matter power spectrum. CARPool realises similar improvements for the matter bispectrum. In the nearly linear regime CARPool attains far larger sample variance reductions. By comparing to the 15,000 simulations from the Quijote suite, we verify that the CARPool estimates are unbiased, as guaranteed by construction, even though the surrogate misses the simulation truth by up to $60\%$ at high $k$. Furthermore, even with a fully configuration-space statistic like the non-linear matter density probability density function, CARPool achieves unbiased variance reduction factors of up to $\sim 10$, without any further tuning. Conversely, CARPool can be used to remove model error from ensembles of fast surrogates by combining them with a few high-accuracy simulations.
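The regression-and-pooling idea at the heart of CARPool is the classical control-variates estimator: pair each expensive simulation output $y$ with a cheap surrogate output $c$ whose mean $\mu_c$ is known (from a large ensemble of surrogates), and subtract the correlated fluctuations. A minimal scalar sketch, assuming the paired samples and surrogate mean are available:

```python
import numpy as np

def carpool_estimate(y, c, mu_c):
    """Reduced-variance estimate of E[y] via control variates.
    y    : outputs of a few expensive simulations
    c    : outputs of the paired cheap surrogates
    mu_c : surrogate mean, known from a large surrogate ensemble
    The estimator y_bar - beta * (c_bar - mu_c) is unbiased for E[y]
    regardless of any bias in the surrogate; correlation between
    y and c is what shrinks the variance."""
    y, c = np.asarray(y, float), np.asarray(c, float)
    cov = np.cov(y, c, ddof=1)          # 2x2 sample covariance matrix
    beta = cov[0, 1] / cov[1, 1]        # regression coefficient
    return y.mean() - beta * (c.mean() - mu_c)
```

The paper applies this per bin of the power spectrum (so beta is a vector or matrix), but the scalar case already shows why the surrogate's $60\%$ model error is harmless: only its fluctuations, not its mean offset, enter the correction term.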
We propose a large-scale hologram calculation using WAvelet ShrinkAge-Based superpositIon (WASABI), a wavelet transform-based algorithm. An image-type hologram calculated using the WASABI method is printed on a glass substrate with a resolution of $65{,}536 \times 65{,}536$ pixels and a pixel pitch of $1~\mu{\rm m}$. The hologram calculation time amounts to approximately 354 s on a commercial CPU, which is approximately 30 times faster than conventional methods.