
HISS, a new tool for H I stacking: application to NIBLES spectra

Published by: Julia Healy
Publication date: 2019
Research field: Physics
Paper language: English





H I stacking has proven to be a highly effective tool to statistically analyse average H I properties for samples of galaxies which may or may not be directly detected. With the plethora of H I data expected from the various upcoming H I surveys with the SKA Precursor and Pathfinder telescopes, it will be helpful to standardize the way in which stacking analyses are conducted. In this work we present a new Python-based package, HISS, designed to stack H I (emission and absorption) spectra in a consistent and reliable manner. As an example, we use HISS to study the H I content in various galaxy sub-samples from the NIBLES survey of SDSS galaxies, which were selected to represent their entire range in total stellar mass without a prior colour selection. This allowed us to compare the galaxy colour to average H I content in both detected and non-detected galaxies. Our sample, with a stellar mass range of $10^8 < M_\star\,(M_\odot) < 10^{12}$, has enabled us to probe the H I-to-stellar-mass gas fraction relationship more than half an order of magnitude lower than in previous stacking studies.
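The core of any spectral stacking analysis is aligning each galaxy's spectrum to its rest frame before co-adding. The sketch below is a minimal illustration of that step in Python with NumPy; it is not the HISS API, and the grid limits, weighting scheme, and function name (`stack_spectra`) are assumptions made for the example.

```python
import numpy as np

def stack_spectra(freqs, fluxes, redshifts, weights=None):
    """Shift each spectrum to its rest frame and co-add.

    freqs     : (n_chan,) observed frequency axis shared by all spectra [MHz]
    fluxes    : (n_spec, n_chan) observed flux densities
    redshifts : (n_spec,) galaxy redshifts
    weights   : optional (n_spec,) weights, e.g. 1/rms**2
    """
    fluxes = np.asarray(fluxes, dtype=float)
    n_spec = fluxes.shape[0]
    if weights is None:
        weights = np.ones(n_spec)
    # Common rest-frame grid centred on the 21-cm line (1420.405751 MHz);
    # the span and sampling here are arbitrary choices for the sketch.
    rest_grid = np.linspace(1418.0, 1423.0, 512)
    shifted = np.empty((n_spec, rest_grid.size))
    for i in range(n_spec):
        rest_freq = freqs * (1.0 + redshifts[i])  # de-redshift the axis
        shifted[i] = np.interp(rest_grid, rest_freq, fluxes[i],
                               left=0.0, right=0.0)
    # Weighted average of the aligned spectra
    stack = np.average(shifted, axis=0, weights=weights)
    return rest_grid, stack
```

Weighting each spectrum by the inverse square of its noise is one common choice for maximizing the signal-to-noise of the resulting stack.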



Related papers

In this paper we introduce a method for stacking data cubelets extracted from interferometric surveys of galaxies in the redshifted 21-cm H I line. Unlike the traditional spectral stacking technique, which stacks one-dimensional spectra extracted from data cubes, we examine a method based on image domain stacks which makes deconvolution possible. To test the validity of this assumption, we mock a sample of 3622 equatorial galaxies extracted from the GAMA survey, recently imaged as part of a DINGO-VLA project. We first examine the accuracy of the method using a noise-free simulation and note that the stacked image and flux estimation are dramatically improved compared to traditional stacking. The extracted H I mass from the deconvolved image agrees with the average input mass to within 3%. However, with traditional spectral stacking, the derived H I is incorrect by greater than a factor of 2. For a more realistic case of a stack with finite S/N, we also produced 20 different noise realisations to closely mimic the properties of the DINGO-VLA interferometric survey. We recovered the predicted average H I mass to within $\sim$4%. Compared with traditional spectral stacking, this technique extends the range of science applications where stacking can be used, and is especially useful for characterizing the emission from extended sources with interferometers.
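Deconvolving a stacked dirty image with the correspondingly stacked point spread function can be illustrated with a toy Högbom CLEAN loop. The sketch below is a simplified stand-in, not the DINGO-VLA pipeline; the function name `hogbom_clean` and its gain/threshold defaults are assumptions for the example.

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, niter=500, threshold=1e-3):
    """Toy Hogbom CLEAN: iteratively subtract a scaled, shifted PSF at the
    residual peak, collecting point-source flux into a model image."""
    residual = dirty.astype(float).copy()
    model = np.zeros_like(residual)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2  # PSF peak assumed central
    ny, nx = residual.shape
    for _ in range(niter):
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        # Overlap of the PSF (centred on the peak) with the image
        y0, y1 = max(0, y - cy), min(ny, y - cy + psf.shape[0])
        x0, x1 = max(0, x - cx), min(nx, x - cx + psf.shape[1])
        py, px = y0 - (y - cy), x0 - (x - cx)
        residual[y0:y1, x0:x1] -= gain * peak * \
            psf[py:py + (y1 - y0), px:px + (x1 - x0)]
    return model, residual
```

For a stack, `dirty` would be the average of the image cubelets and `psf` the average of the corresponding synthesized beams, which is what makes joint deconvolution of the stack meaningful.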
Hydrogen and helium emission lines in nebulae form by radiative recombination. This is a simple process which, in principle, can be described to very high precision. Ratios of He I and H I emission lines can be used to measure the He$^+$/H$^+$ abundance ratio to the same precision as the recombination rate coefficients. This paper investigates the controversy over the correct theory to describe dipole $l$-changing collisions ($nl \rightarrow nl'$ with $l' = l \pm 1$) between energy-degenerate states within an $n$-shell. The work of Pengelly & Seaton (1964) has, for half a century, been considered the definitive study which solved the problem. Recent work by Vrinceanu et al. (2012) recommended the use of rate coefficients from a semi-classical approximation which are nearly an order of magnitude smaller than those of Pengelly & Seaton (1964), with the result that significantly higher densities are needed for the $nl$ populations to come into local thermodynamic equilibrium. Here, we compare predicted H I emissivities from the two works and find widespread differences, of up to $\approx 10$%. This far exceeds the 1% precision required to obtain the primordial He/H abundance ratio from observations so as to constrain Big Bang cosmologies. We recommend using the rate coefficients of Pengelly & Seaton (1964) for $l$-changing collisions, to describe the H recombination spectrum, based on their quantum mechanical representation of the long-range dipole interaction.
Aims. To present INDICATE, a novel statistical clustering tool which assesses and quantifies the degree of spatial clustering of each object in a dataset, to discuss its applications as a tracer of morphological stellar features in star-forming regions, and to look for these features in the Carina Nebula (NGC 3372). Results. We successfully recover the known stellar structure of the Carina Nebula, including the 5 young star clusters in this region. Four sub-clusters contain no, or very few, stars with a degree of association above random, which suggests they may be fluctuations in the field rather than real clusters. In addition we find: (1) Stars in the NW and SE regions have significantly different clustering tendencies, which is reflective of differences in the apparent star formation activity in these regions; further study is required to ascertain the physical origin of the difference. (2) The different clustering properties between these two regions are even more pronounced for OB stars. (3) There are no signatures of classical mass segregation present in the SE region: massive stars here are not spatially concentrated together above random. (4) Stellar concentrations are more frequent around massive stars than is typical for the general population, particularly in the Tr14 cluster. (5) There is a relation between the concentration of OB stars and the concentration of (lower mass) stars around OB stars in the centrally concentrated Tr14 and Tr15, but no such relation exists in Tr16; we conclude this is due to the highly sub-structured nature of Tr16. Conclusions. INDICATE is a powerful new tool employing a novel approach to quantify the clustering tendencies of individual objects in a dataset within a user-defined parameter space. As such it can be used in a wide array of data analysis applications.
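The general idea of a per-object clustering statistic — comparing each object's local neighbour density to the expectation for a spatially random distribution — can be sketched as follows. This is a simplified density-ratio illustration, not the published INDICATE algorithm (which is based on nearest-neighbour statistics); the function names and the fixed search radius are assumptions for the example.

```python
import numpy as np

def neighbour_counts(pts, r):
    """Number of other points within radius r of each point (brute force)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return (d < r).sum(axis=1) - 1  # subtract self

def clustering_index(points, r, n_random=100, seed=0):
    """Per-object index: neighbour count within r divided by the mean count
    for the same number of points scattered uniformly over the bounding box.
    Values well above 1 flag objects sitting in overdense regions."""
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    # Monte Carlo expectation for a spatially random (unclustered) field
    expected = np.mean([
        neighbour_counts(rng.uniform(lo, hi, points.shape), r).mean()
        for _ in range(n_random)
    ])
    return neighbour_counts(points, r) / max(expected, 1e-12)
```

A threshold on this index (calibrated against the random realisations) then separates objects with a degree of association above random from field fluctuations, which is the kind of test applied to the Carina sub-clusters above.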
A. Smette, H. Sana, S. Noll (2015)
Context: The interaction of the light from astronomical objects with the constituents of the Earth's atmosphere leads to the formation of telluric absorption lines in ground-based spectra. Correcting for these lines, which mostly affect the red and infrared regions of the spectrum, usually relies on observations of specific stars obtained close in time and airmass to the science targets, thereby using precious observing time. Aims: We present molecfit, a tool for correcting for telluric absorption lines based on synthetic modelling of the Earth's atmospheric transmission. Molecfit is versatile and can be used with data obtained with various ground-based telescopes and instruments. Methods: Molecfit combines a publicly available radiative transfer code, a molecular line database, atmospheric profiles, and various kernels to model the instrument line spread function. The atmospheric profiles are created by merging a standard atmospheric profile representative of a given observatory's climate, local meteorological data, and dynamically retrieved altitude profiles for temperature, pressure, and humidity. We discuss the various ingredients of the method, its applicability, and its limitations. We also show examples of telluric line correction on spectra obtained with a suite of ESO Very Large Telescope (VLT) instruments. Results: Compared to previous similar tools, molecfit takes the best available estimates of temperature, pressure, and humidity in the atmosphere above the observatory into account. As a result, the standard deviation of the residuals after correction of unsaturated telluric lines is frequently better than 2% of the continuum. Conclusion: Molecfit is able to accurately model and correct for telluric lines over a broad range of wavelengths and spectral resolutions. (Abridged)
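The principle behind such a correction — dividing the observed spectrum by a synthetic transmission model convolved with the instrument line spread function — can be sketched in a few lines. This is a heavily simplified stand-in, not the molecfit code: the real tool fits the atmospheric and kernel parameters, whereas here `telluric_correct` assumes a single known Gaussian LSF.

```python
import numpy as np

def telluric_correct(observed, transmission, lsf_fwhm_pix):
    """Divide an observed spectrum by a synthetic telluric transmission
    model convolved with a Gaussian line spread function (LSF)."""
    sigma = lsf_fwhm_pix / 2.3548  # convert FWHM to Gaussian sigma
    half = int(4 * sigma) + 1
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    # Degrade the model to the instrument resolution before dividing
    smoothed = np.convolve(transmission, kernel, mode="same")
    # Floor the model to avoid blowing up noise in saturated lines
    return observed / np.clip(smoothed, 1e-3, None)
```

The flooring of the transmission mirrors the practical point in the abstract that only unsaturated telluric lines can be corrected to the few-percent level; in saturated cores there is essentially no signal left to recover.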
Spectra derived from fast Fourier transform (FFT) analysis of time-domain data intrinsically contain statistical fluctuations whose distribution depends on the number of accumulated spectra contributing to a measurement. The tail of this distribution, which is essential for separating the true signal from the statistical fluctuations, deviates noticeably from the normal distribution for a finite number of accumulations. In this paper we develop a theory to properly account for the statistical fluctuations when fitting a model to a given accumulated spectrum. The method is implemented in software for the purpose of automatically fitting a large body of such FFT-derived spectra. We apply this tool to analyze a portion of a dense cluster of spikes recorded by our FST instrument during a record-breaking event that occurred on 06 Dec 2006. The outcome of this analysis is briefly discussed.
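The distribution in question is easy to reproduce numerically: for Gaussian noise, each bin of a single power spectrum is exponentially distributed, and averaging $N$ spectra yields a Gamma distribution (a scaled chi-square with $2N$ degrees of freedom) whose tail approaches the normal shape only slowly as $N$ grows. The snippet below is a self-contained demonstration of this behaviour, not the authors' fitting code; `accumulated_spectrum` is a name invented for the sketch.

```python
import numpy as np

def accumulated_spectrum(n_acc, n_samp=256, n_trials=2000, seed=0):
    """Average n_acc power spectra of unit-variance white Gaussian noise.
    Each bin of the result follows a Gamma distribution with shape n_acc
    and mean 1 (a scaled chi-square with 2*n_acc degrees of freedom)."""
    rng = np.random.default_rng(seed)
    acc = np.zeros((n_trials, n_samp // 2 - 1))
    for _ in range(n_acc):
        x = rng.normal(size=(n_trials, n_samp))
        # Drop the DC and Nyquist bins; normalise so each bin has mean 1
        spec = np.abs(np.fft.rfft(x, axis=1))[:, 1:-1] ** 2 / n_samp
        acc += spec
    return acc / n_acc
```

The relative scatter of the accumulated bins shrinks as $1/\sqrt{N}$, but for small $N$ the positive skew of the Gamma tail persists, which is why a Gaussian threshold misestimates the false-alarm rate when separating real spikes from noise.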