
The DiskMass Survey. II. Error Budget

Added by Matthew Bershady
Publication date: 2010
Field: Physics
Language: English





We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio, and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25-35 degrees is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but scale heights can currently be estimated from radial scale lengths to 25%, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions that are precise at low surface brightness and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_b) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density, disk stellar mass-to-light ratio, and disk maximality (V_disk / V_circular). Random and systematic errors in these quantities for individual galaxies will be ~25%, while the survey precision for sample quartiles is reduced to 10%, largely free of systematic errors apart from distance uncertainties.
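
As a rough illustration of how the quantities in this error budget combine, the sketch below evaluates the generic vertical-equilibrium relation Sigma_dyn ~ sigma_z^2 / (pi G k h_z) and sums the fractional errors in quadrature. The relation in this textbook form, the value of k, and all numbers used are illustrative assumptions rather than the survey's calibrated pipeline or measured values; the ~25% scale-height uncertainty quoted above is used only as an input to the toy error sum.

```python
import numpy as np

G = 4.301e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def dynamical_surface_density(sigma_z, h_z, k=1.5):
    """Vertical-equilibrium estimate of the disk mass surface density,
    Sigma_dyn ~ sigma_z^2 / (pi * G * k * h_z), with sigma_z the vertical
    velocity dispersion (km/s), h_z the disk scale height (pc), and k set
    by the assumed vertical density profile (roughly 1.5 for an
    exponential layer, 2 for an isothermal sech^2 layer).
    Returns Msun per pc^2."""
    return sigma_z**2 / (np.pi * G * k * h_z)

def fractional_error(d_sigma, d_hz, d_k=0.0):
    """Quadrature sum of the fractional error on Sigma_dyn:
    the dispersion enters squared, so its fractional error counts twice."""
    return np.sqrt((2.0 * d_sigma)**2 + d_hz**2 + d_k**2)

# Illustrative numbers only, not survey measurements:
sigma_z, h_z = 20.0, 400.0                        # km/s, pc
print(dynamical_surface_density(sigma_z, h_z))    # ~ 49 Msun / pc^2
# ~10% dispersion error combined with the ~25% scale-height uncertainty:
print(fractional_error(0.10, 0.25))               # ~ 0.32, i.e. ~30%
```
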



Related research

We present a survey of the mass surface density of spiral disks, motivated by outstanding uncertainties in rotation-curve decompositions. Our method exploits integral-field spectroscopy to measure stellar and gas kinematics in nearly face-on galaxies sampled at 515, 660, and 860 nm, using the custom-built SparsePak and PPak instruments. A two-tiered sample, selected from the UGC, includes 146 nearly face-on galaxies with B < 14.7 and disk scale lengths between 10 and 20 arcsec, for which we have obtained H-alpha velocity fields, and a representative 46-galaxy subset for which we have obtained stellar velocities and velocity dispersions. Based on a re-calibration of extant photometric and spectroscopic data, we show that these galaxies span factors of 100 in L(K) (0.03 < L/L(K)* < 3), 8 in L(B)/L(K), and 10 in R-band disk central surface brightness, with distances between 15 and 200 Mpc. The survey is augmented by 4-70 micron Spitzer IRAC and MIPS photometry, ground-based UBVRIJHK photometry, and HI aperture-synthesis imaging. We outline the spectroscopic analysis protocol for deriving precise and accurate line-of-sight stellar velocity dispersions. Our key measurement is the dynamical disk-mass surface density. Star-formation rates and the kinematic and photometric regularity of galaxy disks are also central products of the study. The survey is designed to yield random and systematic errors small enough (i) to confirm or disprove the maximum-disk hypothesis for intermediate-type disk galaxies, (ii) to provide an absolute calibration of the stellar mass-to-light ratio well below uncertainties in present-day stellar-population synthesis models, and (iii) to make significant progress in defining the shape of dark halos in the inner regions of disk galaxies.
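
The two-tier selection described above can be captured in a few lines of code; in the hypothetical sketch below, only the quoted cuts (B < 14.7 and disk scale lengths of 10-20 arcsec) come from the text, while the catalogue rows, column order, and the 35-degree inclination ceiling standing in for "nearly face-on" are assumptions.

```python
# Hypothetical catalogue rows: (name, B_mag, h_R_arcsec, inclination_deg)
catalog = [
    ("UGC 0001", 14.2, 12.5, 28.0),
    ("UGC 0002", 15.1, 18.0, 30.0),
    ("UGC 0003", 13.9, 25.0, 22.0),
]

def tier_one_cut(B_mag, h_R, incl, incl_max=35.0):
    """Tier-one selection quoted in the abstract: B < 14.7 and
    10" <= h_R <= 20"; the inclination ceiling is an assumed stand-in
    for 'nearly face-on'."""
    return B_mag < 14.7 and 10.0 <= h_R <= 20.0 and incl <= incl_max

tier_one = [row for row in catalog if tier_one_cut(*row[1:])]
print([name for name, *_ in tier_one])   # -> ['UGC 0001']
```
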
The accurate evaluation of nuclear reaction rates and their uncertainties is an essential requisite for a precise determination of the primordial abundances of the light nuclides. The recent measurement of the D(p,gamma)3He radiative-capture cross section by the LUNA collaboration, with its error of order 3%, represents an important step in improving the theoretical prediction for the Deuterium produced in the early universe. In view of this recent result, we present in this paper a full analysis of its abundance, which includes a new critical study of the impact of the other two main processes for Deuterium burning, namely the deuteron-deuteron transfer reactions D(d,p)3H and D(d,n)3He. In particular, emphasis is given to the statistical method of analysis of the experimental data, to a quantitative study of the theoretical uncertainties, and to a comparison with similar studies presented in the recent literature. We then discuss the impact of our study on the concordance of the primordial nucleosynthesis stage with the Planck experiment results on the baryon density Omega_b h^2 and the effective number of neutrinos N_eff, as a function of the assumed value of the 4He mass fraction Y_p. While, after the LUNA results, the Deuterium abundance is quite precisely fixed and points to a value of the baryon density in excellent agreement with the Planck result, a combined analysis also including Helium leads to two possible scenarios with different predictions for Omega_b h^2 and N_eff. We argue that new experimental results on the systematics and the determination of Y_p would be of great importance in assessing the overall concordance of the standard cosmological model.
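
As a hedged illustration of the kind of error propagation such an analysis rests on, the sketch below sums rate uncertainties linearly in the logarithm, sigma(D/H)/(D/H) ~ sqrt(sum_i [alpha_i * sigma(R_i)/R_i]^2), where alpha_i is the logarithmic sensitivity of D/H to each reaction rate. The sensitivity coefficients and the fractional rate errors (apart from the ~3% LUNA figure quoted above) are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Linear (logarithmic) propagation of reaction-rate uncertainties onto
# the predicted D/H abundance:
#   sigma(D/H)/(D/H) ~ sqrt( sum_i [ alpha_i * sigma(R_i)/R_i ]^2 ),
# with alpha_i = d ln(D/H) / d ln R_i.  All numbers are placeholders.
sensitivities = {          # alpha_i (illustrative)
    "D(p,g)3He": -0.30,
    "D(d,n)3He": -0.50,
    "D(d,p)3H":  -0.45,
}
rate_errors = {            # sigma(R_i)/R_i (illustrative)
    "D(p,g)3He": 0.03,     # the ~3% LUNA-level error quoted above
    "D(d,n)3He": 0.01,
    "D(d,p)3H":  0.01,
}

frac_err = np.sqrt(sum((sensitivities[r] * rate_errors[r])**2
                       for r in sensitivities))
print(f"fractional error on D/H ~ {frac_err:.3f}")   # ~ 0.011
```
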
GRAVITY is a new-generation beam-combination instrument for the VLTI. Its goal is to achieve microarcsecond astrometric accuracy between objects separated by a few arcsec. This relative accuracy of order one part in $10^6$ on astrometric measurements is the most important challenge of the instrument, and a careful error budget has been paramount during its technical design. In this poster, we focus on baseline-induced errors, which are part of a larger error budget.
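
A back-of-the-envelope illustration of why the baseline term enters the budget: in narrow-angle interferometric astrometry the differential delay is roughly OPD ~ B * delta_alpha, so a baseline error maps onto the measured separation as sigma_alpha ~ delta_alpha * sigma_B / B. The 100 m baseline, 2 arcsec separation, and 10 microarcsecond goal used below are assumed round numbers for illustration, not GRAVITY's actual specifications.

```python
import math

ARCSEC = math.pi / (180.0 * 3600.0)   # radians per arcsecond
MUAS = ARCSEC * 1e-6                  # radians per microarcsecond

# Assumed, illustrative values (not the instrument's official numbers):
B = 100.0                 # baseline length in metres
dalpha = 2.0 * ARCSEC     # separation of the two objects
goal = 10.0 * MUAS        # target astrometric accuracy

# Differential optical path difference between the two objects:
opd = B * dalpha
print(f"differential OPD ~ {opd * 1e3:.2f} mm")              # ~ 0.97 mm

# OPD knowledge needed to reach the astrometric goal:
print(f"required OPD accuracy ~ {B * goal * 1e9:.1f} nm")    # ~ 4.8 nm

# Baseline knowledge needed so that sigma_alpha = dalpha * sigma_B / B
# stays below the goal:
sigma_B = goal / dalpha * B
print(f"required baseline knowledge ~ {sigma_B * 1e3:.2f} mm")  # ~ 0.50 mm
```
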
TMT has defined the accuracy to be achieved for both absolute and differential astrometry in its top-level requirements documents. Because of the complexities of different types of astrometric observations, these requirements cannot be used to specify system design parameters directly. The TMT astrometry working group therefore developed detailed astrometry error budgets for a variety of science cases. These error budgets detail how astrometric errors propagate through the calibration, observing, and data-reduction processes. The budgets need to be condensed into sets of specific requirements that can be used by each subsystem team for design purposes. We show how this flowdown from error budgets to design requirements is achieved for the case of TMT's first-light Infrared Imaging Spectrometer (IRIS) instrument.
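
A minimal sketch of the quadrature bookkeeping behind such a flowdown: a handful of subsystem allocations are root-sum-squared and compared to a top-level requirement. Every term name and number below is a hypothetical placeholder, not part of the actual TMT/IRIS budget.

```python
import math

# Hypothetical error terms (microarcsec), combined in quadrature.
terms = {
    "reference-catalogue calibration": 20.0,
    "optical distortion residuals":    25.0,
    "atmospheric refraction model":    15.0,
    "centroiding / PSF systematics":   20.0,
}

total = math.sqrt(sum(v**2 for v in terms.values()))
requirement = 50.0   # hypothetical top-level requirement, microarcsec
print(f"root-sum-square total: {total:.1f} uas "
      f"({'within' if total <= requirement else 'exceeds'} "
      f"the {requirement:.0f} uas allocation)")
```
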
(Abridged) Estimating the uncertainty on the matter power spectrum internally (i.e. directly from the data) is made challenging by the simple fact that galaxy surveys offer at most a few independent samples. In addition, surveys have non-trivial geometries, which make the interpretation of the observations even trickier, but the uncertainty can nevertheless be worked out within the Gaussian approximation. With the recent realization that Gaussian treatments of the power spectrum lead to biased error bars about the dilation of the baryonic acoustic oscillation scale, efforts are being directed towards developing non-Gaussian analyses, mainly from N-body simulations so far. Unfortunately, there is currently no way to tell how the non-Gaussian features observed in the simulations compare to those of the real Universe, and it is generally hard to tell at what level of accuracy the N-body simulations can model complicated non-linear effects such as mode coupling and galaxy bias. We propose in this paper a novel method that aims at measuring non-Gaussian error bars on the matter power spectrum directly from galaxy survey data. We utilize known symmetries of the 4-point function, Wiener filtering and principal component analysis to estimate the full covariance matrix from only four independent fields with minimal prior assumptions. With the noise filtering techniques and only four fields, we are able to recover the Fisher information obtained from a large N=200 sample to within 20 per cent, for k < 1.0 h/Mpc. Finally, we provide a prescription to extract a noise-filtered, non-Gaussian, covariance matrix from a handful of fields in the presence of a survey selection function.
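
The sketch below illustrates only the principal-component part of the idea: estimate a very noisy covariance from a handful of band-power realizations, then keep the leading eigenmodes of its correlation matrix as a crude de-noising step. It is a toy stand-in, not the paper's estimator, which additionally exploits symmetries of the 4-point function and Wiener filtering; the function name and mock data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_filtered_covariance(spectra, n_modes=1):
    """Noisy sample covariance of a few power-spectrum realizations,
    de-noised by keeping only the leading eigenmodes of the correlation
    matrix.  A toy stand-in for the PCA step described above."""
    cov = np.cov(spectra, rowvar=False)          # (n_k, n_k), very noisy
    diag = np.sqrt(np.diag(cov))
    corr = cov / np.outer(diag, diag)            # correlation matrix
    vals, vecs = np.linalg.eigh(corr)
    keep = np.argsort(vals)[-n_modes:]           # largest eigenvalues only
    corr_f = (vecs[:, keep] * vals[keep]) @ vecs[:, keep].T
    np.fill_diagonal(corr_f, 1.0)                # diagonal measured directly
    return corr_f * np.outer(diag, diag)

# Four mock "fields": band-power measurements with a shared (correlated) mode.
n_fields, n_k = 4, 30
mock = (1.0 + 0.1 * rng.standard_normal((n_fields, n_k))
            + 0.1 * rng.standard_normal((n_fields, 1)))
cov_filtered = pca_filtered_covariance(mock, n_modes=1)
print(cov_filtered.shape)    # (30, 30)
```
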
