
BigBOSS: The Ground-Based Stage IV Dark Energy Experiment

Added by David Schlegel
Publication date: 2009
Fields: Physics
Language: English





The BigBOSS experiment is a proposed DOE-NSF Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with an all-sky galaxy redshift survey. The project is designed to unlock the mystery of dark energy using existing ground-based facilities operated by NOAO. A new 4000-fiber, R = 5000 spectrograph covering a 3-degree-diameter field will measure BAO and redshift space distortions in the distribution of galaxies and hydrogen gas spanning redshifts 0.2 < z < 3.5. The Dark Energy Task Force figure of merit (DETF FoM) for this experiment is expected to equal that of a JDEM mission for BAO, with the lower risk and cost typical of a ground-based experiment. This project will enable an unprecedented multi-object spectroscopic capability for the U.S. community through an existing NOAO facility. The U.S. community would have direct access to this instrument/telescope combination, as well as access to the legacy archives created by the BAO key project.



Related research

BigBOSS is a Stage IV ground-based dark energy experiment to study baryon acoustic oscillations (BAO) and the growth of structure with a wide-area galaxy and quasar redshift survey over 14,000 square degrees. It has been conditionally accepted by NOAO in response to a call for major new instrumentation and a high-impact science program for the 4-m Mayall telescope at Kitt Peak. The BigBOSS instrument is a robotically-actuated, fiber-fed spectrograph capable of taking 5000 simultaneous spectra over a wavelength range from 340 nm to 1060 nm, with a resolution R = 3000-4800. Using data from imaging surveys that are already underway, spectroscopic targets are selected that trace the underlying dark matter distribution. In particular, targets include luminous red galaxies (LRGs) up to z = 1.0, extending the BOSS LRG survey in both redshift and survey area. To probe the universe out to even higher redshift, BigBOSS will target bright [OII] emission line galaxies (ELGs) up to z = 1.7. In total, 20 million galaxy redshifts are obtained to measure the BAO feature, trace the matter power spectrum at smaller scales, and detect redshift space distortions. BigBOSS will provide additional constraints on early dark energy and on the curvature of the universe by measuring the Ly-alpha forest in the spectra of over 600,000 quasars at 2.2 < z < 3.5. BigBOSS galaxy BAO measurements, combined with an analysis of the broadband power including the Ly-alpha forest in BigBOSS quasar spectra, achieve a FoM of 395 with Planck plus Stage III priors. This FoM is based on conservative assumptions for the analysis of broadband power (kmax = 0.15), and could grow to over 600 if current work allows us to push the analysis to higher wave numbers (kmax = 0.3). BigBOSS will also place constraints on theories of modified gravity and inflation, and will measure the sum of neutrino masses to 0.024 eV accuracy.
This white paper envisions a revolutionary post-DESI, post-LSST dark energy program based on intensity mapping of the redshifted 21cm emission line from neutral hydrogen at radio frequencies. The proposed intensity mapping survey has the unique capability to quadruple the volume of the Universe surveyed by optical programs, provide a percent-level measurement of the expansion history to $z \sim 6$, open a window to explore physics beyond the concordance $\Lambda$CDM model, and to significantly improve the precision on standard cosmological parameters. In addition, characterization of dark energy and new physics will be powerfully enhanced by cross-correlations with optical surveys and cosmic microwave background measurements. The rich dataset obtained by the proposed intensity mapping instrument will be simultaneously useful in exploring the time-domain physics of fast radio transients and pulsars, potentially in live multi-messenger coincidence with other observatories. The core dark energy/inflation science advances enabled by this program are the following: (i) Measure the expansion history of the universe over $z = 0.3-6$ with a single instrument, extending the range deep into the pre-acceleration era, providing an unexplored window for new physics; (ii) Measure the growth rate of structure in the universe over the same redshift range; (iii) Observe, or constrain, the presence of inflationary relics in the primordial power spectrum, improving existing constraints by an order of magnitude; (iv) Observe, or constrain, primordial non-Gaussianity with unprecedented precision, improving constraints on several key numbers by an order of magnitude. Detailed mapping of the enormous, and still largely unexplored, volume of cosmic space will thus provide unprecedented information on fundamental questions of the vacuum energy and early-universe physics.
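As a quick point of reference for the radio band involved, the observed frequency of the redshifted 21cm line is simply its rest-frame frequency divided by $(1+z)$. The short sketch below illustrates that relation only; it is not part of the white paper, and the function name is hypothetical.

# Observed frequency of the redshifted 21cm line: nu_obs = nu_rest / (1 + z).
NU_REST_MHZ = 1420.405751  # rest-frame frequency of the neutral-hydrogen 21cm line, in MHz

def nu_obs_mhz(z):
    """Observed 21cm frequency in MHz for emission at redshift z."""
    return NU_REST_MHZ / (1.0 + z)

# The z = 0.3-6 range quoted above maps to roughly 1093 MHz down to 203 MHz.
for z in (0.3, 2.0, 6.0):
    print(f"z = {z}: nu_obs = {nu_obs_mhz(z):.1f} MHz")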
In recent years, forecasting has become an important tool for designing and optimising large-scale structure surveys. To predict the performance of such surveys, the Fisher matrix formalism is frequently used as a fast and easy way to compute constraints on cosmological parameters. Among these is the study of the properties of dark energy, one of the main goals of modern cosmology. As such, a metric for the power of a survey to constrain dark energy is provided by the figure of merit (FoM). This is defined as the inverse of the area of the confidence contour given by the joint variance of the dark energy equation of state parameters $\{w_0, w_a\}$ in the Chevallier-Polarski-Linder parameterisation, which can be evaluated from the covariance matrix of the parameters. This covariance matrix is obtained as the inverse of the Fisher matrix. Inversion of an ill-conditioned matrix can result in large errors on the covariance coefficients if the elements of the Fisher matrix have been estimated with insufficient precision. The condition number provides a mathematical lower limit on the required precision for a reliable inversion, but it is often too stringent in practice for Fisher matrices larger than $2\times2$. In this paper we propose a general numerical method to guarantee a certain precision on the inferred constraints, such as the FoM. It consists of randomly perturbing ("vibrating") the Fisher matrix elements with Gaussian perturbations of a given amplitude, and then evaluating the maximum amplitude that keeps the FoM within the chosen precision. The steps used in the numerical derivatives and integrals involved in the calculation of the Fisher matrix elements can then be chosen accordingly, in order to keep the numerical error on the Fisher matrix elements below this maximum amplitude...
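The perturbation ("vibration") test described above can be sketched in a few lines of Python. The sketch below is illustrative only: it assumes a generic Fisher matrix with the $w_0$ and $w_a$ parameters at known indices, computes the DETF FoM from the marginalised $2\times2$ covariance block, and measures how much the FoM scatters when every Fisher element receives a zero-mean Gaussian perturbation of a given relative amplitude. All function and variable names are hypothetical, and the details of the paper's own procedure may differ.

import numpy as np

def detf_fom(fisher, i_w0, i_wa):
    """DETF figure of merit: inverse square root of the determinant of the
    marginalised (w0, wa) covariance block, i.e. proportional to the inverse
    area of the (w0, wa) confidence contour."""
    cov = np.linalg.inv(fisher)                          # parameter covariance matrix
    block = cov[np.ix_([i_w0, i_wa], [i_w0, i_wa])]      # marginalised 2x2 (w0, wa) block
    return 1.0 / np.sqrt(np.linalg.det(block))

def fom_relative_scatter(fisher, i_w0, i_wa, rel_amplitude, n_trials=1000, seed=0):
    """Relative scatter of the FoM when each Fisher element is perturbed by a
    zero-mean Gaussian of relative size rel_amplitude (symmetry is preserved)."""
    rng = np.random.default_rng(seed)
    fom0 = detf_fom(fisher, i_w0, i_wa)
    foms = []
    for _ in range(n_trials):
        noise = rng.normal(0.0, rel_amplitude, fisher.shape) * fisher
        noise = 0.5 * (noise + noise.T)                  # keep the perturbed matrix symmetric
        foms.append(detf_fom(fisher + noise, i_w0, i_wa))
    return np.std(foms) / fom0

Scanning rel_amplitude upward until the returned scatter exceeds the target precision gives the maximum tolerable relative error on the Fisher elements, which can then be used to set the step sizes of the numerical derivatives and integrals.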
Stage IV weak lensing experiments will offer more than an order-of-magnitude leap in precision. We must therefore ensure that our analyses remain accurate in this new era. Accordingly, previously ignored systematic effects must be addressed. In this work, we evaluate the impact of the reduced shear approximation and magnification bias on the information obtained from the angular power spectrum. To first order, the statistics of reduced shear, a combination of shear and convergence, are taken to be equal to those of shear. However, this approximation can induce a bias in the cosmological parameters that can no longer be neglected. A separate bias arises from the statistics of shear being altered by the preferential selection of galaxies and the dilution of their surface density in high-magnification regions. The corrections for these systematic effects take similar forms, allowing them to be treated together. We calculate the impact of neglecting these effects on the cosmological parameters that would be determined from Euclid, using cosmic shear tomography. To do so, we employ the Fisher matrix formalism and include the impact of the super-sample covariance. We also demonstrate how the reduced shear correction can be calculated using a lognormal field forward modelling approach. These effects cause significant biases in $\Omega_{\rm m}$, $\sigma_8$, $n_{\rm s}$, $\Omega_{\rm DE}$, $w_0$, and $w_a$ of $-0.53\sigma$, $0.43\sigma$, $-0.34\sigma$, $1.36\sigma$, $-0.68\sigma$, and $1.21\sigma$, respectively. We then show that these lensing biases interact with another systematic: the intrinsic alignment of galaxies. Accordingly, we develop the formalism for an intrinsic-alignment-enhanced lensing bias correction. Applying this to Euclid, we find that the additional terms introduced by this correction are sub-dominant.
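The parameter shifts quoted above (e.g. $1.36\sigma$ on $\Omega_{\rm DE}$) are of the kind produced by the standard first-order Fisher bias formalism, $b_i = \sum_j (F^{-1})_{ij} B_j$ with $B_j = \sum_{kl} \partial C_k/\partial\theta_j \, ({\rm Cov}^{-1})_{kl} \, \delta C_l$. A minimal, generic sketch of that calculation is given below; the array names and shapes are assumptions for illustration and do not reproduce the authors' pipeline.

import numpy as np

def fisher_bias(fisher, dC_dtheta, delta_C, inv_cov):
    """First-order parameter biases induced by a neglected correction delta_C
    to the data vector (e.g. the reduced shear + magnification term).

    fisher    : (P, P) Fisher matrix
    dC_dtheta : (P, L) derivatives of the flattened data vector w.r.t. each parameter
    delta_C   : (L,)   neglected correction to the data vector
    inv_cov   : (L, L) inverse covariance of the data vector
    Returns the shift of each best-fit parameter, in that parameter's own units.
    """
    B = dC_dtheta @ inv_cov @ delta_C        # projection of the correction onto each parameter
    return np.linalg.solve(fisher, B)        # b_i = sum_j (F^{-1})_{ij} B_j

def bias_in_sigma(fisher, biases):
    """Express each bias in units of its marginalised 1-sigma uncertainty."""
    sigma = np.sqrt(np.diag(np.linalg.inv(fisher)))
    return biases / sigma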
The advent of Stage IV weak lensing surveys will open up a new era in precision cosmology. These experiments will offer more than an order-of-magnitude leap in precision over existing surveys, and we must ensure that the accuracy of our theory matches this. Accordingly, it is necessary to explicitly evaluate the impact of the theoretical assumptions made in current analyses on upcoming surveys. One effect typically neglected in present analyses is the Doppler shift of the measured source comoving distances. Using Fisher matrices, we calculate the biases on the cosmological parameter values inferred from a Euclid-like survey if the correction for this Doppler shift is omitted. We find that this Doppler shift can be safely neglected for Stage IV surveys. The code used in this investigation is made publicly available.