
The full Fisher matrix for galaxy surveys

Added by Luis Raul Abramo
Publication date: 2011
Field: Physics
Language: English





Starting from the Fisher matrix for counts in cells, I derive the full Fisher matrix for surveys of multiple tracers of large-scale structure. The key assumption is that the inverse of the covariance of the galaxy counts is given by the naive matrix inverse of the covariance in a mixed position-space and Fourier-space basis. I then compute the Fisher matrix for the power spectrum in bins of the three-dimensional wavenumber k; the Fisher matrix for functions of position x (or redshift z), such as the linear bias of the tracers and/or the growth function; and the cross-terms of the Fisher matrix that express the correlations between estimates of the power spectrum and estimates of the bias. When the bias and growth function are fully specified, and the Fourier-space bins are large enough that the covariance between them can be neglected, the Fisher matrix for the power spectrum reduces to the widely used result first derived by Feldman, Kaiser and Peacock (1994). Assuming isotropy, an exact calculation of the Fisher matrix can be performed in the case of a constant-density, volume-limited survey. I then show how the exact Fisher matrix in the general case can be obtained in terms of a series of volume-limited surveys.
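
In the FKP limit described above (fixed bias and growth, negligible covariance between Fourier bins), the Fisher matrix per band power reduces to half the effective number of modes, weighted by $[\bar{n} P/(1+\bar{n} P)]^2$. The following is a minimal sketch of that limit for a constant-density, volume-limited survey; the function name, the toy power spectrum and the survey numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fkp_fisher_per_bin(k, dk, pk, nbar, volume):
    """Hedged sketch of the FKP Fisher matrix for band powers P(k).

    Assumes a constant-density, volume-limited survey, so the
    effective volume reduces to V * [nbar*P / (1 + nbar*P)]^2.
    Returns F_kk for ln P in each k-bin; the fractional error on P(k)
    is then 1/sqrt(F_kk).
    """
    # Number of independent Fourier modes in a shell of width dk
    n_modes = volume * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi)**3
    # FKP weight: signal-to-noise per mode, squared
    veff_ratio = (nbar * pk / (1.0 + nbar * pk))**2
    return 0.5 * n_modes * veff_ratio

# Illustrative numbers (not from the paper): a 1 (Gpc/h)^3 survey,
# nbar = 3e-4 (h/Mpc)^3, and a toy power spectrum ~ 1e4 (Mpc/h)^3.
k = np.linspace(0.02, 0.2, 10)
dk = k[1] - k[0]
pk = 1.0e4 * (k / 0.1)**(-1.5)          # placeholder power spectrum
F = fkp_fisher_per_bin(k, dk, pk, nbar=3e-4, volume=1.0e9)
print(np.sqrt(1.0 / F))                  # fractional error on P(k) per bin
```

The returned $1/\sqrt{F_{kk}}$ is the fractional error on $P(k)$ in each bin; with a position-dependent $\bar{n}(x)$ the constant prefactor would be replaced by the integral of $[\bar{n} P/(1+\bar{n} P)]^2$ over the survey volume.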



Related research

Fisher forecasts are a common tool in cosmology, with applications ranging from survey planning to the development of new cosmological probes. While frequently adopted, they are subject to numerical instabilities that need to be carefully investigated to ensure accurate and reproducible results. This research note discusses these challenges using the example of a weak lensing data vector and proposes procedures that can help resolve them.
In recent years, forecasting activities have become a very important tool for designing and optimising large-scale structure surveys. To predict the performance of such surveys, the Fisher matrix formalism is frequently used as a fast and easy way to compute constraints on cosmological parameters. Among the main goals of modern cosmology is the study of the properties of dark energy. Accordingly, a metric for the power of a survey to constrain dark energy is provided by the figure of merit (FoM). This is defined as the inverse of the area of the contour given by the joint variance of the dark energy equation-of-state parameters $\{w_0, w_a\}$ in the Chevallier-Polarski-Linder parameterisation, which can be evaluated from the covariance matrix of the parameters. This covariance matrix is obtained as the inverse of the Fisher matrix. Inversion of an ill-conditioned matrix can result in large errors on the covariance coefficients if the elements of the Fisher matrix have been estimated with insufficient precision. The condition number provides a mathematical lower limit to the precision required for a reliable inversion, but in practice it is often too stringent for Fisher matrices larger than $2\times 2$. In this paper we propose a general numerical method to guarantee a certain precision on the inferred constraints, such as the FoM. It consists in randomly perturbing the Fisher matrix elements with Gaussian perturbations of a given amplitude, and then evaluating the maximum amplitude that keeps the FoM within the chosen precision. The steps used in the numerical derivatives and integrals involved in the calculation of the Fisher matrix elements can then be chosen so as to keep the numerical error on those elements below this maximum amplitude...
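
As a rough illustration of the perturbation procedure described above (not the authors' code), the sketch below defines a toy FoM from the marginalised $\{w_0, w_a\}$ covariance block, perturbs the Fisher matrix elements with Gaussian noise of a given relative amplitude, and reports the resulting relative scatter of the FoM; the $3\times 3$ matrix, the FoM normalisation and the amplitudes are placeholders.

```python
import numpy as np

def figure_of_merit(fisher, idx=(0, 1)):
    """Toy FoM ~ 1/sqrt(det of the marginalised (w0, wa) covariance block).
    Conventions differ by constant factors; this sketch drops them."""
    cov = np.linalg.inv(fisher)
    block = cov[np.ix_(idx, idx)]
    return 1.0 / np.sqrt(np.linalg.det(block))

def fom_scatter(fisher, rel_amplitude, n_trials=1000, rng=None):
    """Perturb each Fisher element with Gaussian noise of a given relative
    amplitude (keeping the matrix symmetric) and return the relative
    scatter of the FoM, in the spirit of the procedure described above."""
    rng = np.random.default_rng(rng)
    fom0 = figure_of_merit(fisher)
    foms = []
    for _ in range(n_trials):
        noise = rng.normal(0.0, rel_amplitude, size=fisher.shape) * fisher
        noise = 0.5 * (noise + noise.T)          # keep the matrix symmetric
        foms.append(figure_of_merit(fisher + noise))
    return np.std(foms) / fom0

# Toy 3-parameter Fisher matrix (w0, wa, plus one nuisance parameter);
# the numbers are illustrative only.
F = np.array([[ 40.0, -12.0,  5.0],
              [-12.0,   6.0, -2.0],
              [  5.0,  -2.0,  3.0]])
for amp in (1e-4, 1e-3, 1e-2):
    print(amp, fom_scatter(F, amp, rng=0))
```

Scanning the perturbation amplitude until the scatter reaches the desired tolerance gives the maximum acceptable numerical error on the Fisher matrix elements, which in turn sets the step sizes for the derivatives and integrals.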
In a Bayesian context, theoretical parameters are correlated random variables. The constraints on one parameter can therefore be improved either by measuring that parameter more precisely or by measuring the other parameters more precisely. Especially in the case of many parameters, a lengthy process of guesswork is then needed to determine the most efficient way to improve one parameter's constraints. In this short article, we highlight an extremely simple analytical expression that replaces the guesswork and facilitates a deeper understanding of optimization with interdependent parameters.
Host galaxy identification is a crucial step for modern supernova (SN) surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST), which will discover SNe by the thousands. Spectroscopic resources are limited, so in the absence of real-time SN spectra these surveys must rely on host galaxy spectra to obtain accurate redshifts for the Hubble diagram and to improve photometric classification of SNe. In addition, SN luminosities are known to correlate with host-galaxy properties. Therefore, reliable identification of host galaxies is essential for cosmology and SN science. We simulate SN events and their locations within their host galaxies to develop and test methods for matching SNe to their hosts. We use both real and simulated galaxy catalog data from the Advanced Camera for Surveys General Catalog and MICECATv2.0, respectively. We also incorporate hostless SNe residing in undetected faint hosts into our analysis, with an assumed hostless rate of 5%. Our fully automated algorithm is run on catalog data and matches SNe to their hosts with 91% accuracy. We find that including a machine learning component, run after the initial matching algorithm, improves the accuracy (purity) of the matching to 97% with a 2% cost in efficiency (true positive rate). Although the exact results are dependent on the details of the survey and the galaxy catalogs used, the method of identifying host galaxies we outline here can be applied to any transient survey.
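
The matching step can be pictured with a minimal sketch that ranks candidate galaxies by angular separation normalised by an effective galaxy size, a common ingredient of host-matching algorithms; the metric, the threshold and the function below are illustrative assumptions and do not reproduce the paper's algorithm or its machine-learning stage.

```python
import numpy as np

def match_host(sn_ra, sn_dec, cat_ra, cat_dec, cat_radius, max_norm_sep=4.0):
    """Toy host-matching sketch: pick the catalogue galaxy with the smallest
    angular separation normalised by its apparent size.  Returns the index
    of the best candidate, or None if every candidate exceeds the cut
    (a 'hostless' event).  All inputs are in degrees; cat_radius is an
    assumed effective galaxy radius taken from the survey catalogue."""
    dra = (cat_ra - sn_ra) * np.cos(np.radians(sn_dec))
    ddec = cat_dec - sn_dec
    sep = np.hypot(dra, ddec)          # small-angle separation [deg]
    norm_sep = sep / cat_radius        # separation in units of galaxy size
    best = int(np.argmin(norm_sep))
    return best if norm_sep[best] < max_norm_sep else None

# Illustrative call with two candidate galaxies near a simulated SN position.
idx = match_host(150.010, 2.200,
                 np.array([150.012, 150.050]),
                 np.array([2.201, 2.260]),
                 np.array([1.0e-3, 8.0e-4]))
print(idx)   # index of the matched host, or None if hostless
```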
We show how to obtain constraints on $\beta = f/b$, the ratio of the matter growth rate and the bias that quantifies the linear redshift-space distortions, that are independent of the cosmological model, using multiple tracers of large-scale structure. For a single tracer the uncertainties on $\beta$ are constrained by the uncertainties in the amplitude and shape of the power spectrum, which is limited by cosmic variance. However, for two or more tracers this limit does not apply, since cosmic variance cancels out when taking the ratio of power spectra, and in the linear (Kaiser) approximation one measures directly the quantity $(1+\beta_1 \mu^2)^2/(1+\beta_2 \mu^2)^2$, where $\mu$ is the cosine of the angle between a given mode and the line of sight. We provide analytic formulae for the Fisher matrix for one and two tracers, and quantify the signal-to-noise ratio needed to make effective use of the multiple-tracer technique. We also forecast the errors on $\beta$ for a survey like Euclid.
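
A short numerical sketch of the cosmic-variance cancellation in the linear (Kaiser) limit: for two tracers of the same matter field, the ratio of redshift-space power spectra is independent of the particular realisation of the matter power, leaving only the bias and $\beta$ factors. The biases, growth rate and random draws below are illustrative, not values from the paper.

```python
import numpy as np

# Hedged sketch of the cosmic-variance cancellation in the Kaiser limit.
# For two tracers of the same matter field, the ratio of redshift-space
# power spectra is independent of P_matter(k):
#   P1/P2 = [b1 (1 + beta1 mu^2)]^2 / [b2 (1 + beta2 mu^2)]^2,
# so the stochastic matter amplitude drops out of the ratio.
rng = np.random.default_rng(1)
mu = 0.7                      # cosine of the angle to the line of sight
b1, b2, f = 1.3, 2.0, 0.8     # illustrative biases and growth rate
beta1, beta2 = f / b1, f / b2

# Random realisations of the matter power in one k-bin (cosmic variance).
P_matter = rng.exponential(1.0e4, size=5)
P1 = (b1 + f * mu**2)**2 * P_matter
P2 = (b2 + f * mu**2)**2 * P_matter

print(P1 / P2)   # constant across realisations: the matter power cancels
print((b1 / b2)**2 * (1 + beta1 * mu**2)**2 / (1 + beta2 * mu**2)**2)
```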