
Fuzzy Supernova Templates II: Parameter Estimation

Published by: Steven Rodney
Publication date: 2010
Research field: Physics
Paper language: English





Wide field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper (Rodney and Tonry, 2009) we introduced the SOFT method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the SDSS and SNLS surveys as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing a root-mean-square scatter in the residuals of RMS_z=0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored Lambda-CDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the RMS scatter in Hubble diagram residuals is 0.18 mags for the SDSS data and 0.28 mags for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mags for the combined SDSS and SNLS data set. Using Monte Carlo simulations we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys.
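The RMS_z statistic quoted above is simply the root-mean-square of the photometric-minus-spectroscopic redshift residuals. A minimal sketch of that computation, using invented illustrative redshift values rather than the paper's actual SDSS/SNLS data:

```python
import numpy as np

# Hypothetical spectroscopic redshifts and SOFT photometric estimates
# (illustrative values only, not from the paper's validation set).
z_spec = np.array([0.12, 0.25, 0.31, 0.44, 0.52])
z_soft = np.array([0.10, 0.27, 0.35, 0.41, 0.55])

# Root-mean-square scatter of the residuals, i.e. the RMS_z statistic.
residuals = z_soft - z_spec
rms_z = np.sqrt(np.mean(residuals ** 2))
print(f"RMS_z = {rms_z:.3f}")
```

The same statistic applied to Hubble-diagram residuals (in magnitudes) gives the 0.18, 0.28, and 0.37 mag figures quoted above.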


Read also

J. Mosher, J. Guy, R. Kessler (2014)
We use simulated SN Ia samples, including both photometry and spectra, to perform the first direct validation of cosmology analysis using the SALT-II light curve model. This validation includes residuals from the light curve training process, systematic biases in SN Ia distance measurements, and the bias on the dark energy equation of state parameter w. Using the SN-analysis package SNANA, we simulate and analyze realistic samples corresponding to the data samples used in the SNLS3 analysis: 120 low-redshift (z < 0.1) SNe Ia, 255 SDSS SNe Ia (z < 0.4), and 290 SNLS SNe Ia (z <= 1). To probe systematic uncertainties in detail, we vary the input spectral model, the model of intrinsic scatter, and the smoothing (i.e., regularization) parameters used during the SALT-II model training. Using realistic intrinsic scatter models results in a slight bias in the ultraviolet portion of the trained SALT-II model, and w biases (w_input - w_recovered) ranging from -0.005 +/- 0.012 to -0.024 +/- 0.010. These biases are indistinguishable from each other within uncertainty; the average bias on w is -0.014 +/- 0.007.
By using the Sloan Digital Sky Survey (SDSS) first year type Ia supernova (SN Ia) compilation, we compare two different approaches (traditional chi^2 and complete likelihood) to determine parameter constraints when the magnitude dispersion is to be estimated as well. We consider cosmological constant + Cold Dark Matter (Lambda CDM) and spatially flat, constant w Dark Energy + Cold Dark Matter (FwCDM) cosmological models and show that, for current data, there is a small difference in the best fit values and $\sim$ 30% difference in confidence contour areas in case the MLCS2k2 light-curve fitter is adopted. For the SALT2 light-curve fitter the differences are less significant ($\lesssim$ 13% difference in areas). In both cases the likelihood approach gives more restrictive constraints. We argue for the importance of using the complete likelihood instead of the chi^2 approach when dealing with parameters in the expression for the variance.
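The distinction this abstract draws can be seen in a toy Hubble-diagram fit: the chi^2 statistic alone cannot constrain a dispersion parameter, because inflating the variance always lowers chi^2, whereas the complete Gaussian likelihood keeps the ln(variance) normalisation term and yields a well-defined estimate. A minimal sketch with invented residuals and errors (not the paper's actual SDSS fit):

```python
import numpy as np

# Toy distance-modulus residuals r_i with measurement errors e_i, plus an
# intrinsic dispersion sigma_int that we want to estimate (illustrative values).
r = np.array([0.05, -0.12, 0.20, -0.08, 0.15])
e = np.array([0.10, 0.12, 0.09, 0.11, 0.10])

def chi2(sigma_int):
    # chi^2 alone: monotonically decreasing in sigma_int, so minimising it
    # drives the dispersion to infinity and cannot constrain it.
    var = e ** 2 + sigma_int ** 2
    return np.sum(r ** 2 / var)

def neg_log_like(sigma_int):
    # Complete Gaussian likelihood: the log(variance) term penalises large
    # sigma_int, giving a well-defined maximum-likelihood estimate.
    var = e ** 2 + sigma_int ** 2
    return 0.5 * np.sum(r ** 2 / var + np.log(2 * np.pi * var))

grid = np.linspace(0.0, 0.5, 501)
best = grid[np.argmin([neg_log_like(s) for s in grid])]
print(f"ML estimate of sigma_int ~ {best:.2f}")
```

With the variance itself a free parameter, only the full likelihood has a stationary point in sigma_int, which is the paper's argument for using it over chi^2.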
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
Cosmological large-scale structure analyses based on two-point correlation functions often assume a Gaussian likelihood function with a fixed covariance matrix. We study the impact on cosmological parameter estimation of ignoring the parameter dependence of this covariance matrix, focusing on the particular case of joint weak-lensing and galaxy clustering analyses. Using a Fisher matrix formalism (calibrated against exact likelihood evaluation in particular simple cases), we quantify the effect of using a parameter dependent covariance matrix on both the bias and variance of the parameters. We confirm that the approximation of a parameter-independent covariance matrix is exceptionally good in all realistic scenarios. The information content in the covariance matrix (in comparison with the two point functions themselves) does not change with the fractional sky coverage. Therefore the increase in information due to the parameter dependent covariance matrix becomes negligible as the number of modes increases. Even for surveys covering less than $1\%$ of the sky, this effect only causes a bias of up to $\mathcal{O}(10\%)$ of the statistical uncertainties, with a misestimation of the parameter uncertainties at the same level or lower. The effect will only be smaller with future large-area surveys. Thus for most analyses the effect of a parameter-dependent covariance matrix can be ignored both in terms of the accuracy and precision of the recovered cosmological constraints.
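The approximation studied here, evaluating the covariance at a fiducial parameter value instead of letting it vary with the sampled parameter, can be illustrated with a toy one-parameter Gaussian likelihood (invented numbers, standing in for the paper's Fisher-matrix analysis):

```python
import numpy as np

# Toy model: 20 data points whose (diagonal) covariance depends on theta
# through var(theta) = 0.1 + 0.05*theta. All values are illustrative.
rng = np.random.default_rng(0)
theta_true = 1.0
d = rng.normal(theta_true, np.sqrt(0.1 + 0.05 * theta_true), size=20)

def neg_log_like(theta, fixed_cov=True):
    # fixed_cov=True evaluates the covariance at the fiducial theta=1
    # (the parameter-independent approximation); otherwise the covariance
    # tracks the sampled theta.
    var = 0.1 + 0.05 * (1.0 if fixed_cov else theta)
    return 0.5 * np.sum((d - theta) ** 2 / var + np.log(2 * np.pi * var))

grid = np.linspace(0.0, 2.0, 2001)
best_fixed = grid[np.argmin([neg_log_like(t, True) for t in grid])]
best_dep = grid[np.argmin([neg_log_like(t, False) for t in grid])]
print(best_fixed, best_dep)  # the two estimates nearly coincide
```

Even in this deliberately covariance-sensitive toy, the two maximum-likelihood estimates differ by far less than the statistical uncertainty, consistent with the abstract's conclusion.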
The ability to obtain reliable point estimates of model parameters is of crucial importance in many fields of physics. This is often a difficult task given that the observed data can have a very high number of dimensions. In order to address this problem, we propose a novel approach to construct parameter estimators with a quantifiable bias using an order expansion of highly compressed deep summary statistics of the observed data. These summary statistics are learned automatically using an information maximising loss. Given an observation, we further show how one can use the constructed estimators to obtain approximate Bayesian computation (ABC) posterior estimates and their corresponding uncertainties that can be used for parameter inference using Gaussian process regression even if the likelihood is not tractable. We validate our method with an application to the problem of cosmological parameter inference from weak lensing mass maps. We show in that case that the constructed estimators are unbiased and have an almost optimal variance, while the posterior distribution obtained with the Gaussian process regression is close to the true posterior and performs as well as or better than comparable methods.
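The ABC step this abstract builds on reduces to comparing a compressed summary of simulated data against the observed summary and keeping parameter draws that come close. A minimal rejection-ABC sketch for a one-parameter model, with the sample mean standing in for the paper's learned deep summary statistics (everything here is an invented toy, not the paper's method):

```python
import numpy as np

# "Observed" data from a hidden parameter; the summary statistic is the mean.
rng = np.random.default_rng(1)
obs = rng.normal(0.5, 0.1, size=100)
s_obs = obs.mean()

accepted = []
for _ in range(20000):
    theta = rng.uniform(0.0, 1.0)            # draw from the prior
    sim = rng.normal(theta, 0.1, size=100)   # forward-simulate a data set
    if abs(sim.mean() - s_obs) < 0.01:       # keep draws with close summaries
        accepted.append(theta)

# The accepted draws approximate the posterior over theta.
posterior = np.array(accepted)
print(posterior.mean(), posterior.std())
```

The paper replaces the hand-picked summary with an information-maximising compression and smooths the resulting posterior with Gaussian process regression; this sketch shows only the rejection kernel both share.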