Type Ia Supernovae (SNe Ia) are widely used to measure the expansion of the Universe. Improving distance measurements of SNe Ia is one way to better constrain the acceleration of the expansion and determine its physical nature. This document develops a new SNe Ia spectral energy distribution (SED) model, called the SUpernova Generator And Reconstructor (SUGAR), which improves the spectral description of SNe Ia and consequently could improve distance measurements. The model is constructed from SNe Ia spectral properties and spectrophotometric data from the Nearby Supernova Factory collaboration. First, a PCA-like method is applied to spectral features measured at maximum light, which allows us to extract the intrinsic properties of SNe Ia. Second, these intrinsic properties are used to extract the average extinction curve. Third, Gaussian Process interpolation allows data taken at different epochs during the lifetime of a SN Ia to be projected onto a fixed time grid. Finally, the three steps are combined to build the SED model as a function of time and wavelength: this is the SUGAR model. The main advancement in SUGAR is the addition of two parameters to characterize SNe Ia variability: the first is tied to the properties of the SNe Ia ejecta velocity, and the second is correlated with their calcium lines. These parameters, together with the high quality of the Nearby Supernova Factory data, make SUGAR an accurate and efficient model for describing the spectra of normal SNe Ia as they brighten and fade. This performance makes SUGAR an excellent SED model for experiments like ZTF, LSST or WFIRST.
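As a rough illustration of the Gaussian Process interpolation step described above, the sketch below regresses a single, hypothetical light curve of toy observations onto a fixed phase grid. The kernel, length scale, and data are assumptions chosen for demonstration and are not the SUGAR training configuration.
\begin{verbatim}
# Illustrative sketch (Python/scikit-learn): Gaussian Process interpolation
# of sparsely sampled observations onto a fixed time grid.  The kernel,
# length scale, and toy data are assumptions, not the SUGAR configuration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy observations: magnitudes at irregular phases relative to B maximum.
phase_obs = np.array([-8.0, -3.0, 1.0, 6.0, 14.0, 22.0])[:, None]
mag_obs = np.array([18.9, 18.4, 18.3, 18.5, 19.0, 19.6])

# Squared-exponential kernel with a ~10-day correlation time, plus a
# white-noise term standing in for measurement uncertainty.
kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(phase_obs, mag_obs)

# Project the observations onto a fixed grid of phases, with uncertainties.
phase_grid = np.linspace(-10.0, 30.0, 41)[:, None]
mag_grid, mag_err = gp.predict(phase_grid, return_std=True)
\end{verbatim}
In practice such an interpolation would be repeated for each spectral element so that every supernova shares the same phase grid before the SED model is built.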
Type Ia supernova cosmology depends on the ability to fit and standardize observations of supernova magnitudes with an empirical model. We present here a series of new models of Type Ia supernova spectral time series that capture a greater amount of supernova diversity than is possible with the models currently in customary use. These are entitled SuperNova Empirical MOdels (\textsc{SNEMO}\footnote{https://snfactory.lbl.gov/snemo}). The models are constructed using spectrophotometric time series from $172$ individual supernovae from the Nearby Supernova Factory, comprising more than $2000$ spectra. Using the available observations, Gaussian Processes are used to predict a full spectral time series for each supernova. A matrix is constructed from the spectral time series of all the supernovae, and Expectation-Maximization Factor Analysis is used to calculate the principal components of the data. K-fold cross-validation then determines the selection of model parameters and accounts for color variation in the data. Based on this process, the final models are trained on supernovae that have been dereddened using the Fitzpatrick and Massa extinction relation. Three final models are presented here: \textsc{SNEMO2}, a two-component model for comparison with current Type~Ia models; \textsc{SNEMO7}, a seven-component model chosen for standardizing supernova magnitudes, which results in a total dispersion of $0.100$~mag for a validation set of supernovae, of which $0.087$~mag is unexplained (a total dispersion of $0.113$~mag with unexplained dispersion of $0.097$~mag is found for the combined set of training and validation supernovae); and \textsc{SNEMO15}, a comprehensive $15$-component model that maximizes the amount of spectral time series behavior captured.
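To make the decomposition step concrete, the sketch below runs a factor analysis on a placeholder data matrix whose rows would be the dereddened, Gaussian-Process-predicted spectral time series, one row per supernova. scikit-learn's FactorAnalysis is used here only as a stand-in for the Expectation-Maximization Factor Analysis of the SNEMO training; the matrix contents and dimensions are illustrative assumptions.
\begin{verbatim}
# Illustrative sketch (Python/scikit-learn): factor analysis of a matrix of
# spectral time series, one row per supernova.  FactorAnalysis stands in for
# the EM Factor Analysis of the SNEMO training; the data are placeholders.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_sne, n_bins = 172, 500               # supernovae x (phase, wavelength) bins
X = rng.normal(size=(n_sne, n_bins))   # placeholder for dereddened spectra

fa = FactorAnalysis(n_components=7)    # cf. the seven-component SNEMO7 model
coefficients = fa.fit_transform(X)     # per-supernova model coefficients
components = fa.components_            # spectral time-series components
mean_sed = fa.mean_                    # mean spectral time series
\end{verbatim}
In the actual training, the number of components (2, 7, or 15) and the treatment of color variation are selected by K-fold cross-validation, as described above.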
We study the observables of 158 relatively normal Type Ia supernovae (SNe Ia) by dividing them into two groups in terms of the expansion velocity inferred from the absorption minimum of the Si II $\lambda$6355 line in their spectra near $B$-band maximum brightness. One group (Normal) consists of normal SNe Ia populating a narrow strip in the Si II velocity distribution, with an average expansion velocity $v = 10{,}600 \pm 400$ km/s near $B$ maximum; the other group (HV) consists of objects with higher velocities, $v > 11{,}800$ km/s. Compared with the Normal group, the HV one shows a narrower distribution in both the peak luminosity and the luminosity decline rate $\Delta m_{15}$. In particular, their $B-V$ colors at maximum brightness are found to be on average redder by ${\sim}0.1$, suggesting that they either are associated with dusty environments or have intrinsically red $B-V$ colors. The HV SNe Ia are also found to prefer a lower extinction ratio, $R_V \sim 1.6$ (versus ${\sim}2.4$ for the Normal ones). Applying such an absorption-correction dichotomy to SNe Ia of these two groups remarkably reduces the dispersion in their peak luminosity from 0.178 mag to only 0.125 mag.
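A minimal sketch of this absorption-correction dichotomy is given below: peak $B$ magnitudes are corrected with $A_B = (R_V + 1)\,E(B-V)$, using $R_V \sim 1.6$ for HV objects and ${\sim}2.4$ for Normal ones. The function name, velocity split, and example numbers are assumptions for illustration only.
\begin{verbatim}
# Illustrative sketch (Python): extinction-correct peak B magnitudes with a
# different R_V for each Si II velocity group.  The velocity split and the
# example numbers are assumptions for demonstration only.
def corrected_peak_mag(m_B, ebv, si_ii_velocity, v_split=11_800.0):
    """Peak B magnitude corrected with A_B = (R_V + 1) * E(B-V)."""
    r_v = 1.6 if si_ii_velocity > v_split else 2.4   # HV vs Normal group
    return m_B - (r_v + 1.0) * ebv

# Example: a hypothetical HV supernova with E(B-V) = 0.10 mag.
print(corrected_peak_mag(m_B=15.30, ebv=0.10, si_ii_velocity=12_500.0))
\end{verbatim}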
A spectral-energy distribution (SED) model for Type Ia supernovae (SNe Ia) is a critical tool for measuring precise and accurate distances across a large redshift range and constraining cosmological parameters. We present an improved model framework, SALT3, which has several advantages over current models, including the leading SALT2 model (SALT2.4). While SALT3 has a similar philosophy, it differs from SALT2 by having improved estimation of uncertainties, better separation of color and light-curve stretch, and a publicly available training code. We present the application of our training method on a cross-calibrated compilation of 1083 SNe with 1207 spectra. Our compilation is $2.5\times$ larger than the SALT2 training sample and has greatly reduced calibration uncertainties. The resulting trained SALT3.K21 model has an extended wavelength range of $2000$-$11000$~\AA\ ($1800$~\AA\ redder) and reduced uncertainties compared to SALT2, enabling accurate use of low-$z$ $I$ and $iz$ photometric bands. Including these previously discarded bands, SALT3.K21 reduces the Hubble scatter of the low-$z$ Foundation and CfA3 samples by 15\% and 10\%, respectively. To check for potential systematic uncertainties, we compare distances of low ($0.01<z<0.2$) and high ($0.4<z<0.6$) redshift SNe in the training compilation, finding an insignificant $2\pm14$~mmag shift between SALT2.4 and SALT3.K21. While the SALT3.K21 model was trained on optical data, our method can be used to build a model for rest-frame NIR samples from the Roman Space Telescope. Our open-source training code, public training data, model, and documentation are available at https://saltshaker.readthedocs.io/en/latest/, and the model is integrated into the sncosmo and SNANA software packages.
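Since the abstract notes that the model is integrated into sncosmo, a minimal fitting sketch is shown below. It assumes a recent sncosmo version in which the 'salt3' source is registered, uses sncosmo's bundled example light curve, and applies arbitrary fit bounds; it is not the SALT3.K21 training or analysis pipeline.
\begin{verbatim}
# Illustrative sketch (Python/sncosmo): fit a light curve with the SALT3
# model.  Assumes a sncosmo version with the 'salt3' source registered; the
# data are sncosmo's bundled example photometry and the bounds are arbitrary.
import sncosmo

data = sncosmo.load_example_data()       # demonstration photometric table
model = sncosmo.Model(source='salt3')    # parameters: z, t0, x0, x1, c

model.set(z=data.meta['z'])              # fix the (known) redshift
result, fitted_model = sncosmo.fit_lc(
    data, model,
    ['t0', 'x0', 'x1', 'c'],             # parameters varied in the fit
    bounds={'x1': (-3.0, 3.0), 'c': (-0.3, 0.3)},
)
print(result.param_names, result.parameters)   # best-fit parameter values
\end{verbatim}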
Aims: Spectroscopic observations of Type Ia supernovae obtained at the New Technology Telescope (NTT) and the Nordic Optical Telescope (NOT), in conjunction with the SDSS-II Supernova Survey, are analysed. We use spectral indicators measured up to a month after the lightcurve peak luminosity to characterise the supernova properties, and examine these for potential correlations with host galaxy type, lightcurve shape, colour excess, and redshift. Methods: Our analysis is based on 89 Type Ia supernovae at the redshift interval $z = 0.05$-$0.3$, for which multiband SDSS photometry is available. A lower-$z$ spectroscopy reference sample was used for comparisons over cosmic time. We present measurements of time series of pseudo equivalent widths and line velocities of the main spectral features in Type Ia supernovae. Results: Supernovae with shallower features are found predominantly among the intrinsically brighter, slow-declining supernovae. We detect the strongest correlation between lightcurve stretch and the Si II 4000 absorption feature, which also correlates with the estimated mass and star formation rate of the host galaxy. We also report a tentative correlation between colour excess and spectral properties. If confirmed, this would suggest that moderate reddening of Type Ia supernovae is dominated by effects in the explosion or its immediate environment, as opposed to extinction by interstellar dust.
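As an illustration of the pseudo equivalent width measurements referred to above, the sketch below integrates the fractional flux deficit of an absorption feature below a straight-line pseudo-continuum drawn between two anchor wavelengths. The anchor points and the toy spectrum are assumptions, not the measurement configuration of the study.
\begin{verbatim}
# Illustrative sketch (Python): pseudo-equivalent width (pEW) of an absorption
# feature, measured against a straight-line pseudo-continuum between two
# anchor wavelengths.  The anchors and the toy spectrum are assumptions.
import numpy as np

def pseudo_equivalent_width(wave, flux, blue_anchor, red_anchor):
    """pEW in Angstroms between the two pseudo-continuum anchor wavelengths."""
    mask = (wave >= blue_anchor) & (wave <= red_anchor)
    w, f = wave[mask], flux[mask]
    continuum = np.interp(w, [w[0], w[-1]], [f[0], f[-1]])  # linear continuum
    return np.trapz(1.0 - f / continuum, w)                 # flux deficit

# Example: a toy Gaussian absorption feature near 6150 Angstroms.
wave = np.linspace(5900.0, 6400.0, 501)
flux = 1.0 - 0.4 * np.exp(-0.5 * ((wave - 6150.0) / 60.0) ** 2)
print(pseudo_equivalent_width(wave, flux, 5950.0, 6350.0))
\end{verbatim}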
Improving the use of Type Ia supernovae (SNIa) as standard candles requires a better approach to incorporate the relationship between SNIa and the properties of their host galaxies. Using a spectroscopically confirmed sample of ${\sim}1600$ SNIa, we develop the first empirical model of underlying populations for SNIa light-curve properties that includes their dependence on host-galaxy stellar mass. These populations are important inputs to simulations that are used to model selection effects and correct distance biases within the BEAMS with Bias Correction (BBC) framework. Here we improve BBC to also account for SNIa-host correlations, and we validate this technique on simulated data samples. We recover the input relationship between SNIa luminosity and host-galaxy stellar mass (the mass step, $\gamma$) to within 0.004 mag, a factor of 5 improvement over the previous method, which results in a $\gamma$-bias of ${\sim}0.02$. We adapt BBC for a novel dust-based model of intrinsic brightness variations, which results in a greatly reduced mass step for data ($\gamma = 0.017 \pm 0.008$) and for simulations ($\gamma = 0.006 \pm 0.007$). Analysing simulated SNIa, the biases on the dark energy equation of state, $w$, vary from $\Delta w = 0.006(5)$ to $0.010(5)$ with our new BBC method; these biases are significantly smaller than the $0.02(5)$ $w$-bias found using previous BBC methods that ignore SNIa-host correlations.
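As a point of reference for the mass step $\gamma$ discussed above, the sketch below estimates it naively as the difference in mean Hubble residual across a split in host stellar mass, with a standard error from the scatter in each bin. The split at $\log_{10}(M_*/M_\odot) = 10$ and the inputs are assumptions; the analysis above instead folds the host dependence into the BBC bias-correction framework.
\begin{verbatim}
# Illustrative sketch (Python): a naive estimate of the host-mass step, gamma,
# as the difference in mean Hubble residual across a host stellar-mass split.
# The split value and inputs are assumptions; the BBC framework described
# above treats this dependence with full bias corrections instead.
import numpy as np

def mass_step(hubble_residuals, log_host_mass, split=10.0):
    """Return (gamma, standard error) from a simple high/low mass split."""
    resid = np.asarray(hubble_residuals, dtype=float)
    high = np.asarray(log_host_mass, dtype=float) >= split
    gamma = resid[high].mean() - resid[~high].mean()
    err = np.sqrt(resid[high].var(ddof=1) / high.sum()
                  + resid[~high].var(ddof=1) / (~high).sum())
    return gamma, err
\end{verbatim}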