
Reducing Zero-point Systematics in Dark Energy Supernova Experiments

Submitted by Lorenzo Faccioli
Published: 2010
Research field: Physics
Paper language: English





We study the effect of filter zero-point uncertainties on future supernova dark energy missions. Fitting for calibration parameters using simultaneous analysis of all Type Ia supernova standard candles achieves a significant improvement over more traditional fit methods. This conclusion is robust under diverse experimental configurations (number of observed supernovae, maximum survey redshift, inclusion of additional systematics). This approach to supernova fitting considerably eases otherwise stringent mission calibration requirements. As an example we simulate a space-based mission based on the proposed JDEM satellite; however the method and conclusions are general and valid for any future supernova dark energy mission, ground or space-based.
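As a rough sketch of this idea (not the authors' pipeline: the survey layout, filter assignment, noise levels, and priors below are invented for illustration), the filter zero-point offsets can be included as nuisance parameters, with Gaussian priors set by the assumed calibration uncertainty, and fit simultaneously with the cosmological parameters:

    # Toy simultaneous fit of cosmology and filter zero-point offsets (illustrative only).
    import numpy as np
    from scipy.optimize import least_squares

    C_KM_S, H0 = 299792.458, 70.0            # km/s, km/s/Mpc
    ZP_PRIOR_SIGMA = 0.02                     # assumed calibration uncertainty per filter (mag)

    def distance_modulus(z, om, w):
        """Distance modulus in a flat wCDM cosmology (trapezoid grid + interpolation)."""
        zg = np.linspace(0.0, 1.6, 400)
        E = np.sqrt(om * (1 + zg) ** 3 + (1 - om) * (1 + zg) ** (3 * (1 + w)))
        dc = np.concatenate([[0.0], np.cumsum(0.5 * (1 / E[1:] + 1 / E[:-1]) * np.diff(zg))])
        dl = (1 + z) * (C_KM_S / H0) * np.interp(z, zg, dc)
        return 5 * np.log10(dl) + 25

    # Toy survey: each supernova is measured mainly through one of 4 filters, set by redshift.
    rng = np.random.default_rng(1)
    z = rng.uniform(0.05, 1.5, 2000)
    band = np.digitize(z, [0.4, 0.8, 1.2])
    true_zp = np.array([0.0, 0.01, -0.015, 0.02])        # unknown zero-point errors (mag)
    m_obs = distance_modulus(z, 0.3, -1.0) + true_zp[band] + rng.normal(0, 0.15, z.size)

    def residuals(p):
        om, w, *zp = p
        r = (m_obs - distance_modulus(z, om, w) - np.asarray(zp)[band]) / 0.15
        prior = np.asarray(zp) / ZP_PRIOR_SIGMA          # Gaussian prior on each offset
        return np.concatenate([r, prior])

    fit = least_squares(residuals, x0=[0.3, -1.0, 0.0, 0.0, 0.0, 0.0])
    print("Omega_m, w:", fit.x[:2], " recovered zero-points:", fit.x[2:])

The point of the joint fit is that the supernovae themselves constrain the offsets beyond the prior, instead of the offsets silently biasing the recovered equation of state; this is the sense in which the approach relaxes the mission calibration requirements.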




Read also

Kevin Cahill (2019)
A quantum field theory has finite zero-point energy if the sum over all boson modes $b$ of the $n$th power of the boson mass $m_b^n$ equals the sum over all fermion modes $f$ of the $n$th power of the fermion mass $m_f^n$ for $n = 0$, 2, and 4. The zero-point energy of a theory that satisfies these three conditions with otherwise random masses is huge compared to the density of dark energy. But if, in addition to satisfying these conditions, the sum of $m_b^4 \log(m_b/\mu)$ over all boson modes $b$ equals the sum of $m_f^4 \log(m_f/\mu)$ over all fermion modes $f$, then the zero-point energy of the theory is zero. The value of the mass parameter $\mu$ is irrelevant in view of the third condition ($n=4$). The particles of the standard model do not remotely obey any of these four conditions. But an inclusive theory that describes the particles of the standard model, the particles of dark matter, and all particles that have not yet been detected might satisfy all four conditions if pseudomasses are associated with the mean values in the vacuum of the divergences of the interactions of the inclusive model. Dark energy then would be the finite potential energy of the inclusive theory.
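In the notation of the abstract, the four conditions can be transcribed as

$$ \sum_b m_b^n = \sum_f m_f^n, \qquad n = 0,\, 2,\, 4, $$

$$ \sum_b m_b^4 \log\frac{m_b}{\mu} = \sum_f m_f^4 \log\frac{m_f}{\mu}, $$

where the sums run over boson modes $b$ and fermion modes $f$: the first three conditions make the zero-point energy finite, the fourth makes it vanish, and the choice of $\mu$ drops out because of the $n = 4$ condition.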
Lu Yin (2020)
We explore a new dark fluid model in the early universe to significantly relieve the Hubble tension, i.e. the discrepancy between the Hubble constant inferred from the CMB within $\Lambda$CDM and that obtained from local measurements. We discuss, for the first time, an exponential form of the equation of state in acoustic dark energy, in which gravitational effects from its acoustic oscillations can affect the CMB around the epoch of matter-radiation equality; we call this the exponential acoustic dark energy (eADE) model. Constraints are derived from current cosmological data, and the model's phenomenology is compared with the standard model through the CMB and matter power spectra. We find $H_0 = 71.65^{+1.62}_{-4.4}$ at 68% C.L., with a smaller best-fit $\chi^2$ than that of $\Lambda$CDM.
We implement a linear model for mitigating the effect of observing conditions and other sources of contamination in galaxy clustering analyses. Our treatment improves upon the fiducial systematics treatment of the Dark Energy Survey (DES) Year 1 (Y1) cosmology analysis in four crucial ways. Specifically, our treatment: 1) does not require decisions as to which observable systematics are significant and which are not, allowing for the possibility of multiple maps adding coherently to give rise to significant bias even if no single map leads to a significant bias by itself; 2) characterizes both the statistical and systematic uncertainty in our mitigation procedure, allowing us to propagate said uncertainties into the reported cosmological constraints; 3) explicitly exploits the full spatial structure of the galaxy density field to differentiate between cosmology-sourced and systematics-sourced fluctuations within the galaxy density field; 4) is fully automated, and can therefore be trivially applied to any data set. The updated correlation function for the DES Y1 redMaGiC catalog minimally impacts the cosmological posteriors from that analysis. Encouragingly, our analysis does improve the goodness-of-fit statistic of the DES Y1 3$\times$2pt data set ($\Delta\chi^2 = -6.5$ with no additional parameters). This improvement is due in nearly equal parts to both the change in the correlation function and the added statistical and systematic uncertainties associated with our method. We expect the difference in mitigation techniques to become more important in future work as the size of cosmological data sets grows.
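A minimal sketch of the first two ingredients (fitting all survey-property maps jointly and keeping the coefficient covariance for error propagation); this is not the DES code, and the pixel count, number of maps, and contamination coefficients are invented for illustration:

    # Toy linear contamination model: delta_obs = delta_true + S @ a (illustrative only).
    import numpy as np

    rng = np.random.default_rng(0)
    npix, nmaps = 50_000, 18                     # unmasked pixels, survey-property maps
    S = rng.normal(size=(npix, nmaps))           # standardized survey-property maps
    a_true = np.zeros(nmaps)
    a_true[[2, 7, 11]] = [0.02, -0.03, 0.015]    # a few genuinely contaminating maps
    delta_obs = rng.normal(0, 0.1, npix) + S @ a_true   # observed galaxy overdensity

    # Joint least-squares fit over ALL maps at once -- no per-map significance cut.
    coef, *_ = np.linalg.lstsq(S, delta_obs, rcond=None)
    resid = delta_obs - S @ coef
    cov_coef = resid.var() * np.linalg.inv(S.T @ S)     # statistical uncertainty of the fit

    delta_clean = delta_obs - S @ coef           # mitigated density field
    # cov_coef would then be propagated into the clustering covariance, so the correction's
    # own uncertainty enters the reported cosmological constraints.
    print(np.round(coef[[2, 7, 11]], 3), np.sqrt(np.diag(cov_coef)).max())

The abstract's third and fourth points (exploiting the spatial structure of the density field, and full automation) are beyond this toy example.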
M. Gatti, P. Vielzeuf, C. Davis (2017)
We use numerical simulations to characterize the performance of a clustering-based method to calibrate photometric redshift biases. In particular, we cross-correlate the weak lensing (WL) source galaxies from the Dark Energy Survey Year 1 (DES Y1) sample with redMaGiC galaxies (luminous red galaxies with secure photometric redshifts) to estimate the redshift distribution of the former sample. The recovered redshift distributions are used to calibrate the photometric redshift bias of standard photo-$z$ methods applied to the same source galaxy sample. We apply the method to three photo-$z$ codes run in our simulated data: Bayesian Photometric Redshift (BPZ), Directional Neighborhood Fitting (DNF), and Random Forest-based photo-$z$ (RF). We characterize the systematic uncertainties of our calibration procedure, and find that these systematic uncertainties dominate our error budget. The dominant systematics are due to our assumption of unevolving bias and clustering across each redshift bin, and to differences between the shapes of the redshift distributions derived by clustering vs. photo-$z$s. The systematic uncertainty in the mean redshift bias of the source galaxy sample is $\Delta z \lesssim 0.02$, though the precise value depends on the redshift bin under consideration. We discuss possible ways to mitigate the impact of our dominant systematics in future analyses.
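A toy version of a clustering-redshift estimate in this spirit (a Newman/Menard-style recovery of n(z) from cross-correlation amplitudes divided by the reference bias); the redshift grid, bias models, and noise level are hypothetical, and the constant source bias assumed below is exactly the kind of assumption the abstract identifies as a dominant systematic:

    # Toy clustering-z recovery and mean-redshift calibration (illustrative only).
    import numpy as np

    z = np.linspace(0.1, 1.3, 25)                    # reference (redMaGiC-like) redshift bins
    n_true = np.exp(-0.5 * ((z - 0.6) / 0.15) ** 2)  # toy true source n(z)
    n_true /= n_true.sum()

    b_r = 1.5 + 0.5 * z                              # reference-sample bias model
    b_u = 1.2                                        # source bias, assumed constant with z
    w_ur = 0.01 * b_u * b_r * n_true                 # toy cross-correlation amplitudes
    w_ur += np.random.default_rng(3).normal(0, 2e-5, z.size)   # measurement noise

    n_est = np.clip(w_ur / b_r, 0, None)             # undo reference bias; source bias unknown
    n_est /= n_est.sum()

    delta_z = (z * n_est).sum() - (z * n_true).sum() # shift in mean redshift
    print(f"recovered mean-redshift bias: {delta_z:+.4f}")

In the real analysis the estimated n(z) is compared with the photo-$z$ n(z) to calibrate the latter's bias; the comparison with the true n(z) here simply shows the residual error of the estimator itself.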
P. Astier, J. Guy, R. Pain (2010)
We present a forecast of dark energy constraints that could be obtained from a large sample of distances to Type Ia supernovae detected and measured from space. We simulate the supernova events as they would be observed by a EUCLID-like telescope with its two imagers, assuming those would be equipped with 4 visible and 3 near-infrared swappable filters. We account for known systematic uncertainties affecting the cosmological constraints, including those arising through the training of the supernova model used to fit the supernova light curves. Using conservative assumptions and Planck priors, we find that an 18-month survey would yield constraints on the dark energy equation of state comparable to the cosmic shear approach in EUCLID: a variable two-parameter equation of state can be constrained to ~0.03 at z~0.3. These constraints are derived from distances to about 13,000 supernovae out to z=1.5, observed in two cones of 10 and 50 deg^2, and do not require measuring a nearby supernova sample from the ground. Provided swappable filters can be accommodated on EUCLID, distances to supernovae can be measured from space and contribute to obtaining the most precise constraints on dark energy properties.
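A schematic Fisher-matrix forecast in the same spirit (not the paper's analysis, which also models light-curve training and other systematics); the redshift binning, per-supernova scatter, nuisance parameter, and prior below are placeholders:

    # Toy Fisher forecast for (w0, wa) from ~13,000 SN Ia distances (illustrative only).
    import numpy as np
    from scipy.integrate import quad

    C_KM_S, H0 = 299792.458, 70.0

    def mu_model(z, om, w0, wa):
        """Distance modulus for a flat w0-wa (CPL) dark energy cosmology."""
        de = lambda x: (1 + x) ** (3 * (1 + w0 + wa)) * np.exp(-3 * wa * x / (1 + x))
        E = lambda x: np.sqrt(om * (1 + x) ** 3 + (1 - om) * de(x))
        dc = np.array([quad(lambda x: 1 / E(x), 0, zi)[0] for zi in z])
        return 5 * np.log10((1 + z) * (C_KM_S / H0) * dc) + 25

    zb = np.linspace(0.1, 1.5, 30)                   # effective redshift bins
    sigma_bin = 0.12 / np.sqrt(13000 / zb.size)      # 13,000 SNe, 0.12 mag scatter each

    p0, eps = np.array([0.3, -1.0, 0.0]), 1e-3       # fiducial Omega_m, w0, wa
    J = np.column_stack(
        [(mu_model(zb, *(p0 + eps * np.eye(3)[i]))
          - mu_model(zb, *(p0 - eps * np.eye(3)[i]))) / (2 * eps) for i in range(3)]
        + [np.ones_like(zb)])                        # absolute-magnitude nuisance parameter
    F = J.T @ J / sigma_bin ** 2
    F[0, 0] += 1 / 0.01 ** 2                         # stand-in for a Planck-like Omega_m prior
    cov = np.linalg.inv(F)
    print("sigma(w0), sigma(wa):", np.sqrt(cov[1, 1]), np.sqrt(cov[2, 2]))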