
Cosmic Shear Systematics: Software-Hardware Balance

Published by: Adam Amara
Publication date: 2009
Research field: Physics
Paper language: English





Cosmic shear measurements rely on our ability to measure and correct the Point Spread Function (PSF) of the observations. This PSF is measured using stars in the field, which provide a noisy measure at random points across the field. Using Wiener filtering, we show how errors in this PSF correction process propagate into shear power spectrum errors. This allows us to test future space-based missions, such as Euclid or JDEM, and thereby to set clear engineering specifications on PSF variability. For ground-based surveys, where the variability of the PSF is dominated by the environment, we briefly discuss how our approach can also be used to study the potential of mitigation techniques, such as correlating galaxy shapes in different exposures. To illustrate our approach, we show that for a Euclid-like survey to be statistics limited, an initial pre-correction PSF ellipticity power spectrum with a power-law slope of -3 must have an amplitude at l = 1000 of less than 2 x 10^{-13}. This is 1500 times smaller than the typical lensing signal at this scale. We also find that the power spectrum of the PSF size (dR^2) at this scale must be below 2 x 10^{-12}. Public code is available as part of iCosmo at http://www.icosmo.org.
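As a rough illustration of the kind of requirement check this enables, here is a minimal sketch (not the iCosmo code): the power-law form, the 2 x 10^{-13} and 2 x 10^{-12} thresholds and the factor-of-1500 signal ratio are taken from the abstract; everything else, including the function names and the test amplitude, is assumed for illustration.

```python
import numpy as np

# Illustrative check of the quoted PSF requirements (not the iCosmo code itself).
# Assumed: a pure power-law PSF ellipticity power spectrum C_psf(l) = A * (l/1000)^(-3).

ELL_REF = 1000.0
REQ_ELLIPTICITY = 2e-13   # required amplitude of the PSF ellipticity spectrum at l = 1000
REQ_SIZE = 2e-12          # required amplitude of the PSF size (dR^2) spectrum at l = 1000
LENSING_AT_REF = 1500.0 * REQ_ELLIPTICITY  # "1500 times smaller than the typical lensing signal"

def psf_power(ell, amplitude_at_1000, slope=-3.0):
    """Power-law model for the pre-correction PSF ellipticity power spectrum."""
    return amplitude_at_1000 * (ell / ELL_REF) ** slope

def meets_requirement(amplitude_at_1000, requirement=REQ_ELLIPTICITY):
    """True if the modelled PSF spectrum satisfies the quoted amplitude requirement."""
    return psf_power(ELL_REF, amplitude_at_1000) < requirement

if __name__ == "__main__":
    test_amplitude = 5e-14  # hypothetical instrument model, not a real specification
    print("PSF ellipticity requirement met:", meets_requirement(test_amplitude))
    print("Lensing signal / requirement at l = 1000:", LENSING_AT_REF / REQ_ELLIPTICITY)
```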




Read also

With the advent of large-scale weak lensing surveys there is a need to understand how realistic, scale-dependent systematics bias cosmic shear and dark energy measurements, and how they can be removed. Here we describe how spatial variations in the amplitude and orientation of realistic image distortions convolve with the measured shear field, mixing the even-parity convergence and odd-parity modes, and bias the shear power spectrum. Many of these biases can be removed by calibration to external data, the survey itself, or by modelling in simulations. The uncertainty in the calibration must be marginalised over, and we calculate how this propagates into parameter estimation, degrading the dark energy Figure-of-Merit. We find that noise-like biases affect dark energy measurements the most, while spikes in the bias power have the least impact, reflecting their correlation with the effect of cosmological parameters. We argue that in order to remove systematic biases in cosmic shear surveys and maintain statistical power, effort should be put into improving the accuracy of the bias calibration rather than minimising the size of the bias. In general, this appears to be a weaker condition for bias removal. We also investigate how to minimise the size of the calibration set for a fixed reduction in the Figure-of-Merit. These results can be used to model the effect of biases and calibration on a cosmic shear survey accurately, assess their impact on the measurement of modified gravity and dark energy models, and to optimise surveys and calibration requirements.
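For readers unfamiliar with how a marginalised calibration uncertainty degrades the dark energy Figure-of-Merit, the following is a minimal Fisher-matrix sketch; the 3x3 matrix and the prior widths are made-up illustrations of the standard formalism, not the paper's forecasting code.

```python
import numpy as np

# Toy Fisher matrix over (w0, wa, b), where b is one calibration nuisance parameter.
# Only the bookkeeping is standard; the numbers are invented for illustration.

def fom_with_calibration_prior(fisher, prior_sigma):
    """Figure of Merit on (w0, wa) after adding a Gaussian prior of width
    prior_sigma on the nuisance parameter (index 2) and marginalising over it."""
    f = fisher.copy()
    f[2, 2] += 1.0 / prior_sigma**2          # external calibration information
    cov = np.linalg.inv(f)                   # marginalised parameter covariance
    cov_de = cov[:2, :2]                     # (w0, wa) block
    return 1.0 / np.sqrt(np.linalg.det(cov_de))

F = np.array([[ 40.0, -12.0,   8.0],
              [-12.0,   6.0,  -3.0],
              [  8.0,  -3.0,   5.0]])

for sigma_b in (1.0, 0.1, 0.01):
    print(f"calibration prior sigma = {sigma_b:5.2f} -> FoM = "
          f"{fom_with_calibration_prior(F, sigma_b):.2f}")
```

Tightening the calibration prior recovers the Figure-of-Merit, which is the sense in which calibration accuracy, rather than raw bias size, controls the statistical power.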
We analyse three public cosmic shear surveys: the Kilo-Degree Survey (KiDS-450), the Dark Energy Survey (DES-SV) and the Canada France Hawaii Telescope Lensing Survey (CFHTLenS). Adopting the COSEBIs statistic to cleanly and completely separate the lensing E-modes from the non-lensing B-modes, we detect B-modes in KiDS-450 and CFHTLenS at the level of about 2.7$\sigma$. For DES-SV we detect B-modes at the level of 2.8$\sigma$ in a non-tomographic analysis, increasing to a 5.5$\sigma$ B-mode detection in a tomographic analysis. In order to understand the origin of these detected B-modes we measure the B-mode signature of a range of different simulated systematics, including PSF leakage, random but correlated PSF modelling errors, camera-based additive shear bias and photometric redshift selection bias. We show that any correlation between photometric noise and the relative orientation of the galaxy to the point spread function leads to an ellipticity selection bias in tomographic analyses. This work therefore introduces a new systematic for future lensing surveys to consider. We find that the B-modes in DES-SV appear similar to a superposition of the B-mode signatures from all of the systematics simulated. The KiDS-450 and CFHTLenS B-mode measurements show features that are consistent with a repeating additive shear bias.
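A detection quoted as "B-modes at the level of N sigma" is conventionally a chi-squared test of the measured B-modes against zero given their covariance. The sketch below shows that bookkeeping with placeholder numbers; the B-mode values, covariance and function name are assumptions, not the paper's COSEBIs data.

```python
import numpy as np
from scipy import stats

def b_mode_significance(b_modes, covariance):
    """Equivalent Gaussian sigma for the measured B-modes being inconsistent with zero."""
    b = np.asarray(b_modes)
    chi2 = b @ np.linalg.solve(np.asarray(covariance), b)  # chi^2 against the null (zero B-modes)
    p_value = stats.chi2.sf(chi2, df=b.size)                # probability under the null
    return stats.norm.isf(p_value)                          # one-sided equivalent Gaussian sigma

# Placeholder data: 5 COSEBIs-like B-modes and a diagonal covariance.
rng = np.random.default_rng(0)
cov = np.diag(np.full(5, 1e-21))
b_measured = rng.normal(0.0, 1e-10, size=5)  # hypothetical, mildly non-zero B-modes

print(f"B-mode detection significance: {b_mode_significance(b_measured, cov):.1f} sigma")
```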
Hardware flaws are permanent and potent: hardware cannot be patched once fabricated, and any flaws may undermine any software executing on top. Consequently, verification time dominates implementation time. The gold standard in hardware Design Verifi cation (DV) is concentrated at two extremes: random dynamic verification and formal verification. Both struggle to root out the subtle flaws in complex hardware that often manifest as security vulnerabilities. The root problem with random verification is its undirected nature, making it inefficient, while formal verification is constrained by the state-space explosion problem, making it infeasible against complex designs. What is needed is a solution that is directed, yet under-constrained. Instead of making incremental improvements to existing DV approaches, we leverage the observation that existing software fuzzers already provide such a solution, and adapt them for hardware DV. Specifically, we translate RTL hardware to a software model and fuzz that model. The central challenge we address is how best to mitigate the differences between the hardware execution model and software execution model. This includes: 1) how to represent test cases, 2) what is the hardware equivalent of a crash, 3) what is an appropriate coverage metric, and 4) how to create a general-purpose fuzzing harness for hardware. To evaluate our approach, we fuzz four IP blocks from Googles OpenTitan SoC. Our experiments reveal a two orders-of-magnitude reduction in run time to achieve Finite State Machine (FSM) coverage over traditional dynamic verification schemes. Moreover, with our design-agnostic harness, we achieve over 88% HDL line coverage in three out of four of our designs -- even without any initial seeds.
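To make the "translate RTL to a software model and fuzz it" idea concrete, here is a toy, self-contained sketch: a small FSM stands in for the translated hardware, a byte string is the test case, visited states are the coverage metric, and a failed assertion plays the role of a crash. None of the names, transitions or thresholds correspond to the paper's actual harness or to OpenTitan.

```python
import random

STATES = ("IDLE", "LOAD", "EXEC", "DONE")

def toy_fsm(test_case: bytes):
    """Software stand-in for translated RTL; returns the set of FSM states visited."""
    state, visited = "IDLE", {"IDLE"}
    for b in test_case:
        if state == "IDLE" and b == 0xA5:
            state = "LOAD"
        elif state == "LOAD" and b & 0x0F == 0x3:
            state = "EXEC"
        elif state == "EXEC":
            assert b != 0xFF, "invariant violated"  # the hardware 'crash' equivalent
            state = "DONE"
        visited.add(state)
    return visited

def mutate(seed: bytes) -> bytes:
    """Byte-level mutation of a test case (flip one byte, sometimes append one)."""
    data = bytearray(seed or b"\x00")
    data[random.randrange(len(data))] = random.randrange(256)
    if random.random() < 0.3:
        data.append(random.randrange(256))
    return bytes(data)

def fuzz(iterations=20000):
    """Coverage-guided loop: keep inputs that reach new FSM states."""
    corpus, coverage = [b"\x00"], set()
    for _ in range(iterations):
        case = mutate(random.choice(corpus))
        try:
            visited = toy_fsm(case)
        except AssertionError:
            print("crash-equivalent found:", case.hex())
            return
        if not visited <= coverage:
            coverage |= visited
            corpus.append(case)
    print("coverage reached:", sorted(coverage))

if __name__ == "__main__":
    fuzz()
```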
Density-estimation likelihood-free inference (DELFI) has recently been proposed as an efficient method for simulation-based cosmological parameter inference. Compared to the standard likelihood-based Markov Chain Monte Carlo (MCMC) approach, DELFI has several advantages: it is highly parallelizable, there is no need to assume a possibly incorrect functional form for the likelihood, and complicated effects (e.g. the mask and detector systematics) are easier to handle with forward models. In light of this, we present two DELFI pipelines to perform weak lensing parameter inference with lognormal realizations of the tomographic shear field -- using the C_l summary statistic. The first pipeline accounts for the non-Gaussianities of the shear field, intrinsic alignments and photometric-redshift error. We validate that it is accurate enough for Stage III experiments and estimate that O(1000) simulations are needed to perform inference on Stage IV data. By comparing the second DELFI pipeline, which makes no assumption about the functional form of the likelihood, with the standard MCMC approach, which assumes a Gaussian likelihood, we test the impact of the Gaussian likelihood approximation in the MCMC analysis. We find it has a negligible impact on Stage IV parameter constraints. Our pipeline is a step towards seamlessly propagating all data-processing, instrumental, theoretical and astrophysical systematics through to the final parameter constraints.
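One building block of such a pipeline is generating lognormal realizations of a convergence-like field. The sketch below shows the standard flat-sky transform (exponentiate a Gaussian random field and re-centre it so the mean stays zero), with an arbitrary input power spectrum, shift parameter and normalization rather than the paper's tomographic shear setup.

```python
import numpy as np

def gaussian_field(n, power_spectrum, pixel_size=1.0, seed=0):
    """Gaussian random field on an n x n grid with isotropic power spectrum P(k)."""
    rng = np.random.default_rng(seed)
    k = 2 * np.pi * np.sqrt(np.add.outer(np.fft.fftfreq(n, pixel_size) ** 2,
                                         np.fft.fftfreq(n, pixel_size) ** 2))
    amplitude = np.sqrt(power_spectrum(np.where(k > 0, k, 1.0)))
    amplitude[k == 0] = 0.0                       # zero the mean mode
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(noise * amplitude).real * n / pixel_size  # rough normalization, for illustration only

def lognormal_field(gaussian, shift=0.02):
    """Lognormal transform with zero mean: kappa = shift * (exp(g - var/2) - 1)."""
    var = gaussian.var()
    return shift * (np.exp(gaussian - var / 2.0) - 1.0)

g = gaussian_field(256, lambda k: k ** -2.0)      # arbitrary red power spectrum
kappa = lognormal_field(g / g.std() * 0.5)        # rescale so the exponent stays mild
print("mean and third moment (skewness proxy):", kappa.mean(), ((kappa - kappa.mean()) ** 3).mean())
```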
Stripe 82 in the Sloan Digital Sky Survey was observed multiple times, allowing deeper images to be constructed by coadding the data. Here we analyze the ellipticities of background galaxies in this 275 square degree region, searching for evidence of distortions due to cosmic shear. The E-mode is detected in both real and Fourier space with $>5\sigma$ significance on degree scales, while the B-mode is consistent with zero as expected. The amplitude of the signal constrains the combination of the matter density $\Omega_m$ and fluctuation amplitude $\sigma_8$ to be $\Omega_m^{0.7}\sigma_8 = 0.252^{+0.032}_{-0.052}$.
