The MICROSCOPE space mission, launched on April 25, 2016, aims to test the weak equivalence principle (WEP) to a precision of $10^{-15}$. Reaching this performance requires an accurate and robust data analysis method, especially since any WEP violation signal will be dominated by strongly colored noise. An important complication comes from the fact that some values will be missing; the measured time series will therefore not be strictly regularly sampled. These missing values induce a spectral leakage that significantly increases the noise in Fourier space, where the WEP violation signal is sought, thereby degrading the scientific return. We recently developed an inpainting algorithm to correct the MICROSCOPE data for missing values. This code has been integrated into the official MICROSCOPE data processing pipeline because it enables us to significantly measure an equivalence principle violation (EPV) signal in a model-independent way in the inertial satellite configuration. In this work, we present several improvements to the method that may now allow us to reach the MICROSCOPE requirements for both the inertial and spin satellite configurations. The main improvement comes from using a prior on the power spectrum of the colored noise that can be derived directly from the incomplete data. We show that, after reconstructing missing values with this new algorithm, a least-squares fit may allow us to significantly measure an EPV signal with a $0.96\times10^{-15}$ precision in the inertial mode and a $1.2\times10^{-15}$ precision in the spin mode. Although the inpainting method presented in this paper has been optimized for the MICROSCOPE data, it remains sufficiently general to be used in the broader context of missing data in time series dominated by an unknown colored noise. The improved inpainting software, called ICON, is freely available at http://www.cosmostat.org/software/icon.
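The spectral-leakage problem described above is easy to reproduce. The following toy sketch (plain NumPy, not the MICROSCOPE pipeline; all parameters are invented for illustration) compares the periodogram of a complete sinusoidal time series with that of the same series once a fraction of the samples is missing:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
f0 = 205 / n                      # hypothetical signal frequency, exactly on a bin
signal = np.sin(2 * np.pi * f0 * t)

# Remove 20% of the samples at random locations (set them to zero).
mask = np.ones(n)
mask[rng.choice(n, size=n // 5, replace=False)] = 0.0

spec_full = np.abs(np.fft.rfft(signal)) ** 2
spec_gapped = np.abs(np.fft.rfft(signal * mask)) ** 2

# Far from f0 (upper part of the band), the complete series has essentially
# no power, while the gapped series shows broadband leakage.
hi = slice(3 * n // 8, None)
print(spec_full[hi].max(), spec_gapped[hi].max())
```

The leakage floor raised by the gaps is exactly what hides a small violation signal in Fourier space, and what inpainting is meant to suppress.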
This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. In this paper, we compare the cosmological information extracted from WL peak counts using different filtering techniques applied to the galaxy shear data, including linear filtering with a Gaussian kernel and two compensated filters (the starlet wavelet and the aperture mass), and the nonlinear filtering method MRLens. We present improvements to our model that account for realistic survey conditions, namely masks, shear-to-convergence transformations, and non-constant noise. We create simulated peak counts from our stochastic model, from which we obtain constraints on the matter density $\Omega_\mathrm{m}$, the power spectrum normalisation $\sigma_8$, and the dark-energy parameter $w_0$. We use two methods for parameter inference: a copula likelihood and approximate Bayesian computation (ABC). We measure the contour width in the $\Omega_\mathrm{m}$-$\sigma_8$ degeneracy direction and the figure of merit to compare parameter constraints from the different filtering techniques. We find that starlet filtering outperforms the Gaussian kernel, and that including peak counts from different smoothing scales helps to lift parameter degeneracies. Peak counts from different smoothing scales with a compensated filter show very little cross-correlation, so adding information from different scales can strongly enhance the available information. Measuring peak counts separately for each scale yields tighter constraints than using a combined peak histogram from a single map that includes multiscale information. Our results suggest that a compensated filter function, with counts included separately from different smoothing scales, yields the tightest constraints on cosmological parameters from WL peaks.
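As a rough illustration of the peak-counting step itself, the sketch below (invented map, noise level, and threshold; not the authors' model) smooths a noisy convergence map with a Gaussian kernel and counts local maxima above a signal-to-noise cut:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
kappa = rng.normal(0.0, 0.02, size=(256, 256))   # pure-noise convergence map
kappa[128, 128] += 1.0                           # one injected "halo" pixel

smoothed = ndimage.gaussian_filter(kappa, sigma=2.0)
noise_sigma = smoothed.std()                     # noise level of the smoothed map

# A peak is a pixel equal to the maximum of its 3x3 neighbourhood,
# above a 4-sigma signal-to-noise threshold.
local_max = smoothed == ndimage.maximum_filter(smoothed, size=3)
peaks = local_max & (smoothed > 4.0 * noise_sigma)
print(int(peaks.sum()))
```

A compensated filter (starlet or aperture mass) would replace the Gaussian kernel here; the counting step is unchanged.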
Missing data are a common problem in experimental and observational physics. They can be caused by various sources: instrument saturation, contamination from an external event, or data loss. In particular, they can have a disastrous effect when one seeks to characterize a colored-noise-dominated signal in Fourier space, since they create a spectral leakage that can artificially increase the noise. It is therefore important either to take them into account or to correct for them prior to, e.g., a least-squares fit of the signal to be characterized. In this paper, we present an application of the {\it inpainting} algorithm to mock MICROSCOPE data; {\it inpainting} is based on a sparsity assumption and has already been used in various astrophysical contexts. MICROSCOPE is a French Space Agency mission, whose launch is expected in 2016, that aims to test the weak equivalence principle down to the $10^{-15}$ level. We explore the dependence of {\it inpainting} on the number of gaps and the total fraction of missing values. We show that, in a worst-case scenario, after reconstructing missing values with {\it inpainting}, a least-squares fit may allow us to significantly measure a $1.1\times10^{-15}$ equivalence principle violation signal, which is sufficiently close to the MICROSCOPE requirements to implement {\it inpainting} in the official MICROSCOPE data processing and analysis pipeline. Together with the previously published KARMA method, {\it inpainting} will then allow us to independently characterize and cross-check an equivalence principle violation signal detection down to the $10^{-15}$ level.
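A minimal sketch of sparsity-based inpainting, assuming a DCT dictionary and a linearly decaying hard threshold (a common choice in the sparse-recovery literature; the actual MICROSCOPE implementation may differ in dictionary and threshold schedule):

```python
import numpy as np
from scipy.fft import dct, idct

def inpaint_dct(y, mask, n_iter=100):
    # Iterative hard thresholding in the DCT domain with a linearly
    # decaying threshold; observed samples (mask == 1) are re-imposed
    # at every iteration.
    x = y * mask
    lam_max = np.abs(dct(x, norm='ortho')).max()
    for i in range(n_iter):
        lam = lam_max * (1.0 - i / n_iter)
        c = dct(x, norm='ortho')
        c[np.abs(c) < lam] = 0.0          # keep only significant coefficients
        x = idct(c, norm='ortho')
        x = y * mask + x * (1.0 - mask)   # enforce the observed samples
    return x

# Toy signal that is 2-sparse in the DCT dictionary, with ~30% missing.
rng = np.random.default_rng(2)
n = 512
c_true = np.zeros(n)
c_true[10], c_true[37] = 5.0, 3.0
y = idct(c_true, norm='ortho')
mask = (rng.random(n) > 0.3).astype(float)

x = inpaint_dct(y, mask)
err_zero = np.sqrt(np.mean((y * mask - y) ** 2))   # zero-filled baseline
err_inp = np.sqrt(np.mean((x - y) ** 2))
print(err_zero, err_inp)
```

The reconstruction error is far smaller than simply zero-filling the gaps, which is the behavior that suppresses spectral leakage before a least-squares fit.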
In asteroseismology, the observed time series often suffer from incomplete time coverage due to gaps. The presence of periodic gaps may generate spurious peaks in the power spectrum that limit the analysis of the data. Various methods have been developed to deal with gaps in time series, but it remains important to improve them so as to extract all the information contained in the data. In this paper, we propose a new approach to handle the problem: the so-called inpainting method. This technique, based on a sparsity prior, enables us to judiciously fill in the gaps in the data, preserving the asteroseismic signal as far as possible. The impact of the observational window function is reduced and the interpretation of the power spectrum is simplified. The method is applied to both ground- and space-based data. We find that the inpainting technique improves the detection and estimation of oscillation modes. In addition, it can be used to study very long time series of many stars because its computation is very fast: for a 50-day time series of CoRoT-like data, it allows a speed-up factor of 1000 compared to methods of the same accuracy.
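The spurious peaks produced by periodic gaps can be illustrated in a few lines of NumPy (a toy model with an invented 50% duty cycle, not real asteroseismic data): a square observational window aliases a single oscillation mode into sidelobes at $f_0 \pm k/P_\mathrm{gap}$.

```python
import numpy as np

n = 8192
t = np.arange(n)
f0 = 901 / n                          # oscillation frequency, exactly on a bin
y = np.sin(2 * np.pi * f0 * t)

period = 128                          # invented gap period (samples)
window = (t % period) < period // 2   # 50% duty cycle: observe, then gap
freqs = np.fft.rfftfreq(n)
spec = np.abs(np.fft.rfft(y * window)) ** 2

main = spec[np.argmin(np.abs(freqs - f0))]
alias = spec[np.argmin(np.abs(freqs - (f0 - 1.0 / period)))]
print(alias / main)                   # close to (2/pi)^2 for a square window
```

Such sidelobes are what inpainting removes by filling the gaps instead of leaving the window function imprinted on the spectrum.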
In this paper, we compare three methods to reconstruct galaxy cluster density fields from weak-lensing data. The first method, called FLens, integrates an inpainting concept to invert the shear field with possible gaps, together with a multi-scale entropy denoising procedure to remove the noise contained in the final reconstruction, which arises mostly from the random intrinsic shapes of the galaxies. The second and third methods are based on a model of the density field made of a multi-scale grid of radial basis functions. In one case, the model parameters are computed with a linear inversion involving a singular value decomposition (SVD). In the other case, the model parameters are estimated using a Bayesian MCMC optimization implemented in the lensing software Lenstool. The methods are compared on simulated data with varying galaxy density fields, paying particular attention to the errors estimated with resampling. We find that the multi-scale grid model optimized with MCMC provides the best results, but at high computational cost, especially when resampling is considered. The SVD method is much faster but yields noisy maps, although this can be mitigated with resampling. The FLens method is a good compromise, with fast computation and high signal-to-noise reconstructions, but lower-resolution maps. All three methods are applied to the MACS J0717+3745 galaxy cluster field and reveal the filamentary structure discovered in Jauzac et al. 2012. We conclude that sensitive priors can help to obtain high signal-to-noise, unbiased reconstructions.
We have performed a 70-billion dark-matter-particle N-body simulation in a 2 $h^{-1}$ Gpc periodic box, using the concordance cosmological model favored by the latest WMAP3 results. We have computed a full-sky convergence map with a resolution of $\Delta\theta \simeq 0.74$ arcmin$^{2}$, spanning 4 orders of magnitude in angular dynamical range. Using various high-order statistics on a realistic cut sky, we have characterized the transition from the linear to the nonlinear regime at $\ell \simeq 1000$ and shown that realistic galactic masking affects high-order moments only below $\ell < 200$. Each domain (Gaussian and non-Gaussian) spans 2 decades in angular scale. This map is therefore an ideal tool for testing map-making algorithms on the sphere. As a first step in addressing the full map reconstruction problem, we have benchmarked two denoising methods in this paper: 1) Wiener filtering applied to the spherical harmonics decomposition of the map and 2) a new method, called MRLens, based on a modification of the maximum entropy method on a wavelet decomposition. While Wiener filtering is optimal on large spatial scales, where the signal is Gaussian, MRLens outperforms it on small spatial scales, where the signal is highly non-Gaussian. The simulated full-sky convergence map is freely available to the community to help the development of new map-making algorithms dedicated to the next generation of weak-lensing surveys.
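The first benchmarked method reduces, mode by mode, to the classical Wiener weight $W = P_\mathrm{signal}/(P_\mathrm{signal}+P_\mathrm{noise})$. A 1D Fourier analogue (an illustrative sketch with invented spectra and the signal power taken as known, not the spherical-harmonics code):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 4096

# Smooth "signal": white noise low-pass filtered in Fourier space.
k = np.fft.rfftfreq(n)
lowpass = np.exp(-(k / 0.01) ** 2)
signal = np.fft.irfft(np.fft.rfft(rng.normal(size=n)) * lowpass, n)
signal *= 3.0 / signal.std()          # set the signal standard deviation to 3

sigma_n = 1.0
noisy = signal + rng.normal(0.0, sigma_n, n)

# Per-mode Wiener weights; for np.fft the white-noise power per mode
# is sigma_n^2 * n.
S = np.abs(np.fft.fft(signal)) ** 2
N = sigma_n ** 2 * n
W = S / (S + N)
denoised = np.fft.ifft(W * np.fft.fft(noisy)).real

mse_noisy = np.mean((noisy - signal) ** 2)
mse_denoised = np.mean((denoised - signal) ** 2)
print(mse_noisy, mse_denoised)
```

Because the weight is a per-mode attenuation, the filter is optimal only for Gaussian fields, which is why MRLens wins on the small, non-Gaussian scales.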