The unprecedented amount and excellent quality of lensing data that upcoming ground- and space-based surveys will produce represent a great opportunity to shed light on the questions that remain unanswered about our universe and the validity of the standard $\Lambda$CDM cosmological model. It is therefore important to develop new techniques that can exploit the huge quantity of data that future observations will give us access to as effectively as possible. For this reason, we investigated the development of a new method to treat weak lensing higher-order statistics, which are known to break degeneracies among cosmological parameters thanks to their ability to probe the non-Gaussian properties of the shear field. In particular, the proposed method applies directly to the observed quantity, i.e., the noisy galaxy ellipticity. We produced simulated lensing maps with different sets of cosmological parameters and used them to measure higher-order moments, Minkowski functionals, Betti numbers, and other statistics related to graph theory. This allowed us to construct datasets of varying size, precision, and smoothing. We then applied several machine learning algorithms to determine which method best predicts the actual cosmological parameters associated with each simulation. The best-performing model turned out to be simple multidimensional linear regression. We used this model to compare the results coming from the different datasets and found that we can measure the majority of the parameters we considered with good accuracy. We also investigated the relation between each higher-order estimator and the different cosmological parameters for several signal-to-noise thresholds and redshift bins. Given the promising results, we consider this approach a valuable resource, worthy of further development.
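As an illustration of the kind of higher-order estimators involved, the sketch below computes the three 2D Minkowski functionals (area fraction, boundary length, Euler characteristic) of a smoothed, thresholded map; the Euler characteristic also equals the difference of the two Betti numbers, b0 - b1. This is a minimal toy version, not the paper's pipeline; the smoothing scale and normalizations are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.measure import euler_number, perimeter

def minkowski_functionals(emap, thresholds, smoothing_pix=2.0):
    """Minkowski functionals V0, V1, V2 of a noisy (ellipticity/convergence)
    map, measured as functions of the signal-to-noise threshold nu."""
    smoothed = gaussian_filter(emap, smoothing_pix)
    nu = (smoothed - smoothed.mean()) / smoothed.std()   # S/N units
    v0, v1, v2 = [], [], []
    for t in thresholds:
        mask = nu > t
        v0.append(mask.mean())                     # V0: area fraction above t
        v1.append(perimeter(mask) / mask.size)     # V1: boundary length per pixel
        v2.append(euler_number(mask) / mask.size)  # V2: Euler characteristic (b0 - b1)
    return np.array(v0), np.array(v1), np.array(v2)
```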
The statistics of shear peaks have been shown to provide valuable cosmological information beyond the power spectrum, and will be an important constraint on cosmological models given the large survey areas provided by forthcoming astronomical surveys. Surveys include masked areas due to bright stars, bad pixels, etc., which must be accounted for when producing cosmological constraints from shear maps. We advocate a forward-modeling approach, where the impact of masking (and other survey artifacts) is accounted for in the theoretical prediction of cosmological parameters, rather than removed from the survey data. We use masks based on the Deep Lens Survey and explore the impact of up to 37% of the survey area being masked on LSST- and DES-scale surveys. By reconstructing maps of aperture mass, the masking effect is smoothed out, resulting in up to 14% smaller statistical uncertainties compared to simply reducing the survey area by the masked fraction. We show that, even in the presence of large survey masks, the bias in cosmological parameter estimation produced by the forward-modeling process is ~1%, dominated by the bias caused by the limited simulation volume. We also explore how this potential bias scales with survey area and find that small survey areas are more significantly affected by differences in cosmological structure between the data and the simulated volumes, due to cosmic variance.
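A minimal sketch of the aperture-mass reconstruction step follows: masked pixels are simply set to zero and the compensated filter smooths over the holes, which is the essence of the forward-modeling idea above. The filter form follows Schirmer et al. (2007), but the parameter values, truncation radius, and (omitted) normalization are illustrative assumptions, not those of the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def aperture_mass_map(g1, g2, theta_ap=10.0):
    """Aperture-mass map from a pixelized shear field (g1, g2), with
    masked pixels set to zero beforehand. Overall normalization omitted."""
    half = int(1.5 * theta_ap)                  # truncation radius in pixels
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r = np.hypot(x, y)
    xs = np.where(r > 0, r / theta_ap, 1.0)     # avoid 0/0 at the center pixel
    # NFW-inspired Q filter of Schirmer et al. (2007); x_c = 0.15 assumed
    q = np.tanh(xs / 0.15) / (xs / 0.15)
    q /= 1.0 + np.exp(6 - 150 * xs) + np.exp(-47 + 50 * xs)
    q[r == 0] = 0.0
    # e^{-2i phi} projects the shear onto its tangential component
    kernel = q * np.exp(-2j * np.arctan2(y, x))
    # M_ap(x0) = -Re[ sum_th Q(|th|) gamma(x0 + th) e^{-2i phi(th)} ]
    return -np.real(fftconvolve(g1 + 1j * g2, kernel, mode="same"))
```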
We present a new shear calibration method based on machine learning. The method estimates the individual shear responses of objects from a combination of several properties measured on the images, using supervised learning. The training uses the true individual shear responses obtained from copies of the image simulations with different shear values. On simulated GREAT3 data, we obtain a residual bias after calibration that is compatible with zero and beyond Euclid requirements for signal-to-noise ratios > 20, within ~15 CPU hours of training using only ~10^5 objects. This machine-learning approach is efficient and can use a smaller dataset because it avoids the contribution from shape noise. The low dimensionality of the input data also leads to simple neural network architectures. We compare it to the recently described Metacalibration method, which shows similar performance. Because the two approaches rely on different techniques and are subject to different systematics, they are highly complementary. Our method can therefore be applied without much effort to any survey, such as Euclid or the Vera C. Rubin Observatory, with fewer than a million images to simulate in order to learn the calibration function.
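A schematic of the supervised-learning setup might look as follows; the features, network size, and toy response model are assumptions for illustration, not the paper's actual configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_obj = 100_000

# Stand-ins for measured per-object properties (e.g. S/N, size, ellipticity);
# real inputs would come from the image measurements.
X = rng.normal(size=(n_obj, 5))

# "True" individual shear responses: in the paper these come from copies of
# the same simulated image rendered with different shears, roughly
# R_i ~ d e_meas / d g estimated by finite differences. Here, a toy model:
R = 1.0 + 0.1 * X[:, 0] + rng.normal(scale=0.01, size=n_obj)

# Low input dimensionality allows a small network, as noted above.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=200, random_state=0)
model.fit(X, R)

# Calibration: divide each measured ellipticity by its predicted response.
R_hat = model.predict(X)
```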
We establish for the first time heuristic correlations between harmonic-space phase information and higher-order statistics. Using full-sky spherical maps of the cosmic microwave background as an example, we demonstrate that known phase correlations at large spatial scales can gradually be diminished by subtracting a suitable best-fit (Bianchi) template map of given strength. The weaker phase correlations lead in turn to a vanishing signature of anisotropy when measuring Minkowski functionals and scaling indices in real space and comparing them with surrogate maps that are free of phase correlations. These investigations can open a new road to a better understanding of signatures of non-Gaussianity in complex spatial structures by elucidating the meaning of Fourier phase correlations and their influence on higher-order statistics.
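The notion of a surrogate map free of phase correlations can be made concrete with the following flat-sky toy: keep the Fourier amplitudes (and hence the power spectrum) fixed while randomizing the phases. The paper works with full-sky a_lm coefficients instead, so this planar version is only an illustration under that simplifying assumption.

```python
import numpy as np

def phase_surrogate(field, seed=None):
    """Surrogate map with the same Fourier amplitudes (hence the same power
    spectrum) but uniformly randomized phases, destroying phase correlations."""
    rng = np.random.default_rng(seed)
    f = np.fft.rfft2(field)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=f.shape)
    phases[0, 0] = 0.0                       # keep the map mean unchanged
    # irfft2 enforces a real-valued map from the randomized half-spectrum
    return np.fft.irfft2(np.abs(f) * np.exp(1j * phases), s=field.shape)
```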
A higher-order topological insulator is a new concept in topological states of matter, characterized by emergent boundary states whose dimensionality is lower than that of the bulk by two or more, and it has drawn considerable interest. Yet its robustness against disorder is still unclear. Here we investigate the phase diagram of higher-order topological insulator phases in a breathing kagome model in the presence of disorder, using a state-of-the-art machine learning technique. We find that the corner states survive up to a finite strength of the disorder potential as long as the energy gap is not closed, indicating the stability of the higher-order topological phases against disorder.
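For concreteness, a toy version of the breathing kagome model with on-site disorder can be set up as below and diagonalized to look for in-gap corner modes; the flake geometry, hopping values, and disorder strength are illustrative assumptions, and the machine-learning classification of the resulting phases is not shown.

```python
import numpy as np

# Breathing kagome flake: intra-cell (up-triangle) hopping t1, inter-cell
# (down-triangle) hopping t2; corner modes appear for |t1| < |t2|.
t1, t2, W, L = 0.2, 1.0, 0.5, 10            # hoppings, disorder strength, size
a1, a2 = np.array([1.0, 0.0]), np.array([0.5, np.sqrt(3) / 2])
basis = [np.zeros(2), a1 / 2, a2 / 2]       # A, B, C sublattice offsets

sites, index = [], {}
for n1 in range(L):
    for n2 in range(L - n1):                # triangular flake of unit cells
        for s in range(3):
            index[(n1, n2, s)] = len(sites)
            sites.append(n1 * a1 + n2 * a2 + basis[s])

N = len(sites)
H = np.zeros((N, N))

def hop(i, j, t):
    if i in index and j in index:           # skip bonds leaving the flake
        H[index[i], index[j]] = H[index[j], index[i]] = -t

for (n1, n2, s) in list(index):
    if s == 0:                              # list each cell's bonds once
        hop((n1, n2, 0), (n1, n2, 1), t1)   # up-triangle (intra-cell)
        hop((n1, n2, 1), (n1, n2, 2), t1)
        hop((n1, n2, 2), (n1, n2, 0), t1)
        hop((n1, n2, 1), (n1 + 1, n2, 0), t2)      # down-triangle (inter-cell)
        hop((n1, n2, 2), (n1, n2 + 1, 0), t2)
        hop((n1, n2, 1), (n1 + 1, n2 - 1, 2), t2)

rng = np.random.default_rng(1)
H += np.diag(rng.uniform(-W / 2, W / 2, N))  # on-site disorder potential
energies, states = np.linalg.eigh(H)
# Corner modes appear as near-degenerate in-gap states localized at the three
# flake corners; one can track them as W grows until the bulk gap closes.
```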
We present a modern machine learning approach for cluster dynamical mass measurements that is a factor of two improvement over using a conventional scaling relation. Different methods are tested against a mock cluster catalog constructed using halos with mass >= 10^14 Msolar/h from MultiDark's publicly available N-body MDPL halo catalog. In the conventional method, we use a standard M(sigma_v) power-law scaling relation to infer cluster mass, M, from the line-of-sight (LOS) galaxy velocity dispersion, sigma_v. The resulting fractional mass error distribution is broad, with width = 0.87 (68% scatter), and has extended high-error tails. The standard scaling relation can be simply enhanced by including higher-order moments of the LOS velocity distribution: applying the kurtosis as a correction term to log(sigma_v) reduces the width of the error distribution to 0.74 (a 16% improvement). Machine learning can be used to take full advantage of all the information in the velocity distribution. We employ the Support Distribution Machines (SDMs) algorithm, which learns from distributions of data to predict single values. SDMs trained and tested on the distributions of LOS velocities yield width = 0.46 (a 47% improvement). Furthermore, the problematic tails of the mass error distribution are effectively eliminated. Decreasing cluster mass errors will improve measurements of the growth of structure and lead to tighter constraints on cosmological parameters.
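The kurtosis-enhanced scaling relation mentioned above can be written schematically as log M = a + b log10(sigma_v) + c kappa, with kappa the kurtosis of the LOS velocity distribution; the sketch below fits its coefficients by least squares. The exact parameterization in the paper may differ, so this is an assumption-labeled illustration rather than the authors' fit.

```python
import numpy as np
from scipy.stats import kurtosis

def fit_scaling(vel_lists, log_masses):
    """Fit log M = a + b*log10(sigma_v) + c*kappa over a training catalog.

    vel_lists   : list of 1D arrays of LOS galaxy velocities, one per cluster
    log_masses  : array of true log10 halo masses (from the mock catalog)
    """
    sigma = np.array([np.std(v) for v in vel_lists])      # velocity dispersion
    kurt = np.array([kurtosis(v) for v in vel_lists])     # 4th-moment correction
    X = np.column_stack([np.ones_like(sigma), np.log10(sigma), kurt])
    coeffs, *_ = np.linalg.lstsq(X, log_masses, rcond=None)
    return coeffs                                          # (a, b, c)
```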