We present a comparison between two optical cluster finding methods: a matched filter algorithm that uses galaxy angular coordinates and magnitudes, and a percolation algorithm that also uses redshift information. We test the algorithms on two mock catalogues. The first mock catalogue is built by adding clusters to a Poissonian background, while the other is derived from N-body simulations. Choosing the most physically sensible parameters for each method, we carry out a detailed comparison and investigate the advantages and limitations of each algorithm, showing the possible biases in the final results. We show that, by combining the two methods, we are able to detect a large fraction of the structures, which points out the need to search for clusters in different ways in order to build complete and unbiased cluster samples for statistical and cosmological studies. In addition, our results show the importance of testing cluster finding algorithms on different kinds of mock catalogues in order to fully assess their behaviour.
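As an illustration of the percolation approach described above, the following sketch links galaxies into groups with a friends-of-friends criterion on angular position and redshift. It is a simplified stand-in, not the algorithm of the paper: the linking lengths, the flat-sky treatment, and the use of a single combined linking length on rescaled coordinates are all illustrative assumptions.

```python
# Minimal friends-of-friends (percolation) sketch on RA, Dec (degrees) and redshift.
import numpy as np
from scipy.spatial import cKDTree

def percolate(ra, dec, z, link_deg=0.05, link_z=0.01):
    """Label galaxies by friends-of-friends group, linking pairs whose
    combined (rescaled) separation is below the angular linking length."""
    # Rescale redshift so one Euclidean linking length acts on all three axes
    # (a simplification of separate angular and redshift cuts; flat sky assumed).
    pts = np.column_stack([ra, dec, z * (link_deg / link_z)])
    tree = cKDTree(pts)

    # Union-find: merge every linked pair into the same group (percolation).
    parent = np.arange(len(ra))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in tree.query_pairs(r=link_deg):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[rj] = ri
    return np.array([find(i) for i in range(len(ra))])
```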
We present an optically selected galaxy cluster catalog from ~2,700 square degrees of the Digitized Second Palomar Observatory Sky Survey (DPOSS), spanning the redshift range 0.1 < z < 0.5 and providing an intermediate-redshift supplement to the previous DPOSS cluster survey. This new catalog contains 9,956 cluster candidates and is the largest resource of rich clusters in this redshift range to date. The candidates are detected using the best DPOSS plates based on seeing and limiting magnitude. The search is further restricted to high galactic latitude (|b| > 50 deg), where stellar contamination is modest and nearly uniform. We also present a performance comparison of two different detection methods applied to these data, the Adaptive Kernel and Voronoi Tessellation techniques. In the regime where both catalogs are expected to be complete, we find excellent agreement between them, as well as with the most recent surveys in the literature. Extensive simulations are performed and applied to the two methods, indicating a contamination rate of ~5%. These simulations are also used to optimize the algorithms and evaluate the selection function for the final cluster catalog. Redshift and richness estimates are also provided, making it possible to select subsamples for future studies.
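As an illustration of the Voronoi Tessellation technique mentioned above, the sketch below estimates a local surface density for each galaxy as the inverse area of its Voronoi cell and flags overdense galaxies; the density threshold and the flat-sky coordinates are illustrative assumptions, not the parameters of the survey pipeline.

```python
# Minimal Voronoi Tessellation density sketch: density = 1 / cell area.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def voronoi_density(x, y, threshold=3.0):
    vor = Voronoi(np.column_stack([x, y]))
    density = np.full(len(x), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) < 3:
            continue  # unbounded cell at the survey edge: no area estimate
        area = ConvexHull(vor.vertices[region]).volume  # 2D hull "volume" = area
        density[i] = 1.0 / area
    med = np.nanmedian(density)
    overdense = np.nan_to_num(density) > threshold * med  # flag high-density galaxies
    return density, overdense
```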
We present a novel technique to overcome the limitations of applying Principal Component Analysis to typical real-life data sets, especially astronomical spectra. Our new approach addresses the issues of outliers, missing information, the large number of dimensions, and the sheer volume of data by combining elements of robust statistics with recursive algorithms that improve the eigensystem estimates step by step. We develop a generic mechanism for deriving reliable eigenspectra without manual data censoring, while utilising all the information contained in the observations. We demonstrate the power of the methodology on the VIMOS VLT Deep Survey spectra, a collection that exhibits most of these challenges, and highlight the improvements over previous workarounds, as well as the scalability of our approach to collections the size of the Sloan Digital Sky Survey and beyond.
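The sketch below illustrates the general idea of an iterative PCA that imputes missing pixels and clips outliers while re-estimating the eigenspectra at each pass. It is a simplified sketch in this spirit, not the authors' algorithm; the number of components, iterations, and clipping threshold are arbitrary choices.

```python
# Minimal iterative PCA with missing-data imputation and outlier clipping.
# `spectra` is an (N, M) array of N spectra with NaNs marking missing pixels.
import numpy as np

def robust_pca(spectra, n_comp=5, n_iter=20, clip=5.0):
    X = np.array(spectra, dtype=float)
    missing = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[missing] = np.take(col_mean, np.where(missing)[1])  # initial fill

    for _ in range(n_iter):
        mean = X.mean(axis=0)
        U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
        basis = Vt[:n_comp]                      # current eigenspectra
        coeffs = (X - mean) @ basis.T
        recon = mean + coeffs @ basis

        # Refill missing pixels from the current reconstruction.
        X[missing] = recon[missing]

        # Clip strong outliers (e.g. sky residuals) toward the reconstruction.
        resid = X - recon
        sigma = np.std(resid)
        bad = np.abs(resid) > clip * sigma
        X[bad] = recon[bad] + np.sign(resid[bad]) * clip * sigma
    return basis, mean
```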
We evaluate the construction methodology of an all-sky catalogue of galaxy clusters detected through the Sunyaev-Zeldovich (SZ) effect. We perform an extensive comparison of twelve algorithms applied to the same detailed simulations of the millimeter and submillimeter sky, based on a Planck-like case. We present the results of this SZ Challenge in terms of catalogue completeness, purity, and astrometric and photometric reconstruction. Our results provide a comparison of a representative sample of SZ detection algorithms and highlight important issues in their application. For our test case, we show that the exact expected number of clusters remains uncertain (about a thousand cluster candidates at |b| > 20 deg with 90% purity) and that it depends on the SZ model, on the detailed sky simulations, and on the algorithmic implementation of the detection methods. We also estimate the astrometric precision of the cluster candidates, which is found to be of the order of ~2 arcmin on average, and the photometric uncertainty, which is of the order of ~30%, depending on flux.
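For concreteness, completeness and purity can be scored by positionally matching detected candidates against the input clusters of the simulation, as in the sketch below; the 2-arcmin match radius and the flat-sky distance are illustrative assumptions.

```python
# Minimal sketch of completeness/purity scoring by positional matching.
import numpy as np
from scipy.spatial import cKDTree

def completeness_purity(true_ra, true_dec, det_ra, det_dec, radius_arcmin=2.0):
    r = radius_arcmin / 60.0  # match radius in degrees (flat-sky approximation)
    true_pts = np.column_stack([true_ra, true_dec])
    det_pts = np.column_stack([det_ra, det_dec])

    # Purity: fraction of detections with a true cluster within the radius.
    d, _ = cKDTree(true_pts).query(det_pts, distance_upper_bound=r)
    purity = np.isfinite(d).mean()

    # Completeness: fraction of true clusters with a detection within the radius.
    d2, _ = cKDTree(det_pts).query(true_pts, distance_upper_bound=r)
    completeness = np.isfinite(d2).mean()
    return completeness, purity
```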
We present a galaxy catalog simulator which turns N-body simulations with subhalos into multiband photometric mocks. The simulator assigns galaxy properties to each subhalo to reproduce the observed cluster galaxy halo occupation distribution, the radial- and mass-dependent variation in the fraction of blue galaxies, the luminosity functions in clusters and in the field, and the red sequence in clusters. Moreover, the evolution of these parameters is tuned to match existing observational constraints. Field galaxies are sampled from existing multiband photometric surveys using derived galaxy photometric redshifts. Parametrizing an ensemble of cluster galaxy properties enables us to create mock catalogs with variations in those properties, which in turn allows us to quantify the sensitivity of cluster finding to current observational uncertainties in these properties. We present an application of the catalog simulator to characterize the selection function, in terms of completeness and contamination, of a galaxy cluster finder that utilizes the clustering of cluster red-sequence galaxies on the sky. Using five different sets of modified catalogs, we estimate the systematic uncertainties in the determination of the selection function that arise from the observational uncertainties on our simulator parameters. Our estimates indicate that these uncertainties are at the $\le 15\%$ level given current observational constraints on cluster galaxy populations and their evolution. In addition, we examine the $B_{gc}$ parameter as an optical mass indicator and measure the intrinsic scatter of the $B_{gc}$--mass relation to be approximately log-normal with $\sigma_{\log_{10}M}\sim0.25$. Finally, we present tests of a red-sequence overdensity redshift estimator using both simulated and real data, showing that it delivers redshifts for massive clusters with $\sim$2% accuracy out to redshift $z\sim0.5$ with SDSS-like datasets.
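As an illustration of the intrinsic-scatter measurement described above, the sketch below fits a mean power-law $B_{gc}$--mass relation in log-log space and takes the standard deviation of the residuals as $\sigma_{\log_{10}M}$; the variable names and the assumed power-law form are illustrative, not the procedure of the paper.

```python
# Minimal sketch: scatter in log10(mass) at fixed Bgc about a power-law relation.
import numpy as np

def lognormal_scatter(bgc, mass):
    x = np.log10(bgc)
    y = np.log10(mass)
    slope, intercept = np.polyfit(x, y, 1)     # mean log-log relation
    resid = y - (slope * x + intercept)
    return slope, intercept, np.std(resid)     # sigma_log10(M) at fixed Bgc
```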
Exploration of new superconductors still relies on the experience and intuition of experts and is largely a process of experimental trial and error. In one study, only 3% of the candidate materials showed superconductivity. Here, we report the first deep learning model for finding new superconductors. We introduce a method named reading the periodic table, which represents a material's composition on the periodic table in a way that allows a deep learning model to learn to read the periodic table and to learn the rules governing the elements, for the purpose of discovering novel superconductors that lie outside the training data. It is generally recognized that it is difficult for deep learning to predict anything outside the training data. Although we use only the chemical composition of materials as input, we obtain an $R^{2}$ value of 0.92 for predicting $T_\text{c}$ for materials in a database of superconductors. We also introduce a method named garbage-in to create synthetic data for non-superconductors, for which no data exist, since non-superconductors are not reported; yet such data are required for deep learning to distinguish between superconductors and non-superconductors. We obtain three remarkable results. The deep learning model can predict superconductivity for a material with a precision of 62%, which shows the usefulness of the model; it found the recently discovered superconductor CaBi2 and another one, Hf0.5Nb0.2V2Zr0.3, neither of which is in the superconductor database; and, using training data from before 2008, it found the Fe-based high-temperature superconductors discovered in 2008. These results open the way to the discovery of new families of high-temperature superconductors. The candidate materials list, data, and method are openly available at https://github.com/tomo835g/Deep-Learning-to-find-Superconductors.
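To make the reading-periodic-table idea concrete, the sketch below paints a chemical composition onto a 2D grid shaped like the periodic table and feeds it to a small convolutional network that outputs a superconductor/non-superconductor score. The grid size, the partial element-position lookup, and the architecture are illustrative assumptions and do not reproduce the network of the paper (whose code is available at the repository linked above).

```python
# Minimal sketch: composition -> periodic-table-shaped image -> small CNN score.
import torch
import torch.nn as nn

N_ROWS, N_COLS = 9, 18          # periodic-table-like grid (periods x groups)

# Hypothetical, partial lookup: element symbol -> (row, column) on the grid.
ELEMENT_POS = {"H": (0, 0), "Ca": (3, 1), "Fe": (3, 7), "Bi": (5, 14)}

def composition_to_image(composition):
    """composition: dict like {'Ca': 1, 'Bi': 2} -> (1, N_ROWS, N_COLS) tensor."""
    img = torch.zeros(1, N_ROWS, N_COLS)
    total = sum(composition.values())
    for elem, amount in composition.items():
        r, c = ELEMENT_POS[elem]
        img[0, r, c] = amount / total   # atomic fraction at the element's position
    return img

class PeriodicTableCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * N_ROWS * N_COLS, 64), nn.ReLU(),
            nn.Linear(64, 1),   # logit: superconductor vs non-superconductor
        )

    def forward(self, x):
        return self.net(x)

# Example: score the CaBi2 composition with an (untrained) network.
model = PeriodicTableCNN()
score = torch.sigmoid(model(composition_to_image({"Ca": 1, "Bi": 2}).unsqueeze(0)))
```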