
Cosmic String Detection with Tree-Based Machine Learning

Posted by Sadegh Movahed
Publication date: 2018
Research field: Physics
Paper language: English





We explore the use of random forest and gradient boosting, two powerful tree-based machine learning algorithms, for the detection of cosmic strings in maps of the cosmic microwave background (CMB), through their unique Gott-Kaiser-Stebbins effect on the temperature anisotropies. The information in the maps is compressed into feature vectors before being passed to the learning units. The feature vectors contain various statistical measures of processed CMB maps that boost the cosmic string detectability. Our proposed classifiers, after training, give results improved over or similar to the claimed detectability levels of the existing methods for the string tension, $G\mu$. They can make a $3\sigma$ detection of strings with $G\mu \gtrsim 2.1\times 10^{-10}$ for noise-free, $0.9'$-resolution CMB observations. The minimum detectable tension increases to $G\mu \gtrsim 3.0\times 10^{-8}$ for a more realistic, CMB S4-like (II) strategy, still a significant improvement over the previous results.
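To make the pipeline concrete, here is a minimal, hedged sketch of the same idea in scikit-learn: compress each map into a statistical feature vector, then train random forest and gradient boosting classifiers. The extract_features and make_patch helpers, the chosen statistics, and the step-edge string proxy are illustrative assumptions, not the authors' actual processing chain.

```python
# Hedged sketch of a tree-based string-detection pipeline on toy data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def extract_features(cmb_map):
    """Compress a map into a small statistical feature vector
    (moments of the map and of its gradient magnitude), standing in
    for the processed-map statistics used in the paper."""
    gy, gx = np.gradient(cmb_map)
    g = np.hypot(gx, gy)
    return np.array([cmb_map.std(), cmb_map.max() - cmb_map.min(),
                     g.mean(), g.std(), g.max()])

def make_patch(has_string, n=64, amplitude=0.3):
    """Toy patch: Gaussian noise, optionally with a step-like
    discontinuity mimicking the Gott-Kaiser-Stebbins edge."""
    patch = rng.normal(size=(n, n))
    if has_string:
        patch[:, n // 2:] += amplitude
    return patch

labels = rng.integers(0, 2, size=2000)
X = np.array([extract_features(make_patch(bool(h))) for h in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)

for clf in (RandomForestClassifier(n_estimators=200, random_state=0),
            GradientBoostingClassifier(random_state=0)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, "accuracy:", clf.score(X_te, y_te))
```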




Read also

Upcoming 21cm surveys will map the spatial distribution of cosmic neutral hydrogen (HI) over unprecedented volumes. Mock catalogues are needed to fully exploit the potential of these surveys. Standard techniques employed to create these mock catalogues, like the Halo Occupation Distribution (HOD), rely on assumptions such as the baryonic properties of dark matter halos depending only on their masses. In this work, we use the state-of-the-art magneto-hydrodynamic simulation IllustrisTNG to show that the HI content of halos exhibits a strong dependence on their local environment. We then use machine learning techniques to show that this effect can be 1) modeled by these algorithms and 2) parametrized in the form of novel analytic equations. We provide physical explanations for this environmental effect and show that ignoring it leads to underprediction of the real-space 21cm power spectrum at $k \gtrsim 0.05$ h/Mpc by $\gtrsim 10\%$, which is larger than the expected precision of upcoming surveys on such large scales. Our methodology of combining numerical simulations with machine learning techniques is general, and opens a new direction in modeling and parametrizing the complex physics of assembly bias needed to generate accurate mocks for galaxy and line intensity mapping surveys.
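As a hedged illustration of the modeling claim above, the sketch below shows how a tree-based regressor can capture an environmental dependence of halo HI content that a mass-only (HOD-like) model misses. The toy functional form for the HI mass and the density proxy are assumptions, not IllustrisTNG output.

```python
# Toy demonstration: environment improves HI-mass predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
log_mass = rng.uniform(10, 14, n)        # log10 halo mass [Msun/h]
log_density = rng.normal(0, 1, n)        # local overdensity proxy
# Assumed truth: HI mass depends on halo mass AND environment.
log_hi = 0.8 * log_mass - 0.3 * log_density + rng.normal(0, 0.1, n)

features = np.column_stack([log_mass, log_density])
for cols, name in (([0], "mass only"), ([0, 1], "mass + environment")):
    X_tr, X_te, y_tr, y_te = train_test_split(features[:, cols], log_hi,
                                              random_state=1)
    reg = RandomForestRegressor(n_estimators=200, random_state=1)
    reg.fit(X_tr, y_tr)
    print(name, "R^2 =", round(reg.score(X_te, y_te), 3))
```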
The NINJA data analysis challenge allowed the study of the sensitivity of data analysis pipelines to binary black hole numerical relativity waveforms in simulated Gaussian noise at the design level of the LIGO and Virgo observatories. We analyzed NINJA data with a pipeline based on the Hilbert-Huang Transform, comprising a detection stage and a characterization stage: detection is performed by triggering on excess instantaneous power, and characterization is performed by displaying the kernel-density-enhanced (KD) time-frequency trace of the signal. Using the simulated data based on the two LIGO detectors, we were able to detect 77 signals out of 126 above SNR 5 in coincidence, with 43 missed events characterized by a signal-to-noise ratio (SNR) less than 10. Characterization of the detected signals revealed the merger part of the waveform in high time and frequency resolution, free from time-frequency uncertainty. We estimated the time lag of the signals between the detectors based on the optimal overlap of the individual KD time-frequency maps, yielding estimates accurate to within a fraction of a millisecond for half of the events. A coherent addition of the data sets according to the estimated time lag was eventually used in a characterization of the event.
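The detection stage described above (triggering on excess instantaneous power) can be illustrated with the short sketch below. It is a simplified stand-in: the real pipeline first applies empirical mode decomposition (the Hilbert-Huang step), whereas here the Hilbert transform is applied directly to a single toy data stream, and the trigger level is an assumed choice.

```python
# Simplified instantaneous-power trigger on a toy chirp in noise.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(2)
fs = 4096                                   # sample rate [Hz]
t = np.arange(0, 1.0, 1 / fs)
noise = rng.normal(0, 1, t.size)
chirp = 3.0 * np.sin(2 * np.pi * (50 + 100 * t) * t) * (t > 0.6)
data = noise + chirp                        # toy late-time "inspiral"

inst_power = np.abs(hilbert(data)) ** 2     # analytic-signal power
threshold = np.median(inst_power) * 10      # assumed trigger level
triggers = np.flatnonzero(inst_power > threshold)
if triggers.size:
    print(f"first trigger at t = {t[triggers[0]]:.3f} s")
else:
    print("no trigger above threshold")
```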
We introduce a new machine learning based technique to detect exoplanets using the transit method. Machine learning and deep learning techniques have proven to be broadly applicable in various scientific research areas. We aim to exploit some of these methods to improve the conventional algorithm-based approaches presently used in astrophysics to detect exoplanets. Using the time-series analysis library TSFresh to analyse light curves, we extracted 789 features from each curve, which capture the information about the characteristics of a light curve. We then used these features to train a gradient boosting classifier using the machine learning tool lightgbm. This approach was tested on simulated data, which showed that it is more effective than the conventional box least squares (BLS) fitting method. We further found that our method produced comparable results to existing state-of-the-art deep learning models, while being much more computationally efficient and without needing folded and secondary views of the light curves. For Kepler data, the method is able to predict a planet with an AUC of 0.948, so that 94.8 per cent of the true planet signals are ranked higher than non-planet signals. The resulting recall is 0.96, so that 96 per cent of real planets are classified as planets. For the Transiting Exoplanet Survey Satellite (TESS) data, we found our method can classify light curves with an accuracy of 0.98, and is able to identify planets with a recall of 0.82 at a precision of 0.63.
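A hedged sketch of this feature-based approach: tsfresh compresses each light curve into a wide feature vector and a lightgbm gradient boosting classifier is trained on the result. The toy transit signals, curve counts, and hyperparameters below are illustrative assumptions, not the paper's Kepler/TESS setup.

```python
# Toy tsfresh + lightgbm transit classifier.
import numpy as np
import pandas as pd
import lightgbm as lgb
from tsfresh import extract_features
from tsfresh.utilities.dataframe_functions import impute

rng = np.random.default_rng(3)

def light_curve(i, has_transit):
    """One toy light curve in tsfresh long format."""
    flux = 1.0 + rng.normal(0, 0.001, 500)
    if has_transit:
        flux[100::150] -= 0.01              # crude periodic transit dips
    return pd.DataFrame({"id": i, "time": np.arange(500), "flux": flux})

labels = rng.integers(0, 2, 60)
long_df = pd.concat(light_curve(i, bool(y)) for i, y in enumerate(labels))

# tsfresh turns each curve into a feature vector (the paper quotes 789
# features per curve); impute() cleans NaN/inf entries afterwards.
X = extract_features(long_df, column_id="id", column_sort="time", n_jobs=0)
X_clean = impute(X).to_numpy()              # plain array for lightgbm
clf = lgb.LGBMClassifier(n_estimators=100).fit(X_clean, labels)
print("training accuracy:", clf.score(X_clean, labels))
```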
Future surveys focusing on understanding the nature of dark energy (e.g., Euclid and WFIRST) will cover large fractions of the extragalactic sky in near-IR slitless spectroscopy. These surveys will detect a large number of galaxies that will have only one emission line in the covered spectral range. In order to maximize the scientific return of these missions, it is imperative that single emission lines are correctly identified. Using a supervised machine-learning approach, we classified a sample of single emission lines extracted from the WFC3 IR Spectroscopic Parallel survey (WISP), one of the closest existing analogs to future slitless surveys. Our automatic software integrates an SED fitting strategy with additional independent sources of information. We calibrated it and tested it on a gold sample of securely identified objects with multiple detected lines. The algorithm correctly classifies real emission lines with an accuracy of 82.6%, whereas the accuracy of the SED fitting technique alone is low (~50%) due to the limited amount of photometric data available ($\leq 6$ bands). While not specifically designed for the Euclid and WFIRST surveys, the algorithm represents an important precursor of similar algorithms to be used in these future missions.
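The classification idea, combining an SED-fitting output with independent measurements of the detected line, might be sketched as below; the two-line (Halpha vs. [OIII]) setup, feature choices, and noise levels are hypothetical stand-ins for the WISP pipeline, not its actual design.

```python
# Toy single-line classifier: SED-fit redshift + line measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
n = 1000
labels = rng.integers(0, 2, n)               # 0 = Halpha, 1 = [OIII]
rest = np.where(labels == 0, 6563.0, 5007.0) # rest wavelength [Angstrom]
true_z = rng.uniform(0.5, 1.5, n)
lam_obs = rest * (1 + true_z)                # observed line wavelength
z_phot = true_z + rng.normal(0, 0.15, n)     # noisy SED-fitting redshift
ew = rng.lognormal(3.0, 0.5, n)              # equivalent width (nuisance)

# The classifier learns which rest wavelength is consistent with the
# observed wavelength given the (independent) photometric redshift.
X = np.column_stack([lam_obs, z_phot, ew])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean().round(3))
```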
The efficient classification of different types of supernovae is one of the most important problems in observational cosmology. However, spectroscopic confirmation of most objects in upcoming photometric surveys, such as the Rubin Observatory Legacy Survey of Space and Time (LSST), will be unfeasible. The development of automated classification processes based on photometry has thus become crucial. In this paper we investigate the impact of machine learning (ML) classification on the final cosmological constraints, using simulated lightcurves from the Supernova Photometric Classification Challenge, released in 2010. We study the use of different feature sets for the lightcurves and many different ML pipelines based on either decision tree ensembles or automated search processes. To construct the final catalogs we propose a threshold selection method that employs a bias-variance tradeoff. This is a very robust and efficient way to minimize the mean squared error. With this method we were able to obtain very strong cosmological constraints, which allowed us to keep $\sim 75\%$ of the total information in the type Ia SNe when using the SALT2 feature set and $\sim 33\%$ for the other cases (based on either the Newling model or on standard wavelet decomposition).
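The threshold selection idea can be illustrated with a toy sweep over the classifier's probability cut, picking the value that minimizes an assumed mean-squared-error combining contamination bias with shot-noise variance of the surviving SN Ia sample; the MSE model and score distributions below are illustrative, not the paper's.

```python
# Toy bias-variance threshold selection for a photometric SN sample.
import numpy as np

rng = np.random.default_rng(5)
n = 10000
is_ia = rng.random(n) < 0.5
# Assumed classifier scores: Ia peaked near 1, non-Ia near 0.
score = np.clip(rng.normal(np.where(is_ia, 0.8, 0.2), 0.15), 0, 1)

best = None
for thr in np.linspace(0.05, 0.95, 91):
    sel = score > thr
    if sel.sum() == 0:
        continue
    contamination = np.mean(~is_ia[sel])     # fraction of non-Ia kept
    bias2 = contamination ** 2               # assumed contamination bias
    variance = 1.0 / sel.sum()               # shot-noise-like variance
    mse = bias2 + variance
    if best is None or mse < best[0]:
        best = (mse, thr)
print(f"chosen threshold = {best[1]:.2f} (toy MSE = {best[0]:.2e})")
```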
