
We review some aspects of the current state of data-intensive astronomy, its methods, and some outstanding data analysis challenges. Astronomy is at the forefront of big data science, with exponentially growing data volumes and data rates, and an ever-increasing complexity, now entering the Petascale regime. Telescopes and observatories from both ground and space, covering the full range of wavelengths, feed the data via processing pipelines into dedicated archives, where they can be accessed for scientific analysis. Most of the large archives are connected through the Virtual Observatory framework, which provides interoperability standards and services, and effectively constitutes a global data grid for astronomy. Making discoveries in this overabundance of data requires the application of novel machine learning tools. We describe some recent examples of such applications.
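As an illustration of the interoperability standards mentioned above, the sketch below queries a Virtual Observatory archive through the Table Access Protocol (TAP) using the pyvo package. The endpoint URL, table name, and column names are placeholders chosen for the example, not a specific archive referenced in the text.

```python
import pyvo as vo

# Minimal sketch of a VO query via TAP (one of the interoperability standards
# that links the large archives). The service URL and table/column names are
# illustrative placeholders.
service = vo.dal.TAPService("https://example-archive.org/tap")

# ADQL selection of sources within 0.1 deg of RA=150.0 deg, Dec=2.2 deg.
result = service.search(
    "SELECT TOP 100 ra, dec, mag "
    "FROM example_catalog "
    "WHERE 1 = CONTAINS(POINT('ICRS', ra, dec), CIRCLE('ICRS', 150.0, 2.2, 0.1))"
)

table = result.to_table()  # astropy Table, ready for further analysis
print(table[:5])
```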
We forecast dark energy constraints that could be obtained from a new large sample of Type Ia supernovae, where those at high redshift are acquired with the Euclid space mission. We simulate a three-prong SN survey: a z<0.35 nearby sample (8000 SNe), a 0.2<z<0.95 intermediate sample (8800 SNe), and a 0.75<z<1.55 high-z sample (1700 SNe). The nearby and intermediate surveys are assumed to be conducted from the ground, while the high-z survey is a joint ground- and space-based survey. This latter survey, the Dark Energy Supernova Infra-Red Experiment (DESIRE), is designed to fit within 6 months of Euclid observing time, with a dedicated observing program. We simulate the SN events as they would be observed in rolling-search mode by the various instruments, and derive the quality of the expected cosmological constraints. We account for known systematic uncertainties, in particular calibration uncertainties, including their contribution through the training of the supernova model used to fit the supernova light curves. Using conservative assumptions and a 1-D geometric Planck prior, we find that the ensemble of surveys would yield competitive constraints: a constant equation-of-state parameter can be constrained to sigma(w)=0.022, and a Dark Energy Task Force figure of merit of 203 is found for a two-parameter equation of state. Our simulations thus indicate that Euclid can make a significant contribution to a purely geometrical cosmological constraint by extending a high-quality SN Hubble diagram to z~1.5. We also present other science topics enabled by the DESIRE Euclid observations.
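For readers unfamiliar with the Dark Energy Task Force figure of merit quoted above, the sketch below shows the standard way such a number is obtained from a Fisher-matrix forecast: invert the Fisher matrix to get the (w0, wa) covariance and take the inverse square root of its determinant. The Fisher-matrix values are made up for illustration and do not reproduce the DESIRE forecast.

```python
import numpy as np

# Illustrative Fisher matrix for the (w0, wa) equation-of-state parameters,
# after marginalising over all other cosmological and nuisance parameters.
# These numbers are placeholders, not the forecast from the text.
fisher = np.array([[2100.0, 640.0],
                   [640.0, 210.0]])

cov = np.linalg.inv(fisher)                  # covariance of (w0, wa)
sigma_w0 = np.sqrt(cov[0, 0])                # marginalised uncertainty on w0
fom = 1.0 / np.sqrt(np.linalg.det(cov))      # DETF-style figure of merit

print(f"sigma(w0) ~ {sigma_w0:.3f}, figure of merit ~ {fom:.0f}")
```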
We present an application of self-adaptive supervised learning classifiers, derived from the Machine Learning paradigm, to the identification of candidate Globular Clusters in deep, wide-field, single-band HST images. Several methods provided by the DAME (Data Mining & Exploration) web application were tested and compared on the NGC1399 HST data described in Paolillo 2011. The best results were obtained using a Multi Layer Perceptron with a Quasi-Newton learning rule, which achieved a classification accuracy of 98.3%, with a completeness of 97.8% and a contamination of 1.6%. An extensive set of experiments revealed that the use of accurate structural parameters (effective radius, central surface brightness) does improve the final result, but only by 5%. It is also shown that the method is capable of retrieving even extreme sources (for instance, very extended objects) that are missed by more traditional approaches.
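A minimal sketch of the kind of classifier described above can be written with scikit-learn's MLPClassifier, whose 'lbfgs' solver is a quasi-Newton method, together with the usual definitions of completeness (recall on the GC class) and contamination (false positives among the predicted GCs). The synthetic features stand in for photometric and structural parameters; the DAME pipeline itself is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score

# Synthetic stand-in data: in practice X would hold magnitudes, colours, and
# structural parameters (e.g. effective radius, central surface brightness),
# and y would mark confirmed GCs (1) versus contaminants (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Multi Layer Perceptron trained with L-BFGS, a quasi-Newton learning rule.
clf = MLPClassifier(hidden_layer_sizes=(20,), solver="lbfgs",
                    max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

accuracy = accuracy_score(y_test, y_pred)
completeness = recall_score(y_test, y_pred)            # fraction of true GCs recovered
contamination = 1.0 - precision_score(y_test, y_pred)  # false positives among predicted GCs

print(f"accuracy={accuracy:.3f}, "
      f"completeness={completeness:.3f}, contamination={contamination:.3f}")
```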