Morphological Star-Galaxy Separation


Abstract

We discuss the statistical foundations of morphological star-galaxy separation. We show that many of the star-galaxy separation metrics in common use today (e.g., by SDSS or SExtractor) are closely related both to each other and to the model odds ratio derived in a Bayesian framework by Sebok (1979). While the scaling of these algorithms with the noise properties of the sources varies, these differences do not strongly differentiate their performance. We construct a model of the performance of a star-galaxy separator in a realistic survey to understand the impact of observational signal-to-noise ratio (or, equivalently, 5-sigma limiting depth) and seeing on classification performance. The model quantitatively demonstrates that, assuming realistic densities and angular sizes of stars and galaxies, 10% worse seeing can be compensated for by approximately 0.4 magnitudes deeper data to achieve the same star-galaxy classification performance. We discuss how to probabilistically combine multiple measurements, whether of the same type (e.g., subsequent exposures), of differing types (e.g., multiple bandpasses), or of differing methodologies (e.g., morphological and color-based classification). These methods are increasingly important for observations at faint magnitudes, where the rapidly rising number density of small galaxies makes star-galaxy classification a challenging problem. However, because of the significant role that the signal-to-noise ratio plays in resolving small galaxies, surveys with large-aperture telescopes, such as LSST, will continue to see improving star-galaxy separation as they push to these fainter magnitudes.
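The probabilistic combination of measurements mentioned above can be illustrated with a minimal sketch. The example below is not the paper's implementation: it assumes each measurement supplies a likelihood ratio P(data | star) / P(data | galaxy), that the measurements are independent given the true class, and that a prior star fraction is known; the function name and parameters are hypothetical.

```python
import numpy as np

def combine_classifications(likelihood_ratios, prior_star_fraction=0.5):
    """Combine independent star/galaxy measurements into one posterior.

    Each element of `likelihood_ratios` is P(data_i | star) / P(data_i | galaxy)
    for a single measurement (e.g., one exposure or one bandpass).  Assuming the
    measurements are independent given the true class, the posterior odds are
    the product of the individual likelihood ratios times the prior odds.
    """
    # Work in log space to avoid overflow/underflow when many measurements
    # are combined.
    log_lr = np.sum(np.log(np.asarray(likelihood_ratios, dtype=float)))
    prior_odds = prior_star_fraction / (1.0 - prior_star_fraction)
    log_posterior_odds = log_lr + np.log(prior_odds)

    # Convert posterior odds to a posterior probability of being a star.
    return 1.0 / (1.0 + np.exp(-log_posterior_odds))

# Example: three exposures, each mildly favouring "star", combined with a
# prior that only 20% of sources at this magnitude are stars.
print(combine_classifications([2.0, 1.5, 3.0], prior_star_fraction=0.2))
```

The same multiplication of odds applies whether the individual likelihood ratios come from repeated exposures, different bandpasses, or entirely different classifiers (e.g., morphological and color-based), as long as the conditional-independence assumption holds.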
