
Galaxy Zoo Supernovae

Posted by Arfon Smith
Publication date: 2010
Research field: Physics
Paper language: English





This paper presents the first results from a new citizen science project: Galaxy Zoo Supernovae. This proof of concept project uses members of the public to identify supernova candidates from the latest generation of wide-field imaging transient surveys. We describe the Galaxy Zoo Supernovae operations and scoring model, and demonstrate the effectiveness of this novel method using imaging data and transients from the Palomar Transient Factory (PTF). We examine the results collected over the period April-July 2010, during which nearly 14,000 supernova candidates from PTF were classified by more than 2,500 individuals within a few hours of data collection. We compare the transients selected by the citizen scientists to those identified by experienced PTF scanners, and find the agreement to be remarkable - Galaxy Zoo Supernovae performs comparably to the PTF scanners, and identified as transients 93% of the ~130 spectroscopically confirmed SNe that PTF located during the trial period (with no false positive identifications). Further analysis shows that only a small fraction of the lowest signal-to-noise SN detections (r > 19.5) are given low scores: Galaxy Zoo Supernovae correctly identifies all SNe with > 8σ detections in the PTF imaging data. The Galaxy Zoo Supernovae project has direct applicability to future transient searches such as the Large Synoptic Survey Telescope, by both rapidly identifying candidate transient events, and via the training and improvement of existing machine classifier algorithms.
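
To make the scoring idea concrete, here is a minimal sketch of how citizen-science votes could be aggregated into a per-candidate score and a promotion decision. The per-vote scores, the threshold, and the candidate names below are illustrative assumptions; they are not the actual Galaxy Zoo Supernovae decision tree or cut-offs.

from collections import defaultdict

def aggregate_scores(classifications, threshold=1.0):
    """classifications: iterable of (candidate_id, per_vote_score) pairs."""
    votes = defaultdict(list)
    for candidate_id, score in classifications:
        votes[candidate_id].append(score)
    # Mean score per candidate, plus a flag for whether it clears the cut.
    return {cid: (sum(s) / len(s), sum(s) / len(s) >= threshold)
            for cid, s in votes.items()}

# Hypothetical votes: each volunteer's decision-tree answers already mapped
# to a numeric score; the values and candidate IDs are placeholders.
demo = [("PTF10aaa", 3), ("PTF10aaa", 1), ("PTF10bbb", -1), ("PTF10bbb", 1)]
for cid, (mean, promoted) in aggregate_scores(demo).items():
    print(cid, round(mean, 2), "promote" if promoted else "reject")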




Read also

We provide a brief overview of the Galaxy Zoo and Zooniverse projects, including a short discussion of the history of, and motivation for, these projects as well as reviewing the science these innovative internet-based citizen science projects have produced so far. We briefly describe the method of applying en-masse human pattern recognition capabilities to complex data in data-intensive research. We also provide a discussion of the lessons learned from developing and running these community-based projects including thoughts on future applications of this methodology. This review is intended to give the reader a quick and simple introduction to the Zooniverse.
We consider the problem of determining the host galaxies of radio sources by cross-identification. This has traditionally been done manually, which will be intractable for wide-area radio surveys like the Evolutionary Map of the Universe (EMU). Automated cross-identification will be critical for these future surveys, and machine learning may provide the tools to develop such methods. We apply a standard approach from computer vision to cross-identification, introducing one possible way of automating this problem, and explore the pros and cons of this approach. We apply our method to the 1.4 GHz Australian Telescope Large Area Survey (ATLAS) observations of the Chandra Deep Field South (CDFS) and the ESO Large Area ISO Survey South 1 (ELAIS-S1) fields by cross-identifying them with the Spitzer Wide-area Infrared Extragalactic (SWIRE) survey. We train our method with two sets of data: expert cross-identifications of CDFS from the initial ATLAS data release and crowdsourced cross-identifications of CDFS from Radio Galaxy Zoo. We found that a simple strategy of cross-identifying a radio component with the nearest galaxy performs comparably to our more complex methods, though our estimated best-case performance is near 100 per cent. ATLAS contains 87 complex radio sources that have been cross-identified by experts, so there are not enough complex examples to learn how to cross-identify them accurately. Much larger datasets are therefore required for training methods like ours. We also show that training our method on Radio Galaxy Zoo cross-identifications gives comparable results to training on expert cross-identifications, demonstrating the value of crowdsourced training data.
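
As an illustration of the nearest-galaxy baseline mentioned above, the sketch below cross-matches radio component positions against a host-candidate catalogue using astropy's on-sky matching. The 30-arcsecond cut and the toy coordinates are placeholders, not the paper's actual selection.

import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

def nearest_host(radio_ra, radio_dec, host_ra, host_dec, max_sep_arcsec=30.0):
    """Match each radio component to its closest host candidate on the sky."""
    radio = SkyCoord(ra=radio_ra * u.deg, dec=radio_dec * u.deg)
    hosts = SkyCoord(ra=host_ra * u.deg, dec=host_dec * u.deg)
    idx, sep2d, _ = radio.match_to_catalog_sky(hosts)
    return idx, sep2d.arcsec, sep2d < max_sep_arcsec * u.arcsec

# Toy positions standing in for ATLAS radio components and SWIRE candidates.
idx, sep, ok = nearest_host(
    np.array([52.90, 53.10]), np.array([-27.80, -27.75]),
    np.array([52.901, 53.05, 53.20]), np.array([-27.801, -27.70, -27.74]),
)
print(idx, np.round(sep, 1), ok)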
We present the data release paper for the Galaxy Zoo: Hubble (GZH) project. This is the third phase in a large effort to measure reliable, detailed morphologies of galaxies by using crowdsourced visual classifications of colour composite images. Images in GZH were selected from various publicly-released Hubble Space Telescope Legacy programs conducted with the Advanced Camera for Surveys, with filters that probe the rest-frame optical emission from galaxies out to $z \sim 1$. The bulk of the sample is selected to have $m_{I814W} < 23.5$, but goes as faint as $m_{I814W} < 26.8$ for deep images combined over 5 epochs. The median redshift of the combined samples is $z = 0.9 \pm 0.6$, with a tail extending out to $z \sim 4$. The GZH morphological data include measurements of both bulge- and disk-dominated galaxies, details on spiral disk structure that relate to the Hubble type, bar identification, and numerous measurements of clump identification and geometry. This paper also describes a new method for calibrating morphologies for galaxies of different luminosities and at different redshifts by using artificially-redshifted galaxy images as a baseline. The GZH catalogue contains both raw and calibrated morphological vote fractions for 119,849 galaxies, providing the largest dataset to date suitable for large-scale studies of galaxy evolution out to $z \sim 1$.
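
For intuition, here is a toy calculation of a raw morphological vote fraction of the kind released in the GZH catalogue. The task and answer labels are hypothetical, and the redshift-dependent calibration step described in the paper is not reproduced here.

from collections import Counter

def raw_vote_fraction(answers, answer_of_interest):
    """Fraction of volunteers who gave a particular answer for one galaxy/task."""
    counts = Counter(answers)
    total = sum(counts.values())
    return counts[answer_of_interest] / total if total else 0.0

# Hypothetical answers to a single task for one galaxy.
print(raw_vote_fraction(["smooth", "features", "features", "features"], "features"))  # 0.75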
With the advent of large scale surveys the manual analysis and classification of individual radio source morphologies is rendered impossible as existing approaches do not scale. The analysis of complex morphological features in the spatial domain is a particularly important task. Here we discuss the challenges of transferring crowdsourced labels obtained from the Radio Galaxy Zoo project and introduce a proper transfer mechanism via quantile random forest regression. By using parallelized rotation and flipping invariant Kohonen-maps, image cubes of Radio Galaxy Zoo selected galaxies formed from the FIRST radio continuum and WISE infrared all sky surveys are first projected down to a two-dimensional embedding in an unsupervised way. This embedding can be seen as a discretised space of shapes with the coordinates reflecting morphological features as expressed by the automatically derived prototypes. We find that these prototypes have reconstructed physically meaningful processes across two-channel images at radio and infrared wavelengths in an unsupervised manner. In the second step, images are compared with those prototypes to create a heat-map, which is the morphological fingerprint of each object and the basis for transferring the user-generated labels. These heat-maps have reduced the feature space by a factor of 248 and can be used as the basis for subsequent ML methods. Using an ensemble of decision trees we achieve upwards of 85.7% and 80.7% accuracy when predicting the number of components and peaks in an image, respectively, using these heat-maps. We also question the currently used discrete classification schema and introduce a continuous scale that better reflects the uncertainty in transition between two classes, caused by sensitivity and resolution limits.
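
A rough sketch of the label-transfer step described above: each object's heat-map (its responses to the learned prototypes) is flattened into a feature vector and a forest of decision trees is trained to predict a crowdsourced property such as the number of radio components. A standard random forest stands in for the paper's quantile random forest regression, and the random arrays stand in for real Radio Galaxy Zoo heat-maps and labels.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
heatmaps = rng.random((500, 10, 10))          # toy stand-in for 10x10 SOM heat-maps
n_components = rng.integers(1, 4, size=500)   # toy stand-in for crowdsourced labels

X = heatmaps.reshape(len(heatmaps), -1)       # flatten each heat-map into a feature vector
X_train, X_test, y_train, y_test = train_test_split(X, n_components, random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_train, y_train)
print("mean absolute error:", np.abs(forest.predict(X_test) - y_test).mean())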
The upcoming next-generation large area radio continuum surveys can expect tens of millions of radio sources, rendering the traditional method for radio morphology classification through visual inspection unfeasible. We present ClaRAN - Classifying Radio sources Automatically with Neural networks - a proof-of-concept radio source morphology classifier based upon the Faster Region-based Convolutional Neural Networks (Faster R-CNN) method. Specifically, we train and test ClaRAN on the FIRST and WISE images from the Radio Galaxy Zoo Data Release 1 catalogue. ClaRAN provides end users with automated identification of radio source morphology classifications from a simple input of a radio image and a counterpart infrared image of the same region. ClaRAN is the first open-source, end-to-end radio source morphology classifier that is capable of locating and associating discrete and extended components of radio sources in a fast (< 200 milliseconds per image) and accurate (>= 90 %) fashion. Future work will improve ClaRAN's relatively lower success rates in dealing with multi-source fields and will enable ClaRAN to identify sources on much larger fields without loss in classification accuracy.
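
To show the kind of detector ClaRAN builds on, the sketch below runs an off-the-shelf Faster R-CNN from torchvision on a dummy image and reads back boxes, labels, and scores. This is not ClaRAN's actual implementation or configuration; the class count and the input are illustrative assumptions only.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Untrained detector; the 7 classes (6 morphology classes + background) are
# an assumption for illustration, not ClaRAN's published configuration.
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=7)
model.eval()

# A dummy 3-channel image standing in for a radio + infrared composite.
image = torch.rand(3, 600, 600)
with torch.no_grad():
    prediction = model([image])[0]

# Each detection carries a bounding box, a class label and a confidence score.
print(prediction["boxes"].shape, prediction["labels"].shape, prediction["scores"].shape)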
