
Multiband galaxy morphologies for CLASH: a convolutional neural network transferred from CANDELS

Publication date: 2018
Field: Physics
Language: English





We present visual-like morphologies over 16 photometric bands, from ultraviolet to near-infrared, for 8,412 galaxies in the Cluster Lensing And Supernova survey with Hubble (CLASH), obtained by a convolutional neural network (CNN) model. Our model follows the CANDELS main morphological classification scheme, obtaining the probability that each galaxy, in each CLASH band, is a spheroid, disk, irregular, point source, or unclassifiable. Our catalog contains morphologies for each galaxy with Hmag < 24.5 in every filter where the galaxy is observed. We trained an initial CNN model using approximately 7,500 expert eyeball labels from The Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS). We then created eyeball labels for 100 randomly selected galaxies in each of the 16 CLASH filters (1,600 galaxy images in total), where each image was classified by at least five of us. We used these labels to fine-tune the network so that it accurately predicts labels for the CLASH data, and to evaluate the performance of our model. We achieve a root-mean-square error of 0.0991 on the test set. We show that our proposed fine-tuning technique reduces the number of labeled images needed for training compared to directly training on the CLASH data, while achieving better performance. This approach minimizes eyeball-labeling effort when classifying unlabeled data from new surveys, and will become particularly useful for the massive datasets expected from near-future surveys such as Euclid or LSST. Our catalog consists of the predicted probability of each morphological class for each galaxy in each of its bands, and is made publicly available at http://www.inf.udec.cl/~guille/data/Deep-CLASH.csv.
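
The fine-tuning strategy described above maps naturally onto standard deep-learning code. Below is a minimal PyTorch sketch of that step, assuming a small CNN pretrained on CANDELS vote fractions whose convolutional base is frozen while the dense head is retrained on the 1,600 newly labeled CLASH images; the architecture, file names, and hyperparameters are illustrative placeholders, not the authors' actual code.

import torch
import torch.nn as nn

N_CLASSES = 5  # spheroid, disk, irregular, point source, unclassifiable

class MorphCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 14 * 14, 128), nn.ReLU(),  # sized for 64x64 cutouts
            nn.Linear(128, N_CLASSES), nn.Sigmoid(),  # per-class probabilities
        )

    def forward(self, x):
        return self.head(self.features(x))

model = MorphCNN()
model.load_state_dict(torch.load("candels_pretrained.pt"))  # placeholder path

for p in model.features.parameters():  # freeze the CANDELS-trained base
    p.requires_grad = False

optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()  # test RMSE is the square root of this loss

def fine_tune_step(images, vote_fractions):
    """One gradient step on a batch of CLASH cutouts and eyeball labels."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), vote_fractions)
    loss.backward()
    optimizer.step()
    return loss.item()

Training against vote fractions with a mean-squared-error loss matches the reported evaluation metric: the square root of the test loss is directly comparable to the quoted 0.0991.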




Related research

We utilize techniques from deep learning to identify signatures of stellar feedback in simulated molecular clouds. Specifically, we implement a deep neural network with an architecture similar to U-Net and apply it to the problem of identifying wind-driven shells and bubbles using data from magneto-hydrodynamic simulations of turbulent molecular clouds with embedded stellar sources. The network is applied to two tasks, dense regression and segmentation, on two varieties of data: simulated density and synthetic $^{12}$CO observations. Our Convolutional Approach for Shell Identification (CASI) is able to obtain a true positive rate greater than 90%, while maintaining a false positive rate of 1%, on two segmentation tasks, and also performs well on related regression tasks. The source code for CASI is available on GitLab.
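
As a rough illustration of the architecture family this abstract names, here is a compact U-Net-style network in PyTorch with a single encoder/decoder level and a skip connection; depth, channel counts, and losses are assumptions, and the real implementation lives in the authors' GitLab repository.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Two 3x3 convolutions, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(),
    )

class TinyUNet(nn.Module):
    """One encoder/decoder level with a skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = conv_block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = conv_block(32, 16)   # 16 skip channels + 16 upsampled
        self.out = nn.Conv2d(16, 1, 1)  # per-pixel shell/bubble logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.out(d)

# Segmentation: train with nn.BCEWithLogitsLoss against binary shell masks.
# Dense regression reuses the same trunk with an identity output and MSE.
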
We present a novel method of classifying Type Ia supernovae using convolutional neural networks, a neural network framework typically used for image recognition. Our model is trained on photometric information only, eliminating the need for accurate redshift data. Photometric data are pre-processed via 2D Gaussian process regression into two-dimensional images created from flux values at each location in wavelength-time space. These flux heatmaps of each supernova detection, along with uncertainty heatmaps of the Gaussian process uncertainty, constitute the dataset for our model. This preprocessing step not only smooths over irregular sampling rates between filters but also allows SCONE to be independent of the filter set on which it was trained. Our model has achieved impressive performance without redshift on the in-distribution SNIa classification problem: $99.73 \pm 0.26\%$ test accuracy with no over/underfitting on a subset of supernovae from PLAsTiCC's unblinded test dataset. We have also achieved $98.18 \pm 0.3\%$ test accuracy performing 6-way classification of supernovae by type. The out-of-distribution performance does not fully match the in-distribution results, suggesting that the detailed characteristics of the training sample relative to the test sample have a large impact on performance. We discuss the implications and directions for future work. All of the data processing and model code developed for this paper can be found in the SCONE software package located at github.com/helenqu/scone.
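
The Gaussian-process preprocessing step can be sketched with scikit-learn: fit a 2D GP over (time, wavelength) to the observed fluxes, then evaluate its mean and standard deviation on a regular grid to form the two heatmap channels. The kernel, length scales, and grid sizes below are assumptions, not SCONE's exact configuration.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def light_curve_to_heatmaps(times, central_wavelengths, fluxes,
                            n_t=180, n_w=32):
    """Fit a 2D GP to photometric points and sample it on a regular grid."""
    X = np.column_stack([times, central_wavelengths])
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=[20.0, 500.0]),
                                  normalize_y=True)
    gp.fit(X, fluxes)

    t_grid = np.linspace(times.min(), times.max(), n_t)
    w_grid = np.linspace(central_wavelengths.min(),
                         central_wavelengths.max(), n_w)
    tt, ww = np.meshgrid(t_grid, w_grid)
    grid = np.column_stack([tt.ravel(), ww.ravel()])

    mean, std = gp.predict(grid, return_std=True)
    # Two channels: the flux heatmap and the GP-uncertainty heatmap.
    return mean.reshape(n_w, n_t), std.reshape(n_w, n_t)

Because the GP is evaluated on a fixed grid regardless of which filters sampled the light curve, the resulting images are filter-set independent, which is the property the abstract highlights.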
We examine morphology-separated color-mass diagrams to study the quenching of star formation in $\sim 100{,}000$ ($z \sim 0$) Sloan Digital Sky Survey (SDSS) and $\sim 20{,}000$ ($z \sim 1$) Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) galaxies. To classify galaxies morphologically, we developed Galaxy Morphology Network (GaMorNet), a convolutional neural network that classifies galaxies according to their bulge-to-total light ratio. GaMorNet does not need a large training set of real data and can be applied to data sets with a range of signal-to-noise ratios and spatial resolutions. GaMorNet's source code as well as the trained models are made public as part of this work ( http://www.astro.yale.edu/aghosh/gamornet.html ). We first trained GaMorNet on simulations of galaxies with a bulge and a disk component and then transfer-learned using $\sim 25\%$ of each data set to achieve misclassification rates of $\lesssim 5\%$. The misclassified sample of galaxies is dominated by small galaxies with low signal-to-noise ratios. Using the GaMorNet classifications, we find that bulge- and disk-dominated galaxies have distinct color-mass diagrams, in agreement with previous studies. For both SDSS and CANDELS galaxies, disk-dominated galaxies peak in the blue cloud, across a broad range of masses, consistent with the slow exhaustion of star-forming gas with no rapid quenching. A small population of red disks is found at high mass ($\sim 14\%$ of disks at $z \sim 0$ and $2\%$ of disks at $z \sim 1$). In contrast, bulge-dominated galaxies are mostly red, with much smaller numbers down toward the blue cloud, suggesting rapid quenching and fast evolution across the green valley. This inferred difference in quenching mechanism is in agreement with previous studies that used other morphology classification techniques on much smaller samples at $z \sim 0$ and $z \sim 1$.
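
The two-stage recipe, training on simulations first and then transfer-learning on roughly 25% of the real sample, might look like the following PyTorch sketch. The tiny architecture, three-class head, and data loaders are placeholders; the actual released models are at the URL above.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(3),  # disk-dominated / indeterminate / bulge-dominated
)
_ = model(torch.zeros(1, 1, 64, 64))  # materialize the lazy layer's shape

def train(loader, epochs, lr):
    """Cross-entropy training over whichever parameters remain unfrozen."""
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss_fn(model(images), labels).backward()
            opt.step()

# Stage 1: train from scratch on simulated bulge+disk galaxies
# (hypothetical loader):
# train(simulated_loader, epochs=50, lr=1e-3)

# Stage 2: freeze the convolutional layers, then fine-tune on ~25% of the
# labeled real sample:
# for p in list(model.parameters())[:4]:  # the two conv layers' weights/biases
#     p.requires_grad = False
# train(real_subset_loader, epochs=10, lr=1e-4)
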
The upcoming next-generation large-area radio continuum surveys can expect tens of millions of radio sources, rendering the traditional method of radio morphology classification through visual inspection unfeasible. We present ClaRAN - Classifying Radio sources Automatically with Neural networks - a proof-of-concept radio source morphology classifier based upon the Faster Region-based Convolutional Neural Networks (Faster R-CNN) method. Specifically, we train and test ClaRAN on the FIRST and WISE images from the Radio Galaxy Zoo Data Release 1 catalogue. ClaRAN provides end users with automated identification of radio source morphology classifications from a simple input of a radio image and a counterpart infrared image of the same region. ClaRAN is the first open-source, end-to-end radio source morphology classifier that is capable of locating and associating discrete and extended components of radio sources in a fast (< 200 milliseconds per image) and accurate (>= 90%) fashion. Future work will improve ClaRAN's relatively lower success rate in dealing with multi-source fields and will enable ClaRAN to identify sources in much larger fields without loss of classification accuracy.
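
For readers unfamiliar with the detector pattern ClaRAN builds on, the sketch below runs torchvision's off-the-shelf Faster R-CNN in the same image-in, detections-out style; ClaRAN's own implementation, class set, and radio+infrared fusion differ, so treat this purely as an illustration.

import torch
import torchvision

# Assumed class count: six Radio Galaxy Zoo morphologies plus background.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=None, num_classes=7)
model.eval()

# A radio image and its infrared counterpart would be pre-fused into the
# input channels here; torchvision expects a list of 3xHxW float tensors.
image = torch.rand(3, 132, 132)
with torch.no_grad():
    detections = model([image])[0]

# Each detection: a box locating a source component, its class, its score.
for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:
        print(label.item(), box.tolist(), round(score.item(), 2))
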
In convolutional neural networks (CNNs), standard dropout works poorly because the dropped information is not entirely obscured: features in convolutional layers are spatially correlated. Beyond randomly discarding regions or channels, many approaches try to overcome this defect by dropping influential units. In this paper, we propose a non-random dropout method named FocusedDropout, aiming to make the network focus more on the target. In FocusedDropout, we use a simple but effective way to search for the target-related features, retain these features, and discard the others, which is contrary to the existing methods. We find that this method improves performance by making the network more target-focused. Moreover, increasing the weight decay while using FocusedDropout avoids overfitting and increases accuracy. Experimental results show that even at a slight cost, employing FocusedDropout on only 10% of batches, the method produces a clear performance boost over the baselines on multiple classification datasets, including CIFAR-10, CIFAR-100, and Tiny ImageNet, and generalizes well across different CNN models.
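
One plausible reading of the FocusedDropout rule, keep the spatial positions that light up the most active channel and zero everything else, can be written as a small PyTorch module. The selection heuristic and keep ratio below are assumptions rather than the paper's exact recipe, and, following the abstract, the module would be applied on only about 10% of training batches.

import torch
import torch.nn as nn

class FocusedDropout(nn.Module):
    def __init__(self, keep_ratio=0.6):
        super().__init__()
        self.keep_ratio = keep_ratio  # fraction of spatial positions kept

    def forward(self, x):             # x: (batch, channels, H, W)
        if not self.training:
            return x
        b, c, h, w = x.shape
        # The channel with the highest mean activation per sample serves
        # as a proxy for where the target lives in the feature map.
        key = x.mean(dim=(2, 3)).argmax(dim=1)             # (b,)
        key_maps = x[torch.arange(b), key]                 # (b, H, W)
        # Keep only the top keep_ratio of positions on that key channel.
        thresh = torch.quantile(
            key_maps.flatten(1), 1 - self.keep_ratio, dim=1)
        mask = (key_maps >= thresh.view(b, 1, 1)).unsqueeze(1).float()
        return x * mask  # same spatial mask applied across all channels

# Typical use: insert after a late convolutional block, e.g.
# nn.Sequential(conv_layers, FocusedDropout(0.6), classifier_head)
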
