
Morphological classification of astronomical images with limited labelling

Posted by Andrew Soroka
Publication date: 2021
Research language: English





The task of morphological classification is too complex for simple parameterization, but it is important for research in the field of galaxy evolution. Future galaxy surveys (e.g. EUCLID) will collect data on more than $10^9$ galaxies. Obtaining morphological information requires people to mark up galaxy images, which demands either a considerable amount of money or a huge number of volunteers. We propose an effective semi-supervised approach to the galaxy morphology classification task, based on active learning with an adversarial autoencoder (AAE) model. For a binary classification problem (the top-level question of the Galaxy Zoo 2 decision tree) we achieved 93.1% accuracy on the test set with only 0.86 million markup actions, and the model scales easily to any number of images. Our best model, trained with additional markup, achieves an accuracy of 95.5%. To the best of our knowledge, this is the first time a semi-supervised AAE model has been used in astronomy.
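A minimal PyTorch sketch of the semi-supervised AAE idea described above follows: an encoder emits both a style code z and class logits, a decoder reconstructs the image from the pair, an adversarial regularizer pushes the latent distribution toward a Gaussian prior, and a supervised cross-entropy term covers the small actively-labelled subset. All layer sizes, LATENT_DIM, and the loss weighting are illustrative assumptions, not the authors' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, N_CLASSES = 16, 2  # binary top-level Galaxy Zoo 2 question

class Encoder(nn.Module):
    """Maps an image to a style code z and class logits y."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.to_z = nn.Linear(64, LATENT_DIM)
        self.to_y = nn.Linear(64, N_CLASSES)

    def forward(self, x):
        h = self.backbone(x)
        return self.to_z(h), self.to_y(h)

class Decoder(nn.Module):
    """Reconstructs the image from (z, soft class assignment y)."""
    def __init__(self, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + N_CLASSES, img_size * img_size),
            nn.Sigmoid())

    def forward(self, z, y):
        out = self.net(torch.cat([z, y], dim=1))
        return out.view(-1, 1, self.img_size, self.img_size)

# Discriminator judging whether a latent code came from the N(0, I) prior.
disc = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))

def unsupervised_step(x, enc, dec):
    # Reconstruction plus adversarial regularization of q(z); the
    # discriminator's own update step is omitted for brevity.
    z, y_logits = enc(x)
    recon = dec(z, F.softmax(y_logits, dim=1))
    recon_loss = F.mse_loss(recon, x)
    adv_loss = F.binary_cross_entropy_with_logits(
        disc(z), torch.ones(z.size(0), 1))
    return recon_loss + 0.1 * adv_loss

def supervised_step(x, labels, enc):
    # Cross-entropy on the small actively-labelled subset.
    _, y_logits = enc(x)
    return F.cross_entropy(y_logits, labels)
```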




Read also

We present AstroVaDEr, a variational autoencoder designed to perform unsupervised clustering and synthetic image generation using astronomical imaging catalogues. The model is a convolutional neural network that learns to embed images into a low-dimensional latent space, and simultaneously optimises a Gaussian Mixture Model (GMM) on the embedded vectors to cluster the training data. By utilising variational inference, we are able to use the learned GMM as a statistical prior on the latent space to facilitate random sampling and generation of synthetic images. We demonstrate AstroVaDEr's capabilities by training it on gray-scaled gri images from the Sloan Digital Sky Survey, using a sample of galaxies that are classified by Galaxy Zoo 2. An unsupervised clustering model is found which separates galaxies based on learned morphological features such as axis ratio, surface brightness profile, orientation and the presence of companions. We use the learned mixture model to generate synthetic images of galaxies based on the morphological profiles of the Gaussian components. AstroVaDEr succeeds in producing a morphological classification scheme from unlabelled data, but unexpectedly places high importance on the presence of companion objects, demonstrating the importance of human interpretation. The network is scalable and flexible, allowing for larger datasets to be classified, or different kinds of imaging data. We also demonstrate the generative properties of the model, which allow for realistic synthetic images of galaxies to be sampled from the learned classification scheme. These can be used to create synthetic image catalogues or to perform image processing tasks such as deblending.
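As an illustration of the clustering and generation steps this abstract describes, the sketch below fits a Gaussian Mixture Model on encoder embeddings and samples the mixture to decode synthetic images. Note that AstroVaDEr optimises the GMM jointly with the network via variational inference, whereas this post-hoc fit, along with the encoder/decoder callables and n_clusters=12, is a simplifying assumption for illustration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cluster_and_sample(encoder, decoder, images, n_clusters=12, n_samples=16):
    # Embed the image catalogue into the low-dimensional latent space.
    latents = np.stack([encoder(img) for img in images])

    # Cluster the embeddings; each Gaussian component acts as one
    # unsupervised "morphological class".
    gmm = GaussianMixture(n_components=n_clusters, covariance_type="full")
    labels = gmm.fit_predict(latents)

    # Generative use of the mixture: sample latent vectors from the
    # learned components and decode them into synthetic galaxy images.
    z_samples, _components = gmm.sample(n_samples)
    synthetic = [decoder(z) for z in z_samples]
    return labels, synthetic
```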
Deep learning models achieve strong performance for radiology image classification, but their practical application is bottlenecked by the need for large labeled training datasets. Semi-supervised learning (SSL) approaches leverage small labeled datasets alongside larger unlabeled datasets and offer potential for reducing labeling cost. In this work, we introduce NoTeacher, a novel consistency-based SSL framework which incorporates probabilistic graphical models. Unlike Mean Teacher, which maintains a teacher network updated via a temporal ensemble, NoTeacher employs two independent networks, thereby eliminating the need for a teacher network. We demonstrate how NoTeacher can be customized to handle a range of challenges in radiology image classification. Specifically, we describe adaptations for scenarios with 2D and 3D inputs, uni- and multi-label classification, and class distribution mismatch between labeled and unlabeled portions of the training data. In realistic empirical evaluations on three public benchmark datasets spanning the workhorse modalities of radiology (X-Ray, CT, MRI), we show that NoTeacher achieves over 90-95% of the fully supervised AUROC with less than a 5-15% labeling budget. Further, NoTeacher outperforms established SSL methods with minimal hyperparameter tuning, and stands as a principled and practical option for semi-supervised learning in radiology applications.
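A minimal sketch of the teacher-free consistency idea: two independently initialised networks receive different augmentations of the same unlabelled batch and are penalised for disagreeing, alongside a standard supervised loss on the labelled batch. The actual NoTeacher framework couples the networks through a probabilistic graphical model; the plain MSE consistency term below is an illustrative stand-in, and all argument names are hypothetical.

```python
import torch
import torch.nn.functional as F

def noteacher_style_loss(net_f, net_g, x_lab, y_lab, x_unlab, augment):
    # Supervised term: both networks fit the labelled data.
    sup = (F.cross_entropy(net_f(x_lab), y_lab)
           + F.cross_entropy(net_g(x_lab), y_lab))

    # Consistency term: the two networks should agree on unlabelled
    # inputs under independent augmentations (no teacher/EMA needed).
    p_f = torch.softmax(net_f(augment(x_unlab)), dim=1)
    p_g = torch.softmax(net_g(augment(x_unlab)), dim=1)
    cons = F.mse_loss(p_f, p_g)

    return sup + cons
```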
An increasing number of applications in the computer vision domain, especially in medical imaging and remote sensing, are challenging when the goal is to classify very large images with tiny objects. More specifically, these types of classification tasks face two key challenges: $i$) the size of the input image in the target dataset is usually on the order of megapixels, yet existing deep architectures do not easily operate on such big images due to memory constraints, so a memory-efficient method is needed to process them; and $ii$) only a small fraction of the input image is informative of the label of interest, resulting in a low region-of-interest (ROI) to image ratio. However, most current convolutional neural networks (CNNs) are designed for image classification datasets that have relatively large ROIs and small image sizes (sub-megapixel). Existing approaches have addressed these two challenges in isolation. We present an end-to-end CNN model termed Zoom-In network that leverages hierarchical attention sampling for classification of large images with tiny objects using a single GPU. We evaluate our method on two large-image datasets and one gigapixel dataset. Experimental results show that our model achieves higher accuracy than existing methods while requiring fewer computing resources.
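To make the attention-sampling idea concrete, the sketch below scores patch locations on a cheap downsampled view and runs the expensive classifier only on the top-scoring full-resolution crops, so memory stays bounded by the patch size rather than the megapixel input. The attention_net and classifier modules, the patch size, and the top-k value are hypothetical; the Zoom-In network's exact hierarchical two-level scheme differs.

```python
import torch
import torch.nn.functional as F

def zoom_in_forward(image, attention_net, classifier, patch=256, k=8, scale=8):
    # image: (1, C, H, W) full-resolution tensor, H and W multiples of patch.
    thumb = F.avg_pool2d(image, scale)          # memory-cheap overview
    scores = attention_net(thumb)               # assumed (1, 1, H/patch, W/patch)
    flat = scores.flatten()
    top = torch.topk(flat, k).indices           # indices of salient grid cells

    n_cols = scores.shape[-1]
    logits = []
    for idx in top.tolist():
        r, c = divmod(idx, n_cols)
        crop = image[:, :, r*patch:(r+1)*patch, c*patch:(c+1)*patch]
        logits.append(classifier(crop))         # high-res pass on ROIs only

    # Aggregate patch predictions, weighted by their attention scores.
    w = torch.softmax(flat[top], dim=0)
    return (torch.stack(logits, 0).squeeze(1) * w[:, None]).sum(0)
```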
Vertebral labelling and segmentation are two fundamental tasks in an automated spine processing pipeline. Reliable and accurate processing of spine images is expected to benefit clinical decision-support systems for diagnosis, surgery planning, and population-based analysis of spine and bone health. However, designing automated algorithms for spine processing is challenging, predominantly due to considerable variations in anatomy and acquisition protocols and a severe shortage of publicly available data. Addressing these limitations, the Large Scale Vertebrae Segmentation Challenge (VerSe) was organised in conjunction with the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in 2019 and 2020, with a call for algorithms for labelling and segmentation of vertebrae. Two datasets containing a total of 374 multi-detector CT scans from 355 patients were prepared, and 4505 vertebrae were individually annotated at voxel level by a human-machine hybrid algorithm (https://osf.io/nqjyw/, https://osf.io/t98fz/). A total of 25 algorithms were benchmarked on these datasets. In this work, we present the results of this evaluation and further investigate the performance variation at vertebra level, scan level, and at different fields of view. We also evaluate the generalisability of the approaches to an implicit domain shift in the data by evaluating the top-performing algorithms of one challenge iteration on data from the other iteration. The principal takeaway from VerSe is that the performance of an algorithm in labelling and segmenting a spine scan hinges on its ability to correctly identify vertebrae in cases of rare anatomical variations. The content and code concerning VerSe can be accessed at: https://github.com/anjany/verse.
Purpose: To use and test a labelling algorithm that operates on two-dimensional (2D) reformations, rather than three-dimensional (3D) data, to locate and identify vertebrae. Methods: We improved the Btrfly Net (described by Sekuboyina et al.), which works on sagittal and coronal maximum intensity projections (MIP), and augmented it with two additional components: spine-localization and adversarial a priori-learning. Furthermore, we explored two variants of adversarial training schemes that incorporated the anatomical a priori knowledge into the Btrfly Net. We investigated the superiority of the proposed approach for labelling vertebrae on three datasets: a public benchmarking dataset of 302 CT scans and two in-house datasets with a total of 238 CT scans. We employed the Wilcoxon signed-rank test to compute the statistical significance of the improvement in performance observed due to various architectural components in our approach. Results: On the public dataset, our approach using the described Btrfly(pe-eb) network performed on par with current state-of-the-art methods, achieving a statistically significant (p < .001) vertebrae identification rate of 88.5 +/- 0.2% and localization distances of less than 7 mm. On the in-house datasets, which had a higher inter-scan data variability, we obtained an identification rate of 85.1 +/- 1.2%. Conclusion: An identification performance comparable to existing 3D approaches was achieved when labelling vertebrae on 2D MIPs. The performance was further improved using the proposed adversarial training regime, which effectively enforced local spine a priori knowledge during training. Lastly, spine-localization increased the generalizability of our approach by homogenizing the content in the MIPs.
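For readers unfamiliar with the 2D reformations this approach operates on, the snippet below computes sagittal and coronal maximum intensity projections (MIPs) from a CT volume with a plain axis-wise max; the (z, y, x) axis ordering is an assumption about the volume's orientation.

```python
import numpy as np

def make_mips(volume: np.ndarray):
    """volume: 3D CT array ordered (z, y, x)."""
    sagittal = volume.max(axis=2)  # project along the left-right axis
    coronal = volume.max(axis=1)   # project along the front-back axis
    return sagittal, coronal
```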
