
Automated crater shape retrieval using weakly-supervised deep learning

Published by: Mohamad Ali-Dib
Publication date: 2019
Research field: Physics
Paper language: English





Crater ellipticity determination is a complex and time-consuming task that has so far evaded successful automation. We train a state-of-the-art computer vision algorithm to identify craters in Lunar digital elevation maps and retrieve their sizes and 2D shapes. The computational backbone of the model is Mask R-CNN, a general instance segmentation framework that detects craters in an image while simultaneously producing a mask for each crater that traces its outer rim. Our post-processing pipeline then finds the closest-fitting ellipse to these masks, allowing us to retrieve the crater ellipticities. Our model correctly identifies 87% of known craters in the longitude range we hid from the network during training and validation (the test set), while predicting thousands of additional craters not present in our training data. Manual validation of a subset of these new craters indicates that a majority of them are real, which we take as an indicator of the strength of our model in learning to identify craters despite incomplete training data. The crater size, ellipticity, and depth distributions predicted by our model are consistent with human-generated results. The model allows us to perform a large-scale search for differences in crater diameter and shape distributions between the lunar highlands and maria, and we exclude any such differences with high statistical significance. The predicted test-set catalogue and trained model are available here: https://github.com/malidib/Craters_MaskRCNN/.
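The post-processing step described above, fitting the closest ellipse to each predicted rim mask, can be sketched as follows. This is a minimal moment-based stand-in rather than the paper's actual fitting routine, and the synthetic mask is purely illustrative:

```python
import numpy as np

def mask_ellipticity(mask):
    """Estimate the ellipticity (major/minor semi-axis ratio) of a binary
    mask from the covariance of its pixel coordinates (second image moments).
    A simple stand-in for a full closest-fitting-ellipse routine."""
    ys, xs = np.nonzero(mask)
    cov = np.cov(np.stack([xs, ys]))
    eigvals = np.sort(np.linalg.eigvalsh(cov))
    # Semi-axes of the equivalent ellipse scale as sqrt(eigenvalue)
    return float(np.sqrt(eigvals[1] / eigvals[0]))

# Synthetic elliptical "crater" mask with a 2:1 axis ratio
yy, xx = np.mgrid[0:200, 0:200]
mask = ((xx - 100) / 60.0) ** 2 + ((yy - 100) / 30.0) ** 2 <= 1.0
print(mask_ellipticity(mask))  # close to 2.0 for this 2:1 mask
```

For clean, filled masks the moment-based estimate behaves much like a least-squares ellipse fit; the paper instead fits an ellipse to the rim trace produced by Mask R-CNN.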




Read also

Crater counting on the Moon and other bodies is crucial to constrain the dynamical history of the Solar System. This has traditionally been done by visual inspection of images, thus limiting the scope, efficiency, and/or accuracy of retrieval. In this paper we demonstrate the viability of using convolutional neural networks (CNNs) to determine the positions and sizes of craters from Lunar digital elevation maps (DEMs). We recover 92% of craters from the human-generated test set and almost double the total number of crater detections. Of these new craters, 15% are smaller in diameter than the minimum crater size in the ground-truth dataset. Our median fractional longitude, latitude and radius errors are 11% or less, representing good agreement with the human-generated datasets. From a manual inspection of 361 new craters we estimate the false positive rate of new craters to be 11%. Moreover, our Moon-trained CNN performs well when tested on DEM images of Mercury, detecting a large fraction of craters in each map. Our results suggest that deep learning will be a useful tool for rapidly and automatically extracting craters on various Solar System bodies. We make our code and data publicly available at https://github.com/silburt/DeepMoon.git and https://doi.org/10.5281/zenodo.1133969 .
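The median fractional errors quoted above could be computed along these lines. Normalising all three errors by the true crater radius is an assumption of this sketch (the abstract does not spell out the convention), and the matched (lon, lat, radius) triples are fabricated for illustration:

```python
import numpy as np

def median_fractional_errors(pred, truth):
    """Median fractional errors for matched craters given as (lon, lat, radius)
    rows. Each error is expressed as a fraction of the true radius; this is
    one plausible reading of the metric, not the paper's exact definition."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    radii = truth[:, 2]
    frac = np.abs(pred - truth) / radii[:, None]
    return np.median(frac, axis=0)

# Illustrative matched pairs: (longitude, latitude, radius)
pred = [(10.1, 20.0, 5.2), (30.0, 40.3, 8.0)]
truth = [(10.0, 20.2, 5.0), (30.2, 40.0, 7.8)]
err_lon, err_lat, err_rad = median_fractional_errors(pred, truth)
print(err_lon, err_lat, err_rad)
```

With realistic catalogs, `pred` and `truth` would come from a position-and-size matching step between the CNN detections and the human-generated ground truth.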
One of the principal bottlenecks to atmosphere characterisation in the era of all-sky surveys is the availability of fast, autonomous and robust atmospheric retrieval methods. We present a new approach using unsupervised machine learning to generate informed priors for retrieval of exoplanetary atmosphere parameters from transmission spectra. We use principal component analysis (PCA) to efficiently compress the information content of a library of transmission spectra forward models generated using the PLATON package. We then apply a $k$-means clustering algorithm in PCA space to segregate the library into discrete classes. We show that our classifier is almost always able to instantaneously place a previously unseen spectrum into the correct class, for low-to-moderate spectral resolutions, $R$, in the range $R~=~30-300$ and noise levels up to $10$~per~cent of the peak-to-trough spectrum amplitude. The distribution of physical parameters for all members of the class therefore provides an informed prior for standard retrieval methods such as nested sampling. We benchmark our informed-prior approach against a standard uniform-prior nested sampler, finding that our approach is up to a factor two faster, with negligible reduction in accuracy. We demonstrate the application of this method to existing and near-future observatories, and show that it is suitable for real-world application. Our general approach is not specific to transmission spectroscopy and should be more widely applicable to cases that involve repetitive fitting of trusted high-dimensional models to large data catalogues, including beyond exoplanetary science.
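The PCA-plus-k-means classification step can be sketched with scikit-learn (an assumed dependency; the paper's library comes from PLATON forward models, whereas the toy "spectra" below are fabricated two-class signals):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy "library" of transmission spectra: two classes with distinct shapes
wavelengths = np.linspace(0.0, 1.0, 50)
class_a = np.sin(2 * np.pi * wavelengths) + 0.05 * rng.standard_normal((200, 50))
class_b = np.cos(2 * np.pi * wavelengths) + 0.05 * rng.standard_normal((200, 50))
library = np.vstack([class_a, class_b])

# Compress the library with PCA, then cluster in PCA space
pca = PCA(n_components=5).fit(library)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pca.transform(library))

# Place a previously unseen spectrum into a class
new_spec = np.sin(2 * np.pi * wavelengths)[None, :]
label = kmeans.predict(pca.transform(new_spec))[0]
```

In the paper's scheme, the parameter distribution of the library members assigned to `label`'s cluster would then serve as the informed prior for a nested-sampling retrieval.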
Crater cataloging is an important yet time-consuming part of geological mapping. We present an automated Crater Detection Algorithm (CDA) that is competitive with expert-human researchers and hundreds of times faster. The CDA uses multiple neural networks to process digital terrain model and thermal infra-red imagery to identify and locate craters across the surface of Mars. We use additional post-processing filters to refine and remove potential false crater detections, improving our precision and recall by 10% compared to Lee (2019). We now find 80% of known craters above 3km in diameter, and identify 7,000 potentially new craters (13% of the identified craters). The median differences between our catalog and other independent catalogs are 2-4% in location and diameter, in line with other inter-catalog comparisons. The CDA has been used to process global terrain maps and infra-red imagery for Mars, and the software and generated global catalog are available at https://doi.org/10.5683/SP2/CFUNII.
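Catalog comparisons of this kind reduce to matching detected craters against a ground-truth catalog within a position and diameter tolerance, then computing precision and recall. A hedged sketch, with the greedy matching rule and the tolerance chosen for illustration rather than taken from the paper:

```python
import numpy as np

def match_craters(detected, truth, tol=0.25):
    """Greedy one-to-one matching of detected craters (x, y, diameter) to a
    ground-truth catalog. Two craters match when both the centre offset and
    the diameter difference are within `tol` times the true diameter.
    Returns (precision, recall). An illustrative stand-in, not the CDA's code."""
    matched = set()
    true_positives = 0
    for x, y, d in detected:
        for i, (tx, ty, td) in enumerate(truth):
            if i in matched:
                continue
            if np.hypot(x - tx, y - ty) <= tol * td and abs(d - td) <= tol * td:
                matched.add(i)
                true_positives += 1
                break
    precision = true_positives / len(detected) if detected else 0.0
    recall = true_positives / len(truth) if truth else 0.0
    return precision, recall

# Fabricated example: three true craters, two good detections, one spurious
truth = [(10, 10, 4), (50, 50, 8), (90, 20, 6)]
detected = [(10.2, 9.8, 4.1), (49, 51, 7.5), (30, 30, 5)]
p, r = match_craters(detected, truth)
print(p, r)  # 2 of 3 detections match, 2 of 3 true craters found
```

Real pipelines typically resolve ambiguous matches more carefully (e.g. nearest-first ordering), but the precision/recall bookkeeping is the same.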
Over the past decade, the study of extrasolar planets has evolved rapidly from plain detection and identification to comprehensive categorization and characterization of exoplanet systems and their atmospheres. Atmospheric retrieval, the inverse modeling technique used to determine an exoplanetary atmosphere's temperature structure and composition from an observed spectrum, is both time-consuming and compute-intensive, requiring complex algorithms that compare thousands to millions of atmospheric models to the observational data to find the most probable values and associated uncertainties for each model parameter. For rocky, terrestrial planets, the retrieved atmospheric composition can give insight into the surface fluxes of gaseous species necessary to maintain the stability of that atmosphere, which may in turn provide insight into the geological and/or biological processes active on the planet. These atmospheres contain many molecules, some of them biosignatures: spectral fingerprints indicative of biological activity, which will become observable with the next generation of telescopes. Runtimes of traditional retrieval models scale with the number of model parameters, so as more molecular species are considered, runtimes can become prohibitively long. Recent advances in machine learning (ML) and computer vision offer new ways to reduce the time to perform a retrieval by orders of magnitude, given a sufficient data set to train with. Here we present an ML-based retrieval framework called Intelligent exoplaNet Atmospheric RetrievAl (INARA) that consists of a Bayesian deep learning model for retrieval and a data set of 3,000,000 synthetic rocky exoplanetary spectra generated using the NASA Planetary Spectrum Generator. Our work represents the first ML retrieval model for rocky, terrestrial exoplanets and the first synthetic data set of terrestrial spectra generated at this scale.
We present a weakly supervised instance segmentation algorithm based on deep community learning with multiple tasks. This task is formulated as a combination of weakly supervised object detection and semantic segmentation, where individual objects of the same class are identified and segmented separately. We address this problem by designing a unified deep neural network architecture, which has a positive feedback loop of object detection with bounding box regression, instance mask generation, instance segmentation, and feature extraction. Each component of the network makes active interactions with others to improve accuracy, and the end-to-end trainability of our model makes our results more robust and reproducible. The proposed algorithm achieves state-of-the-art performance in the weakly supervised setting without any additional training such as Fast R-CNN and Mask R-CNN on the standard benchmark dataset. The implementation of our algorithm is available on the project webpage: https://cv.snu.ac.kr/research/WSIS_CL.