
Unsupervised machine learning for transient discovery in Deeper, Wider, Faster light curves

Posted by Sara Webb
Publication date: 2020
Research field: Physics
Paper language: English





Identification of anomalous light curves within time-domain surveys is often challenging. In addition, with the growing number of wide-field surveys and the volume of data produced exceeding astronomers' ability for manual evaluation, outlier and anomaly detection is becoming vital for transient science. We present an unsupervised method for transient discovery using a clustering technique and the Astronomaly package. As proof of concept, we evaluate 85,553 minute-cadenced light curves collected over two 1.5-hour periods as part of the Deeper, Wider, Faster program, using two different telescope dithering strategies. By combining the clustering technique HDBSCAN with the isolation forest anomaly detection algorithm via the visual interface of Astronomaly, we are able to rapidly isolate anomalous sources for further analysis. We successfully recover the known variable sources from a range of catalogues within the fields, and find a further 7 uncatalogued variables and two stellar flare events, including a rarely observed ultra-fast (5-minute) flare from a likely M dwarf.
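To make the workflow concrete, the sketch below shows one way HDBSCAN and an isolation forest can be combined to rank light curves for human vetting. It is an illustrative sketch, not the authors' actual pipeline: it assumes the light curves have already been reduced to a feature matrix, the function name, scaling step and `min_cluster_size` value are assumptions, and the Astronomaly visual-inspection stage is not reproduced.

```python
# Illustrative sketch only: rank light curves by anomaly, assuming each
# light curve has been reduced to a row of features (e.g. amplitude,
# standard deviation, skew). Requires the third-party `hdbscan` package
# and scikit-learn.
import numpy as np
import hdbscan
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

def rank_anomalies(features: np.ndarray, min_cluster_size: int = 50):
    """Return indices of sources ordered from most to least anomalous."""
    X = StandardScaler().fit_transform(features)

    # HDBSCAN groups the bulk of "normal" light curves into dense clusters
    # and labels anything that fits no cluster as noise (-1).
    labels = hdbscan.HDBSCAN(min_cluster_size=min_cluster_size).fit_predict(X)

    # The isolation forest gives every source a continuous score;
    # lower scores mean more isolated, i.e. more anomalous.
    forest = IsolationForest(n_estimators=200, random_state=0).fit(X)
    scores = forest.score_samples(X)

    # Prioritise sources that are both HDBSCAN noise and strongly isolated,
    # then pass the ranked list to a human for inspection.
    order = np.argsort(scores)                 # most anomalous first
    noise_first = order[labels[order] == -1]
    return noise_first, scores
```

In this pattern the clustering pass accounts for the well-populated variability classes while the isolation forest supplies the ordering, so the handful of genuinely unusual sources surface near the top of the list for manual follow-up, which is the role Astronomaly's visual interface plays in the paper.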




Read also

We present the Deeper Wider Faster (DWF) program, which coordinates more than 30 multi-wavelength and multi-messenger facilities worldwide and in space to detect and study fast transients (millisecond-to-hours duration). DWF has four main components: (1) simultaneous observations, where about 10 major facilities, from radio to gamma-ray, are coordinated to perform deep, wide-field, fast-cadenced observations of the same field at the same time. Radio telescopes search for fast radio bursts while optical imagers and high-energy instruments search for seconds-to-hours timescale transient events; (2) real-time (seconds to minutes) supercomputer data processing and candidate identification, along with real-time (minutes) human inspection of candidates using sophisticated visualisation technology; (3) rapid-response (minutes) follow-up spectroscopy and imaging and conventional ToO observations; and (4) long-term follow-up with a global network of 1-4 m-class telescopes. The principal goals of DWF are to discover and study counterparts to fast radio bursts and gravitational wave events, along with millisecond-to-hour duration transients at all wavelengths.
We present our 500 pc distance-limited study of stellar flares using the Dark Energy Camera as part of the Deeper, Wider, Faster program. The data were collected via continuous 20-second cadence g-band imaging, and we identify 19,914 sources with precise distances from Gaia DR2 within twelve ~3 square-degree fields over a range of Galactic latitudes. An average of ~74 minutes is spent on each field per visit. All light curves were assessed through a novel unsupervised machine learning technique designed for anomaly detection. We identify 96 flare events occurring across 80 stars, the majority of which are M dwarfs. Integrated flare energies range from $\sim 10^{31}-10^{37}$ erg, with flare energy increasing with distance from the Galactic plane, consistent with older stellar populations producing less frequent yet more energetic flares. In agreement with previous studies, we observe an increase in flaring fraction from M0 to M6 spectral types. Furthermore, we find a decrease in the flaring fraction of stars as vertical distance from the Galactic plane increases, with a steep decline present around ~100 pc. We find that ~70% of identified flares occur on short timescales of ~8 minutes. Finally, we present the associated flare rates, finding a volumetric rate of $2.9 \pm 0.3 \times 10^{-6}$ flares pc$^{-3}$ hr$^{-1}$.
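For readers unfamiliar with the quantity, an integrated flare energy of the kind quoted above is essentially the distance-scaled integral of the excess flux over the flare, $E \approx 4\pi d^{2} \int (F(t) - F_{\rm q})\,dt$. The snippet below is a back-of-the-envelope sketch of that calculation only; it is not the paper's pipeline, and the quiescent-flux estimator, input units and argument names are assumptions.

```python
# Illustrative only: estimate an integrated flare energy from a calibrated
# g-band light curve and a Gaia distance. Units are assumed to be seconds,
# erg s^-1 cm^-2 and parsecs.
import numpy as np

PC_TO_CM = 3.0857e18  # parsecs to centimetres

def integrated_flare_energy(times_s, flux_cgs, distance_pc, in_flare):
    """E = 4*pi*d^2 * integral of the excess flux over the flaring epochs."""
    quiescent = np.median(flux_cgs[~in_flare])         # baseline outside the flare
    excess = np.clip(flux_cgs - quiescent, 0.0, None)  # excess flux during the flare
    fluence = np.trapz(excess[in_flare], times_s[in_flare])  # erg cm^-2
    d_cm = distance_pc * PC_TO_CM
    return 4.0 * np.pi * d_cm**2 * fluence             # erg
```

The strong $d^{2}$ dependence is one reason precise Gaia DR2 distances matter for the energy range reported above.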
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images produced by most major ground-based time-domain surveys with large-format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next-generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of ~32,000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20x20 pixel stamp around the centre of each candidate. This differs from previous work in that it operates directly on the pixels rather than relying on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25% of the training data. We find the best results from the random forest classifier and demonstrate that, by accepting a false positive rate of 1%, the classifier initially suggests a missed detection rate of around 10%. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6%.
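As a rough illustration of the pixel-based setup described above (a sketch under stated assumptions, not the paper's code), the snippet below trains a random forest directly on flattened 20x20 stamps and reads off the missed detection rate at roughly a 1% false positive rate; the array names, forest size and thresholding details are assumptions.

```python
# Illustrative sketch: real-bogus classification from raw pixel stamps.
# `stamps` is assumed to be an (N, 20, 20) array and `labels` is 1 for
# real transients, 0 for bogus detections.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_real_bogus(stamps: np.ndarray, labels: np.ndarray):
    X = stamps.reshape(len(stamps), -1)        # raw pixel intensities as features
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.25, random_state=0, stratify=labels)

    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    clf.fit(X_train, y_train)

    # Pick the decision threshold on the "real" probability that lets
    # through ~1% of bogus detections, then report the fraction of real
    # transients falling below it (the missed detection rate).
    proba = clf.predict_proba(X_test)[:, 1]
    bogus_scores = np.sort(proba[y_test == 0])
    threshold = bogus_scores[int(0.99 * len(bogus_scores))]
    missed_rate = float(np.mean(proba[y_test == 1] < threshold))
    return clf, threshold, missed_rate
```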
Machine learning algorithms are good tools for both classification and prediction purposes. These algorithms can further be used for scientific discovery from the enormous data being collected in our era. We present ways of discovering and understanding astronomical phenomena by applying machine learning algorithms to data collected with radio telescopes. We discuss the use of supervised machine learning algorithms to predict the free parameters of star formation histories and to better understand the relations between the different input and output parameters. We made use of deep learning to capture the non-linearity in the parameters. Our models are able to predict with low error rates and give the advantage of predicting in real time once the model has been trained. The other class of machine learning algorithms, viz. unsupervised learning, can prove to be very useful in finding patterns in the data. We explore how we use such unsupervised techniques on solar radio data to identify patterns and variations, and link such findings to theories that help to better understand the nature of the system being studied. We highlight the challenges faced in terms of data size, availability, features, processing ability and, importantly, the interpretability of results. As our ability to capture and store data increases, increased use of machine learning to understand the underlying physics in the information captured seems inevitable.
Fuzhao Xue, Ziji Shi, Futao Wei (2021)
More transformer blocks with residual connections have recently achieved impressive results on various tasks. To achieve better performance with fewer trainable parameters, recent methods propose to go shallower by parameter sharing or model compression along the depth. However, weak modeling capacity limits their performance. Conversely, going wider by introducing more trainable matrices and parameters would produce a huge model requiring advanced parallelism to train and serve. In this paper, we propose a parameter-efficient framework, going wider instead of deeper. Specifically, following existing works, we adopt parameter sharing to compress along the depth. However, such a deployment limits the performance. To maximize modeling capacity, we scale along the model width by replacing the feed-forward network (FFN) with a mixture-of-experts (MoE). Across transformer blocks, instead of sharing the normalization layers, we propose to use individual layer norms to transform various semantic representations in a more parameter-efficient way. To evaluate our plug-and-run framework, we design WideNet and conduct comprehensive experiments on popular computer vision and natural language processing benchmarks. On ImageNet-1K, our best model outperforms the Vision Transformer (ViT) by $1.5\%$ with $0.72\times$ the trainable parameters. Using $0.46\times$ and $0.13\times$ the parameters, WideNet still surpasses ViT and ViT-MoE by $0.8\%$ and $2.1\%$, respectively. On four natural language processing datasets, WideNet outperforms ALBERT by $1.8\%$ on average and surpasses BERT with factorized embedding parameterization by $0.8\%$ with fewer parameters.
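A minimal sketch of that architectural idea, assuming a PyTorch-style implementation (an illustrative re-implementation, not the released WideNet code): a single attention-plus-MoE block whose weights are reused at every depth step, while each step keeps its own pair of layer norms. The expert count, top-1 routing scheme and sizes are assumptions.

```python
# Illustrative sketch of "go wider instead of deeper": share the attention
# and mixture-of-experts weights across depth, but give each depth step
# its own layer norms. Not the authors' code; top-1 routing, no
# load-balancing loss, arbitrary sizes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFFN(nn.Module):
    """Token-wise, top-1 routed mixture of experts in place of a single FFN."""
    def __init__(self, dim: int, hidden: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)])

    def forward(self, x):                                # x: (batch, tokens, dim)
        probs = F.softmax(self.gate(x), dim=-1)          # routing probabilities
        top_p, top_idx = probs.max(dim=-1)               # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_p[mask].unsqueeze(-1) * expert(x[mask])
        return out

class WideBlock(nn.Module):
    """One shared transformer block applied `depth` times with unshared norms."""
    def __init__(self, dim: int = 256, heads: int = 4, depth: int = 6):
        super().__init__()
        self.depth = depth
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)  # shared
        self.moe = MoEFFN(dim, hidden=4 * dim)                           # shared
        self.norms1 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(depth)])
        self.norms2 = nn.ModuleList([nn.LayerNorm(dim) for _ in range(depth)])

    def forward(self, x):
        for d in range(self.depth):
            h = self.norms1[d](x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.moe(self.norms2[d](x))
        return x
```

Sharing the attention and expert weights keeps the parameter count close to that of a single block, while the per-depth layer norms let the same weights transform different representations at each step, which is the trade-off the abstract describes.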