
Density Based Outlier Scoring on Kepler Data

Posted by Daniel Giles
Publication date: 2020
Research field: Physics
Paper language: English





In the present era of large scale surveys, big data presents new challenges to the discovery process for anomalous data. Such data can be indicative of systematic errors, extreme (or rare) forms of known phenomena, or, most interestingly, truly novel phenomena which exhibit as-of-yet unobserved behaviors. In this work we present an outlier scoring methodology to identify and characterize the most promising unusual sources to facilitate discoveries of such anomalous data. We have developed a data mining method based on k-Nearest Neighbor distance in feature space to efficiently identify the most anomalous lightcurves. We test variations of this method, including using principal components of the feature space, removing select features, the effect of the choice of k, and applying the scoring to subsets of the sample. We evaluate the performance of our scoring on known object classes and find that it consistently scores rare (<1000) object classes higher than common classes. We have applied the scoring to all long cadence lightcurves of Quarters 1 to 17 of Kepler's prime mission and present outlier scores for all 2.8 million lightcurves of the roughly 200k objects.
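As a rough illustration of the k-Nearest Neighbor distance scoring described above, the sketch below scores each object by its mean distance to its k nearest neighbors in a standardized feature space. The feature matrix, the choice of k, and the use of scikit-learn are illustrative assumptions, not the authors' actual implementation.

```python
# Sketch of k-nearest-neighbor distance outlier scoring in a feature space.
# `features` is assumed to be an (n_objects, n_features) array of per-lightcurve
# statistics; k is a tunable neighbor count.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

def knn_outlier_scores(features, k=15):
    """Score each object by its mean distance to its k nearest neighbors.
    Larger scores indicate more isolated (more anomalous) objects."""
    X = StandardScaler().fit_transform(features)   # put features on a common scale
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, _ = nn.kneighbors(X)                     # first column is the zero self-distance
    return dist[:, 1:].mean(axis=1)

# Example with random placeholder features:
rng = np.random.default_rng(0)
scores = knn_outlier_scores(rng.normal(size=(1000, 8)), k=15)
print(scores.argsort()[-10:])  # indices of the 10 highest-scoring (most anomalous) objects
```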




Read also

Among the many challenges posed by the huge data volumes produced by the new generation of astronomical instruments is the search for rare and peculiar objects. Unsupervised outlier detection algorithms may provide a viable solution. In this work we compare the performance of six methods: the Local Outlier Factor, Isolation Forest, k-means clustering, a measure of novelty, and both a normal and a convolutional autoencoder. These methods were applied to data extracted from SDSS Stripe 82. After discussing the sensitivity of each method to its own set of hyperparameters, we combine the results from each method to rank the objects and produce a final list of outliers.
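The abstract does not specify how the per-method results are combined, so the sketch below uses a generic mean-rank aggregation over two of the named methods (Local Outlier Factor and Isolation Forest, as implemented in scikit-learn); the data, hyperparameters, and aggregation rule are placeholder assumptions rather than the paper's procedure.

```python
# Sketch of rank-based combination of unsupervised outlier detectors.
# Mean rank across methods is one generic aggregation choice.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

def combined_outlier_rank(X):
    lof = LocalOutlierFactor(n_neighbors=20)
    lof.fit(X)
    lof_score = -lof.negative_outlier_factor_         # higher = more anomalous
    iso_score = -IsolationForest(random_state=0).fit(X).score_samples(X)
    ranks = np.vstack([s.argsort().argsort() for s in (lof_score, iso_score)])
    return ranks.mean(axis=0)                          # mean rank across the two methods

X = np.random.default_rng(1).normal(size=(500, 10))
print(combined_outlier_rank(X).argsort()[-5:])         # five most anomalous rows
```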
Sigma clipping is commonly used in astronomy for outlier rejection, but the number of standard deviations beyond which one should clip data from a sample ultimately depends on the size of the sample. Chauvenet rejection is one of the oldest, and simplest, ways to account for this, but, like sigma clipping, it depends on the sample's mean and standard deviation, neither of which are robust quantities: both are easily contaminated by the very outliers they are being used to reject. Many more robust measures of central tendency, and of sample deviation, exist, but each has a tradeoff with precision. Here, we demonstrate that outlier rejection can be both very robust and very precise if decreasingly robust but increasingly precise techniques are applied in sequence. To this end, we present a variation on Chauvenet rejection that we call robust Chauvenet rejection (RCR), which uses three decreasingly robust/increasingly precise measures of central tendency, and four decreasingly robust/increasingly precise measures of sample deviation. We show this sequential approach to be very effective for a wide variety of contaminant types, even when a significant -- even dominant -- fraction of the sample is contaminated, and especially when the contaminants are strong. Furthermore, we have developed a bulk-rejection variant to significantly decrease computing times, and RCR can be applied both to weighted data and when fitting parameterized models to data. We present aperture photometry in a contaminated, crowded field as an example. RCR may be used by anyone at https://skynet.unc.edu/rcr, and source code is available there as well.
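For context, the sketch below implements classical Chauvenet rejection, the non-robust baseline that RCR refines: a point is discarded when the expected number of equally deviant points, N * P(|Z| >= z), drops below 0.5. The iteration scheme shown here is illustrative and is not the sequential robust/precise procedure of RCR itself.

```python
# Sketch of classical (non-robust) Chauvenet rejection.
import numpy as np
from scipy.stats import norm

def chauvenet_reject(data):
    data = np.asarray(data, dtype=float)
    keep = np.ones(data.size, dtype=bool)
    while True:
        x = data[keep]
        mu, sigma = x.mean(), x.std(ddof=1)
        z = np.abs(data - mu) / sigma
        expected = x.size * 2.0 * norm.sf(z)      # N * P(|Z| >= z), two-sided
        new_keep = keep & (expected >= 0.5)       # Chauvenet criterion
        if new_keep.sum() == keep.sum():
            return data[new_keep]
        keep = new_keep

sample = np.concatenate([np.random.default_rng(2).normal(size=100), [8.0, -9.0]])
print(chauvenet_reject(sample).size)              # the two strong contaminants are removed
```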
The Kepler Mission was launched on March 6, 2009 to perform a photometric survey of more than 100,000 dwarf stars to search for Earth-size planets with the transit technique. The reliability of the resulting planetary candidate list relies on the ability to identify and remove false positives. Major sources of astrophysical false positives are planetary transits and stellar eclipses on background stars. We describe several new techniques for the identification of background transit sources that are separated from their target stars, indicating an astrophysical false positive. These techniques use only Kepler photometric data. We describe the concepts and construction of these techniques in detail, as well as their performance and relative merits.
The Kepler mission has provided a wealth of data, revealing new insights in time-domain astronomy. However, Kepler's single band-pass has limited studies to a single wavelength. In this work we build a data-driven, pixel-level model for the Pixel Response Function (PRF) of Kepler targets, modeling the image data from the spacecraft. Our model is sufficiently flexible to capture known detector effects, such as non-linearity, intra-pixel sensitivity variations, and focus change. In theory, the shape of the Kepler PRF should also be weakly wavelength dependent, due to optical chromatic aberration and wavelength dependent detector response functions. We are able to identify these predicted shape changes to the PRF using the residuals between Kepler data and our model. In this work, we show that these PRF changes correspond to wavelength variability in Kepler targets using a small sample of eclipsing binaries. Using our model, we demonstrate that pixel-level light curves of eclipsing binaries show variable eclipse depths, ellipsoidal modulation and limb darkening. These changes at the pixel level are consistent with multi-wavelength photometry. Our work suggests each pixel in the Kepler data of a single target has a different effective wavelength, ranging from $\approx$ 550-750 nm. In this proof of concept, we demonstrate our model and discuss possible use cases for the wavelength dependent Pixel Response Function of Kepler. These use cases include characterizing variable systems and vetting exoplanet discoveries at the pixel level. The chromatic PRF of Kepler is due to weak wavelength dependence in the optical systems and detector of the telescope, and similar chromatic PRFs are expected in other similar telescopes, notably the NASA TESS telescope.
S. Aigrain 2017
We present ARC2 (Astrophysically Robust Correction 2), an open-source Python-based systematics-correction pipeline for the long cadence light curves of the Kepler prime mission. The ARC2 pipeline identifies and corrects any isolated discontinuities in the light curves, then removes trends common to many light curves. These trends are modelled using the publicly available co-trending basis vectors, within an (approximate) Bayesian framework with 'shrinkage' priors to minimise the risk of over-fitting and the injection of any additional noise into the corrected light curves, while keeping any astrophysical signals intact. We show that the ARC2 pipeline's performance matches that of the standard Kepler PDC-MAP data products using standard noise metrics, and demonstrate its ability to preserve astrophysical signals using injection tests with simulated stellar rotation and planetary transit signals. Although it is not identical, the ARC2 pipeline can thus be used as an open-source alternative to PDC-MAP, whenever the ability to model the impact of the systematics removal process on other kinds of signal is important.
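A minimal sketch of the underlying idea: fit the light curve as a linear combination of co-trending basis vectors with a shrinkage (ridge-like) penalty on the coefficients, then subtract the fit. This is a simplified stand-in for ARC2's approximate Bayesian treatment, and the variable names (`flux`, `cbvs`, `alpha`) are placeholders, not ARC2's API.

```python
# Sketch of co-trending basis vector (CBV) correction with a shrinkage penalty.
import numpy as np

def cbv_correct(flux, cbvs, alpha=1.0):
    """flux: (n_cadences,) light curve; cbvs: (n_cadences, n_cbv) basis vectors.
    alpha sets the strength of the shrinkage penalty on the CBV coefficients."""
    A = np.column_stack([np.ones_like(flux), cbvs])       # include a constant offset term
    penalty = alpha * np.eye(A.shape[1])
    penalty[0, 0] = 0.0                                    # do not shrink the offset
    coeffs = np.linalg.solve(A.T @ A + penalty, A.T @ flux)
    systematics = A @ coeffs
    return flux - systematics + np.median(flux)            # corrected light curve

rng = np.random.default_rng(3)
cbvs = rng.normal(size=(1000, 4))
flux = 1.0 + cbvs @ rng.normal(size=4) * 0.01 + rng.normal(scale=1e-3, size=1000)
print(cbv_correct(flux, cbvs, alpha=10.0).std())           # residual scatter after correction
```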