
A comparative study of four significance measures for periodicity detection in astronomical surveys

Submitted by Dr Maria Süveges
Publication date: 2015
Research field: Physics
Paper language: English





We study the problem of periodicity detection in massive data sets of photometric or radial velocity time series, as presented by ESA's Gaia mission. Periodicity detection hinges on the estimation of the false alarm probability (FAP) of the extremum of the periodogram of the time series. We consider the problem of its estimation with two main issues in mind. First, for a given number of observations and signal-to-noise ratio, the rate of correct periodicity detections should be constant for all realized cadences of observations regardless of the observational time patterns, in order to avoid sky biases that are difficult to assess. Second, the computational load should be kept feasible even for millions of time series. Using the Gaia case, we compare the $F^M$ method (Paltani 2004, Schwarzenberg-Czerny 2012), the Baluev method (Baluev 2008) and the GEV method (Süveges 2014), as well as a method for the direct estimation of a threshold. Three of the methods involve unknown parameters, which are obtained by fitting a regression-type predictive model using easily obtainable covariates derived from the observational time series. We conclude that the GEV and the Baluev methods both provide good solutions to the issues posed by large-scale processing. The first of these yields the best scientific quality at the price of some moderately costly pre-processing. When this pre-processing is impossible for some reason (e.g. the computational costs are prohibitive or good regression models cannot be constructed), the Baluev method provides a computationally inexpensive alternative with slight biases in regions where the time samplings exhibit strong aliases.
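
As a concrete illustration of the quantity under study, the Baluev FAP bound is available off the shelf in astropy's LombScargle periodogram. The sketch below is a minimal example on synthetic, irregularly sampled data (all sampling and signal parameters are invented for illustration) comparing the analytic Baluev bound with a brute-force bootstrap estimate of the FAP of the periodogram maximum; it is not the Gaia-scale pipeline discussed in the paper.

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)

# Synthetic, irregularly sampled light curve: a weak 7-day sinusoid
# plus Gaussian noise (illustrative values, not Gaia cadences).
t = np.sort(rng.uniform(0.0, 100.0, 120))
y = 0.3 * np.sin(2 * np.pi * t / 7.0) + rng.normal(0.0, 0.5, t.size)

ls = LombScargle(t, y)
freq, power = ls.autopower()
peak = power.max()

# FAP of the periodogram extremum: Baluev's analytic upper bound
# versus a direct bootstrap estimate (slower, but exact for this
# particular time sampling).
fap_baluev = ls.false_alarm_probability(peak, method='baluev')
fap_boot = ls.false_alarm_probability(peak, method='bootstrap')

print(f'peak power      = {peak:.3f}')
print(f'FAP (Baluev)    = {fap_baluev:.3e}')
print(f'FAP (bootstrap) = {fap_boot:.3e}')
```

Here the bootstrap plays the role of the expensive ground truth against which the cheap analytic bound can be checked for a given cadence, which is essentially the trade-off the paper evaluates at survey scale.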


Read also

We present a comprehensive analysis of the performance of noise-reduction ("denoising") algorithms to determine whether they provide advantages in source detection on extragalactic survey images. The methods under analysis are Perona-Malik filtering, Bilateral filter, Total Variation denoising, Structure-texture image decomposition, Non-local means, Wavelets, and Block-matching. We tested the algorithms on simulated images of extragalactic fields with resolution and depth typical of the Hubble, Spitzer, and Euclid Space Telescopes, and of ground-based instruments. After choosing their best internal parameter configurations, we assess their performance as a function of resolution, background level, and image type, also testing their ability to preserve the objects' fluxes and shapes. We analyze in terms of completeness and purity the catalogs extracted after applying denoising algorithms on a simulated Euclid Wide Survey VIS image, and on real H160 (HST) and K-band (HAWK-I) observations of the CANDELS GOODS-South field. Denoising algorithms often outperform the standard approach of filtering with the Point Spread Function (PSF) of the image. Applying Structure-Texture image decomposition, Perona-Malik filtering, the Total Variation method by Chambolle, and Bilateral filtering on the Euclid-VIS image, we obtain catalogs that are both more pure and complete by 0.2 magnitudes than those based on the standard approach. The same result is achieved with the Structure-Texture image decomposition algorithm applied on the H160 image. The advantage of denoising techniques with respect to PSF filtering increases with increasing depth. Moreover, these techniques better preserve the shape of the detected objects with respect to PSF smoothing. Denoising algorithms provide significant improvements in the detection of faint objects and enhance the scientific return of current and future extragalactic surveys.
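
Several of the tested algorithms have reference implementations in scikit-image, so their qualitative behaviour is easy to probe. The sketch below is a minimal comparison on a synthetic cutout with illustrative, untuned parameters, not the best configurations found in the paper; Perona-Malik diffusion and block-matching are omitted because scikit-image does not ship them, and denoise_wavelet additionally requires the PyWavelets package.

```python
import numpy as np
from skimage.restoration import (denoise_bilateral, denoise_nl_means,
                                 denoise_tv_chambolle, denoise_wavelet,
                                 estimate_sigma)

rng = np.random.default_rng(1)

# Toy "survey cutout": a faint Gaussian source on a flat background.
# The positive background keeps all pixel values valid for the
# bilateral filter.
yy, xx = np.mgrid[0:64, 0:64]
truth = 1.0 + 0.4 * np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 3.0 ** 2))
noisy = truth + rng.normal(0.0, 0.1, truth.shape)

sigma = estimate_sigma(noisy)  # noise-level estimate used by NL-means

denoised = {
    'TV (Chambolle)':  denoise_tv_chambolle(noisy, weight=0.1),
    'Bilateral':       denoise_bilateral(noisy, sigma_spatial=2.0),
    'Non-local means': denoise_nl_means(noisy, h=1.15 * sigma, fast_mode=True),
    'Wavelet':         denoise_wavelet(noisy),
}

# With a known truth image, RMSE gives a crude quality ranking;
# the paper instead measures completeness and purity of the
# extracted catalogs.
for name, img in denoised.items():
    rmse = np.sqrt(np.mean((img - truth) ** 2))
    print(f'{name:16s} RMSE vs. truth: {rmse:.4f}')
```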
As we enter the era of large-scale imaging surveys with upcoming telescopes such as LSST and SKA, it is envisaged that the number of known strong gravitational lensing systems will increase dramatically. However, these events are still very rare, and finding them requires the efficient processing of millions of images. To tackle this image processing problem, we present Machine Learning techniques and apply them to the Gravitational Lens Finding Challenge. The Convolutional Neural Networks (CNNs) presented have been re-implemented within a new modular and extendable framework, LEXACTUM. We report an Area Under the Curve (AUC) of 0.9343 and 0.9870, and an execution time of 0.0061s and 0.0594s per image, for the Space and Ground datasets respectively, showing that the results obtained by CNNs are very competitive with conventional methods (such as visual inspection and arc finders) for detecting gravitational lenses.
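
The headline AUC metric is straightforward to reproduce for any candidate classifier with scikit-learn; in the sketch below the labels and scores are synthetic stand-ins, not LEXACTUM outputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(2)

# Hypothetical per-image labels (1 = lens, 0 = non-lens) and classifier
# scores; in the challenge these would come from the trained CNN.
labels = rng.integers(0, 2, size=1000)
scores = np.clip(0.35 + 0.4 * labels + rng.normal(0.0, 0.2, 1000), 0.0, 1.0)

auc = roc_auc_score(labels, scores)
fpr, tpr, _ = roc_curve(labels, scores)  # full ROC curve, if needed for plots
print(f'AUC = {auc:.4f} over {len(labels)} images')
```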
S. Cavazzani, V. Zitelli (2012)
In this paper we have evaluated the amount of available telescope time at four sites of interest for astronomical instrumentation. We use GOES 12 data for the years 2008 and 2009 and a homogeneous methodology, presented in several previous papers, to classify the nights as clear (completely cloud-free), mixed (partially cloud-covered), and covered. Additionally, for the clear nights, we have evaluated the number of satellite-stable nights, which corresponds to the number of ground-based photometric nights, and the number of clear nights corresponding to spectroscopic nights. We have applied this model to two sites in the Northern Hemisphere (San Pedro Martir (SPM), Mexico; Izana, Canary Islands) and to two sites in the Southern Hemisphere (El Leoncito, Argentina; San Antonio de Los Cobres (SAC), Argentina). Over the two years considered, we obtain a mean fraction of cloud-free nights of 68.6% at Izana, 76.0% at SPM, 70.6% at Leoncito and 70.0% at SAC. Among the cloud-free nights, the fraction of stable nights is 62.6% at Izana, 69.6% at SPM, 64.9% at Leoncito, and 59.7% at SAC.
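
The three-way night classification follows directly from the definitions above once per-timestep cloud flags have been derived from the satellite data; a minimal sketch (the GOES 12 flag extraction itself is not reproduced here):

```python
def classify_night(cloud_flags):
    """Classify one night from a sequence of per-timestep cloud flags
    (True = cloudy), using the paper's definitions: clear nights are
    completely cloud-free, covered nights are fully clouded, and
    everything in between is mixed."""
    n_cloudy = sum(bool(f) for f in cloud_flags)
    if n_cloudy == 0:
        return 'clear'
    if n_cloudy == len(cloud_flags):
        return 'covered'
    return 'mixed'

print(classify_night([False] * 10))          # clear
print(classify_night([False, True, False]))  # mixed
print(classify_night([True, True, True]))    # covered
```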
Wide-angle surveys have been an engine for new discoveries throughout the modern history of astronomy, and have been among the most highly cited and scientifically productive observing facilities in recent years. This trend is likely to continue over the next decade, as many of the most important questions in astrophysics are best tackled with massive surveys, often in synergy with each other and in tandem with the more traditional observatories. We argue that these surveys are most productive and have the greatest impact when the data from the surveys are made public in a timely manner. The rise of the survey astronomer is a substantial change in the demographics of our field; one of the most important challenges of the next decade is to find ways to recognize the intellectual contributions of those who work on the infrastructure of surveys (hardware, software, survey planning and operations, and databases/data distribution), and to make career paths to allow them to thrive.
Astronomical images from optical photometric surveys are typically contaminated with transient artifacts such as cosmic rays, satellite trails and scattered light. We have developed and tested an algorithm that removes these artifacts using a deep, artifact-free, static-sky coadd image built up through the median combination of point spread function (PSF) homogenized, overlapping single-epoch images. Transient artifacts are detected and masked in each single-epoch image through comparison with an artifact-free, PSF-matched simulated image that is constructed using the PSF-corrected, model-fitting catalog from the artifact-free coadd image together with the position-variable PSF model of the single-epoch image. This approach works well not only for cleaning single-epoch images with worse seeing than the PSF-homogenized coadd, but also for the traditionally much more challenging problem of cleaning single-epoch images with better seeing. In addition to masking transient artifacts, we have developed an interpolation approach that uses the local PSF and performs well in removing artifacts whose widths are smaller than the PSF full width at half maximum, including cosmic rays, the peaks of saturated stars and bleed trails. We have tested this algorithm on Dark Energy Survey Science Verification data and present performance metrics. More generally, our algorithm can be applied to any survey which images the same part of the sky multiple times.
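
A stripped-down sketch of the core idea, a median coadd as reference and a sigma-thresholded difference as transient mask, is given below; the PSF homogenization, simulated-image construction and local-PSF interpolation of the actual algorithm are replaced by crude stand-ins, and all values and thresholds are illustrative.

```python
import numpy as np

def mask_transients(epoch, reference, noise_sigma, k=5.0):
    """Flag pixels that exceed the reference image by more than
    k * noise_sigma. A crude stand-in for the paper's comparison of a
    single-epoch image against a PSF-matched simulated image."""
    return (epoch - reference) > k * noise_sigma

rng = np.random.default_rng(3)
sigma = 0.1

# Toy stack: five aligned, artifact-free epochs of the same sky patch.
stack = rng.normal(0.0, sigma, size=(5, 64, 64))
coadd = np.median(stack, axis=0)

# Inject a cosmic-ray-like spike into one epoch and detect it.
epoch = stack[0].copy()
epoch[30, 30] += 5.0
mask = mask_transients(epoch, coadd, sigma)
print('flagged pixels:', np.argwhere(mask))

# Replace flagged pixels with the coadd value, a crude stand-in for
# the paper's local-PSF interpolation.
cleaned = np.where(mask, coadd, epoch)
```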