
An Automated Pipeline for the VST Data Log Analysis

Added by Salvatore Savarese
Publication date: 2020
Field: Physics
Language: English





The VST Telescope Control Software continuously logs detailed information about the telescope and instrument operations. Commands, telemetry, errors, weather conditions, and anything else that may be relevant to instrument maintenance and the identification of problem sources are regularly saved. All information is recorded in textual form. These log files are usually examined individually by the observatory personnel for specific issues and for tackling problems that arose during the night. Thus, only a minimal part of the information is normally used for daily maintenance. Nevertheless, analysis of the archived information collected over a long time span can reveal useful trends and statistics about the telescope that would otherwise be overlooked. Given the large size of the archive, manual inspection and handling of the logs is cumbersome. An automated tool with an adequate user interface has been developed to scrape specific entries from the log files, process the data and display it in a comprehensible way. This pipeline has been used to scan the information collected over five years of telescope activity.
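The abstract does not include code; as a rough illustration of the scraping step it describes, the sketch below uses a hypothetical file layout, entry format and field names (logs/<date>.log, an ERROR/WARNING level followed by a message) to show how nightly text logs might be scanned and aggregated into long-term counts.

    import re
    from collections import Counter
    from pathlib import Path

    # Hypothetical layout: one plain-text log file per night, e.g. logs/2019-03-14.log,
    # with entries such as "2019-03-14T03:21:07 ERROR M2 hexapod timeout".
    LOG_DIR = Path("logs")
    ENTRY_RE = re.compile(r"^(?P<time>\S+)\s+(?P<level>ERROR|WARNING)\s+(?P<message>.*)$")

    def scan_night(path):
        """Return a Counter of (level, message) pairs found in one nightly log file."""
        counts = Counter()
        with path.open(errors="replace") as log:
            for line in log:
                match = ENTRY_RE.match(line)
                if match:
                    counts[(match["level"], match["message"].strip())] += 1
        return counts

    # Aggregate over the whole archive to expose long-term trends and statistics.
    totals = Counter()
    for log_file in sorted(LOG_DIR.glob("*.log")):
        totals += scan_night(log_file)

    for (level, message), n in totals.most_common(10):
        print(f"{n:6d}  {level:7s}  {message}")

A real pipeline would of course adapt the regular expression to the actual VST log entry format and feed the aggregated counts to the user interface rather than printing them.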




Related research

Scattered light noise affects the sensitivity of gravitational-wave detectors. The characterization of such noise is needed to mitigate it. The time-varying filter empirical mode decomposition algorithm is suitable for identifying signals with time-dependent frequency such as scattered light noise (or scattering). We present a fully automated pipeline based on the pytvfemd library, a python implementation of the tvf-EMD algorithm, to identify objects inducing scattering in the gravitational-wave channel through their motion. Applying the pipeline to LIGO Livingston O3 data shows that most scattering noise is due to the penultimate mass at the end of the X-arm of the detector (EXPUM), moving in the micro-seismic frequency range.
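For context on why slow mirror motion produces scattering arches: light scattered off a surface moving with velocity v(t) and recombining with the main beam after N bounces produces phase noise whose predicted fringe frequency is f_N(t) = 2N|v(t)|/lambda. The short sketch below evaluates this standard relation for an assumed sinusoidal micro-seismic displacement; it is only background, not the pytvfemd-based pipeline of the paper.

    import numpy as np

    LAMBDA = 1.064e-6  # laser wavelength in metres (Nd:YAG, as used in LIGO)

    def fringe_frequency(displacement, sample_rate, n_bounces=1):
        """Predicted scattered-light fringe frequency f_N(t) = 2 N |v(t)| / lambda,
        where v(t) is the velocity of the scattering surface."""
        velocity = np.gradient(displacement, 1.0 / sample_rate)
        return 2.0 * n_bounces * np.abs(velocity) / LAMBDA

    # Assumed example: 0.15 Hz micro-seismic motion with 5 micron amplitude.
    fs = 16.0
    t = np.arange(0, 60, 1 / fs)
    x = 5e-6 * np.sin(2 * np.pi * 0.15 * t)
    print(fringe_frequency(x, fs).max())  # peak predicted fringe frequency in Hz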
A fully autonomous data reduction pipeline has been developed for FRODOSpec, an optical fibre-fed integral field spectrograph currently in use at the Liverpool Telescope. This paper details the process required for the reduction of data taken using an integral field spectrograph and presents an overview of the computational methods implemented to create the pipeline. Analysis of errors and possible future enhancements are also discussed.
We present SoFiA 2, the fully automated 3D source finding pipeline for the WALLABY extragalactic HI survey with the Australian SKA Pathfinder (ASKAP). SoFiA 2 is a reimplementation of parts of the original SoFiA pipeline in the C programming language and makes use of OpenMP for multi-threading of the most time-critical algorithms. In addition, we have developed a parallel framework called SoFiA-X that allows the processing of large data cubes to be split across multiple computing nodes. As a result of these efforts, SoFiA 2 is substantially faster and comes with a much reduced memory footprint compared to its predecessor, thus allowing the large WALLABY data volumes of hundreds of gigabytes of imaging data per epoch to be processed in real-time. The source code has been made publicly available to the entire community under an open-source licence. Performance tests using mock galaxies injected into genuine ASKAP data suggest that in the absence of significant imaging artefacts SoFiA 2 is capable of achieving near-100% completeness and reliability above an integrated signal-to-noise ratio of about 5-6. We also demonstrate that SoFiA 2 generally recovers the location, integrated flux and w20 line width of galaxies with high accuracy. Other parameters, including the peak flux density and w50 line width, are more strongly biased due to the influence of the noise on the measurement. In addition, very faint galaxies below an integrated signal-to-noise ratio of about 10 may get broken up into multiple components, thus requiring a strategy to identify fragmented sources and ensure that they do not affect the integrity of any scientific analysis based on the SoFiA 2 output.
We present a new method to discriminate periodic from non-periodic irregularly sampled lightcurves. We introduce a periodic kernel and maximize a similarity measure derived from information theory to estimate the periods and a discriminator factor. We tested the method on a dataset containing 100,000 synthetic periodic and non-periodic lightcurves with various periods, amplitudes and shapes generated using a multivariate generative model. We correctly identified periodic and non-periodic lightcurves with a completeness of 90% and a precision of 95% for lightcurves with a signal-to-noise ratio (SNR) larger than 0.5. We characterized the efficiency and reliability of the model using these synthetic lightcurves and applied the method to the EROS-2 dataset. A crucial consideration is the speed at which the method can be executed. Using a hierarchical search and some simplifications of the parameter search, we were able to analyze 32.8 million lightcurves in 18 hours on a cluster of GPGPUs. Using the sensitivity analysis on the synthetic dataset, we infer that 0.42% of the sources in the LMC and 0.61% in the SMC show periodic behavior. The training set, the catalogs and source code are all available at http://timemachine.iic.harvard.edu.
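The information-theoretic periodic kernel itself is not spelled out in this summary; as a generic stand-in for the same search problem, the sketch below performs a brute-force period scan of an irregularly sampled lightcurve using phase dispersion minimisation on toy data. It illustrates period estimation from irregular sampling but is not the authors' correntropy-based measure.

    import numpy as np

    def phase_dispersion(t, mag, period, n_bins=10):
        """Mean within-bin variance of the phase-folded lightcurve (lower = more periodic)."""
        phase = (t / period) % 1.0
        bins = np.digitize(phase, np.linspace(0.0, 1.0, n_bins + 1)) - 1
        variances = [mag[bins == b].var() for b in range(n_bins) if np.any(bins == b)]
        return np.mean(variances)

    def best_period(t, mag, trial_periods):
        """Return the trial period minimising the phase dispersion."""
        scores = [phase_dispersion(t, mag, p) for p in trial_periods]
        return trial_periods[int(np.argmin(scores))]

    # Assumed toy data: irregular sampling of a 2.5-day sinusoidal variable.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 200, 300))
    mag = np.sin(2 * np.pi * t / 2.5) + 0.1 * rng.normal(size=300)
    print(best_period(t, mag, np.linspace(0.5, 10, 2000)))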
The primary task of the 1.26-m telescope jointly operated by the National Astronomical Observatory and Guangzhou University is photometric observation in the g, r, and i bands. A data processing pipeline system was set up with mature software packages, such as IRAF, SExtractor, and SCAMP, to process approximately 5 GB of observational data automatically every day. However, the success ratio was significantly reduced when processing images blurred by telescope tracking errors; this, in turn, significantly constrained the output of the telescope. We propose RAPP, a robust automated photometric pipeline that can correctly process blurred images. Two key techniques are presented in detail: blurred star enhancement and robust image matching. A series of tests proved that RAPP not only achieves a photometric success ratio and precision comparable to those of IRAF but also significantly reduces the data processing load and improves efficiency.