
The LOFAR Known Pulsar Data Pipeline

Posted by Anastasia Alexov
Publication date: 2010
Research field: Physics
Paper language: English





Transient radio phenomena and pulsars are one of six LOFAR Key Science Projects (KSPs). As part of the Transients KSP, the Pulsar Working Group (PWG) has been developing the LOFAR Pulsar Data Pipelines both to study known pulsars and to search for new ones. The pipelines are being developed for the Blue Gene/P (BG/P) supercomputer and a large Linux cluster in order to exploit enormous computational capability (50 Tflops) to process data streams of up to 23 TB/hour. The LOFAR pipeline output will use the Hierarchical Data Format 5 (HDF5) to efficiently store large amounts of numerical data and to manage complex data encompassing a variety of data types across distributed storage and processing architectures. We present an overview of the LOFAR Known Pulsar Data Pipeline, the pulsar beam-formed data format, the status of the pipeline processing, and our future plans for developing the LOFAR Pulsar Search Pipeline. These LOFAR pipelines and software tools are being developed as the next-generation toolset for pulsar processing in radio astronomy.
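To make the data layout concrete, here is a minimal, hedged sketch of how beam-formed pulsar data of this kind might be written and read with h5py; the group names, attributes, and sampling values are illustrative assumptions, not the actual LOFAR HDF5 specification.

```python
import numpy as np
import h5py

# Illustrative sizes only (assumptions, not real LOFAR parameters)
n_subbands = 16
n_samples = 4096

# Fake total-intensity samples for a single station beam
data = np.random.default_rng(0).normal(size=(n_subbands, n_samples)).astype("float32")

with h5py.File("beamformed_example.h5", "w") as f:
    # Hypothetical group hierarchy: one sub-array pointing containing one beam
    beam = f.create_group("SUB_ARRAY_POINTING_000/BEAM_000")
    beam.attrs["TELESCOPE"] = "LOFAR"
    beam.attrs["TARGET"] = "B0329+54"          # example pulsar name, for illustration
    beam.attrs["SAMPLING_TIME_S"] = 5.12e-6    # assumed sampling interval
    dset = beam.create_dataset(
        "STOKES_0",
        data=data,
        chunks=(n_subbands, 1024),  # chunked layout eases partial reads on distributed storage
        compression="gzip",
    )
    dset.attrs["AXIS_NAMES"] = np.array([b"subband", b"time"])

# Read back only the first block of time samples without loading the whole dataset
with h5py.File("beamformed_example.h5", "r") as f:
    block = f["SUB_ARRAY_POINTING_000/BEAM_000/STOKES_0"][:, :1024]
    print(block.shape)
```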




Read also

George Heald (2010)
One of the science drivers of the new Low Frequency Array (LOFAR) is large-area surveys of the low-frequency radio sky. Realizing this goal requires automated processing of the interferometric data, such that fully calibrated images are produced by the system during survey operations. The LOFAR Imaging Pipeline is the tool intended for this purpose, and is now undergoing significant commissioning work. The pipeline is now functional as an automated processing chain. Here we present several recent LOFAR images that have been produced during the still ongoing commissioning period. These early LOFAR images are representative of some of the science goals of the commissioning team members.
We introduce pinta, a pipeline for reducing the upgraded Giant Metre-wave Radio Telescope (uGMRT) raw pulsar timing data, developed for the Indian Pulsar Timing Array experiment. We provide a detailed description of the workflow and usage of pinta, as well as its computational performance and RFI mitigation characteristics. We also discuss a novel and independent determination of the relative time offsets between the different back-end modes of uGMRT and the interpretation of the uGMRT observation frequency settings, and their agreement with results obtained from engineering tests. Further, we demonstrate the capability of pinta to generate data products which can produce high-precision TOAs using PSR J1909-3744 as an example. These results are crucial for performing precision pulsar timing with the uGMRT.
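As a hedged illustration of the TOA step mentioned above (not pinta's own code), the sketch below shells out to PSRCHIVE's pat utility to cross-correlate folded archives against a standard template and print tempo2-format TOAs; the directory, file names, and template are placeholders.

```python
import subprocess
from pathlib import Path

# Hypothetical inputs: folded archives produced by an earlier reduction stage
archives = sorted(Path("reduced").glob("J1909-3744_*.ar"))
template = "J1909-3744.std"  # assumed noise-free standard profile

for archive in archives:
    # PSRCHIVE's 'pat' cross-correlates each profile with the template
    # and writes a tempo2-format TOA line to stdout
    result = subprocess.run(
        ["pat", "-s", template, "-f", "tempo2", str(archive)],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())
```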
The SOXS is a dual-arm spectrograph (UV-VIS & NIR) and AC due to be mounted on the ESO 3.6m NTT in La Silla. Designed to simultaneously cover the optical and NIR wavelength range from 350-2050 nm, the instrument will be dedicated to the study of transient and variable events, with many Target of Opportunity requests expected. The goal of the SOXS Data Reduction pipeline is to use calibration data to remove all instrument signatures from the SOXS scientific data frames for each of the supported instrument modes, convert these data into physical units, and deliver them with their associated error bars to the ESO SAF as Phase 3 compliant science data products, all within 30 minutes. The primary reduced product will be a detrended, wavelength and flux calibrated, telluric corrected 1D spectrum with the UV-VIS and NIR arms stitched together. The pipeline will also generate QC metrics to monitor telescope, instrument and detector health. The pipeline is written in Python 3 and has been built with an agile development philosophy that includes adaptive planning and evolutionary development. The pipeline is to be used by the SOXS consortium and the general user community that may want to perform tailored processing of SOXS data. Test-driven development has been used throughout the build using 'extreme' mock data. We aim for the pipeline to be easy to install and extensively and clearly documented.
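As a hedged, generic illustration of the arm-stitching step (not the SOXS pipeline's actual implementation), the sketch below resamples two calibrated 1D spectra onto a common wavelength grid and averages them where they overlap; the grid step and overlap handling are assumptions.

```python
import numpy as np

def stitch_arms(wave_uvvis, flux_uvvis, wave_nir, flux_nir, step_nm=0.1):
    """Resample both arms onto one wavelength grid and average any overlap."""
    grid = np.arange(min(wave_uvvis.min(), wave_nir.min()),
                     max(wave_uvvis.max(), wave_nir.max()), step_nm)
    # Points outside an arm's coverage are marked NaN and ignored in the mean
    uvvis = np.interp(grid, wave_uvvis, flux_uvvis, left=np.nan, right=np.nan)
    nir = np.interp(grid, wave_nir, flux_nir, left=np.nan, right=np.nan)
    flux = np.nanmean(np.vstack([uvvis, nir]), axis=0)
    return grid, flux

# Toy example: UV-VIS arm covering ~350-1000 nm, NIR arm covering ~800-2050 nm
w_uv = np.linspace(350.0, 1000.0, 2000)
w_ir = np.linspace(800.0, 2050.0, 2000)
grid, flux = stitch_arms(w_uv, np.ones_like(w_uv), w_ir, 2.0 * np.ones_like(w_ir))
print(grid.size, float(flux[0]), float(flux[-1]))
```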
Qiuyu Yu, Zhichen Pan, Lei Qian (2019)
We developed a pulsar search pipeline based on PRESTO (PulsaR Exploration and Search Toolkit). This pipeline runs dedispersion, FFT (Fast Fourier Transform), and acceleration search in process-level parallel to shorten the processing time. With two parallel strategies, the pipeline can greatly shorten the processing time in both normal and acceleration searches. This pipeline was first tested with PMPS (Parkes Multibeam Pulsar Survey) data and discovered two new faint pulsars. It was then successfully used in processing the FAST (Five-hundred-meter Aperture Spherical radio Telescope) drift scan data, with tens of new pulsars discovered so far. The pipeline is CPU-based only and can be easily and quickly deployed on computing nodes for testing purposes or data processing.
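A hedged sketch of the process-level parallelism described above (not the authors' code): each trial dispersion measure is handled by a separate worker process that shells out to PRESTO's prepsubband, realfft, and accelsearch tools; the input file, DM range, flags, and output naming are placeholder assumptions.

```python
import subprocess
from concurrent.futures import ProcessPoolExecutor

RAW_FILE = "observation.fil"               # hypothetical filterbank file
DM_TRIALS = [i * 0.5 for i in range(200)]  # assumed trial DMs: 0 to 99.5 pc cm^-3

def search_one_dm(dm):
    base = f"trial_DM{dm:.2f}"
    # Dedisperse the raw data at this single trial DM
    subprocess.run(["prepsubband", "-lodm", str(dm), "-dmstep", "0.5", "-numdms", "1",
                    "-o", base, RAW_FILE], check=True)
    # FFT the resulting time series, then run the acceleration search on it
    subprocess.run(["realfft", f"{base}_DM{dm:.2f}.dat"], check=True)
    subprocess.run(["accelsearch", "-zmax", "0", f"{base}_DM{dm:.2f}.fft"], check=True)
    return dm

if __name__ == "__main__":
    # Process-level parallelism: one independent PRESTO chain per worker process
    with ProcessPoolExecutor(max_workers=8) as pool:
        for dm in pool.map(search_one_dm, DM_TRIALS):
            print(f"finished DM {dm:.2f}")
```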
D. N. Friedel (2013)
The Combined Array for Millimeter-wave Astronomy (CARMA) data reduction pipeline (CADRE) has been developed to give investigators a first look at a fully reduced set of their data. It runs automatically on all data produced by the telescope as they arrive in the CARMA data archive. CADRE is written in Python and uses Python wrappers for MIRIAD subroutines for direct access to the data. It goes through the typical reduction procedures for radio telescope array data and produces a set of continuum and spectral line maps in both MIRIAD and FITS format. CADRE has been in production for nearly two years and this paper presents the current capabilities and planned development.
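As a hedged illustration of the kind of reduction chain such a pipeline automates (not CADRE's own code, which uses Python wrappers rather than the command line), the sketch below drives a few standard MIRIAD tasks through subprocess and exports the restored image to FITS; the dataset name and task parameters are assumptions.

```python
import subprocess

def run_miriad(task, params):
    """Invoke a MIRIAD task as 'task key=value key=value ...'."""
    args = [task] + [f"{key}={value}" for key, value in params.items()]
    subprocess.run(args, check=True)

VIS = "source.uv"  # hypothetical calibrated visibility dataset

# Grid and invert the visibilities into a dirty map and dirty beam
run_miriad("invert", {"vis": VIS, "map": "source.map", "beam": "source.beam",
                      "imsize": 256, "cell": 1.0, "options": "mfs"})
# Deconvolve the dirty map, then restore with the fitted beam
run_miriad("clean", {"map": "source.map", "beam": "source.beam",
                     "out": "source.cc", "niters": 1000})
run_miriad("restor", {"map": "source.map", "beam": "source.beam",
                      "model": "source.cc", "out": "source.cm"})
# Export the restored continuum image to FITS
run_miriad("fits", {"in": "source.cm", "op": "xyout", "out": "source.fits"})
```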