
Pipeline Processing of VLBI Data

Posted by Cormac Reynolds
Publication date: 2002
Research field: Physics
Paper language: English
Authors: C. Reynolds





As part of an on-going effort to simplify the data analysis path for VLBI experiments, a pipeline procedure has been developed at JIVE to carry out much of the data reduction required for EVN experiments in an automated fashion. This pipeline procedure runs entirely within AIPS, the standard data reduction package used in astronomical VLBI, and is used to provide preliminary calibration of EVN experiments correlated at the EVN MkIV data processor. As well as simplifying the analysis for EVN users, the pipeline reduces the delay in providing information on the data quality to participating telescopes, hence improving the overall performance of the array. A description of this pipeline is presented here.
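The automated reduction described above amounts to running a fixed sequence of calibration steps over each experiment without manual intervention. A minimal sketch of that step-chaining idea, in plain Python (the step names here are illustrative placeholders, not the actual AIPS/EVN task names):

```python
# Minimal sketch of an automated calibration pipeline: each step takes the
# current data state and returns an updated one, mimicking how a pipeline
# applies successive calibration stages to an experiment.

def apply_step(data, step_name):
    """Record that a calibration step has been applied (stand-in for real work)."""
    return {**data, "history": data["history"] + [step_name]}

def run_pipeline(data, steps):
    for step in steps:
        data = apply_step(data, step)
    return data

raw = {"experiment": "EVN_demo", "history": []}
calibrated = run_pipeline(raw, ["load", "amplitude_cal", "fringe_fit", "bandpass"])
```

Because each step only consumes the output of the previous one, the same sequence can be replayed unattended for every correlated experiment, which is the essence of the pipelining described here.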


Read also

Context. The HIFI instrument on the Herschel Space Observatory performed over 9100 astronomical observations, almost 900 of which were calibration observations in the course of the nearly four-year Herschel mission. The data from each observation had to be converted from raw telemetry into calibrated products and were included in the Herschel Science Archive. Aims. The HIFI pipeline was designed to provide robust conversion from raw telemetry into calibrated data throughout all phases of the HIFI mission. Pre-launch laboratory testing was supported, as were routine mission operations. Methods. A modular software design allowed components to be easily added, removed, amended and/or extended as the understanding of the HIFI data developed during and after mission operations. Results. The HIFI pipeline processed data from all HIFI observing modes within the Herschel automated processing environment as well as within an interactive environment. The same software can be used by the general astronomical community to reprocess any standard HIFI observation. The pipeline also recorded the consistency of processing results and provided automated quality reports. Many pipeline modules had been in use since the HIFI pre-launch instrument level testing. Conclusions. Processing in steps facilitated data analysis to discover and address instrument artefacts and uncertainties. The availability of the same pipeline components from pre-launch throughout the mission made for well-understood, tested, and stable processing. A smooth transition from one phase to the next significantly enhanced processing reliability and robustness.
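The modular design described in the Methods section, where components can be added, removed, or amended independently, can be sketched as an ordered registry of processing modules. This is a generic illustration of the pattern, not the actual HIFI pipeline API:

```python
# Sketch of a modular pipeline: components live in an ordered registry so
# individual modules can be added, removed, or replaced without touching
# the rest of the chain. Module names and functions are illustrative.

class ModularPipeline:
    def __init__(self):
        self.modules = []          # ordered list of (name, function)

    def add(self, name, func):
        self.modules.append((name, func))

    def remove(self, name):
        self.modules = [(n, f) for n, f in self.modules if n != name]

    def run(self, value):
        for _, func in self.modules:
            value = func(value)
        return value

pipe = ModularPipeline()
pipe.add("to_physical_units", lambda x: x * 2)   # stand-in for telemetry conversion
pipe.add("calibrate", lambda x: x + 1)           # stand-in for a calibration step
result = pipe.run(10)                            # (10 * 2) + 1 = 21
pipe.remove("calibrate")
result_without_cal = pipe.run(10)                # 10 * 2 = 20
```

Keeping modules independent in this way is what allows the same pipeline to evolve across pre-launch testing and mission operations without destabilizing the rest of the chain.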
Shifan Zuo, Jixia Li, Yichao Li (2020)
The Tianlai project is a 21cm intensity mapping experiment aimed at detecting dark energy by measuring the baryon acoustic oscillation (BAO) features in the large scale structure power spectrum. This experiment provides an opportunity to test the data processing methods for cosmological 21cm signal extraction, which is still a great challenge in current radio astronomy research. The 21cm signal is much weaker than the foregrounds and easily affected by the imperfections in the instrumental responses. Furthermore, processing the large volumes of interferometer data poses a practical challenge. We have developed a data processing pipeline software called `tlpipe` to process the drift scan survey data from the Tianlai experiment. It performs offline data processing tasks such as radio frequency interference (RFI) flagging, array calibration, binning, and map-making, etc. It also includes utility functions needed for the data analysis, such as data selection, transformation, visualization and others. A number of new algorithms are implemented, for example the eigenvector decomposition method for array calibration and the Tikhonov regularization for $m$-mode analysis. In this paper we describe the design and implementation of `tlpipe` and illustrate its functions with some analysis of real data. Finally, we outline directions for future development of this publicly available code.
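Of the tasks listed above, RFI flagging is the most self-contained to illustrate. The sketch below uses a simple median/MAD outlier cut in pure Python; `tlpipe`'s actual flagging algorithms are more sophisticated, and the function name and threshold here are illustrative only:

```python
# Illustrative RFI flagger: flag samples whose deviation from the median
# exceeds a multiple of the median absolute deviation (MAD). Median-based
# statistics are robust against the very outliers being flagged.
from statistics import median

def flag_rfi(samples, threshold=5.0):
    """Return a boolean mask: True where a sample is flagged as RFI."""
    med = median(samples)
    mad = median(abs(s - med) for s in samples)
    return [abs(s - med) > threshold * mad for s in samples]

data = [1.0, 1.1, 0.9, 1.0, 50.0, 1.05, 0.95, 1.0]  # one strong RFI spike
mask = flag_rfi(data)
```

A mean/standard-deviation cut would fail here: the 50.0 spike inflates the standard deviation enough to hide itself, which is why robust statistics are the usual choice for flagging.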
We describe the processing of the PHANGS-ALMA survey and present the PHANGS-ALMA pipeline, a public software package that processes calibrated interferometric and total power data into science-ready data products. PHANGS-ALMA is a large, high-resolution survey of CO J=2-1 emission from nearby galaxies. The observations combine ALMA's main 12-m array, the 7-m array, and total power observations and use mosaics of dozens to hundreds of individual pointings. We describe the processing of the u-v data, imaging and deconvolution, linear mosaicking, combining interferometer and total power data, noise estimation, masking, data product creation, and quality assurance. Our pipeline has a general design and can also be applied to VLA and ALMA observations of other spectral lines and continuum emission. We highlight our recipe for deconvolution of complex spectral line observations, which combines multiscale clean, single scale clean, and automatic mask generation in a way that appears robust and effective. We also emphasize our two-track approach to masking and data product creation. We construct one set of broadly masked data products, which have high completeness but significant contamination by noise, and another set of strictly masked data products, which have high confidence but exclude faint, low signal-to-noise emission. Our quality assurance tests, supported by simulations, demonstrate that 12-m+7-m deconvolved data recover a total flux that is significantly closer to the total power flux than the 7-m deconvolved data alone. In the appendices, we measure the stability of the ALMA total power calibration in PHANGS-ALMA and test the performance of popular short-spacing correction algorithms.
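The two-track masking idea described above can be sketched as two signal-to-noise cuts over the same map: a low cut for completeness and a high cut for confidence. The thresholds below are illustrative placeholders, not the actual PHANGS-ALMA values:

```python
# Sketch of two-track masking: the "broad" mask keeps everything above a low
# S/N cut (high completeness, more noise contamination), while the "strict"
# mask keeps only high-significance pixels (high confidence, misses faint
# emission). Threshold values are illustrative only.

def make_masks(snr_map, broad_cut=2.0, strict_cut=5.0):
    broad = [snr > broad_cut for snr in snr_map]
    strict = [snr > strict_cut for snr in snr_map]
    return broad, strict

snr = [0.5, 2.5, 6.0, 1.0, 4.0, 8.0]   # toy 1-D signal-to-noise "map"
broad, strict = make_masks(snr)

# By construction, every strictly-masked pixel is also in the broad mask.
assert all(b or not s for b, s in zip(broad, strict))
```

Publishing both products lets users trade completeness against contamination per science case, rather than the pipeline fixing one compromise for everyone.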
M. Janssen, C. Goddi, H. Falcke (2019)
Currently, HOPS and AIPS are the primary choices for the time-consuming process of (millimeter) Very Long Baseline Interferometry (VLBI) data calibration. However, for a full end-to-end pipeline, they either lack the ability to perform easily scriptable incremental calibration or do not provide full control over the workflow with the ability to manipulate and edit calibration solutions directly. The Common Astronomy Software Application (CASA) offers all these abilities, together with a secure development future and an intuitive Python interface, which is very attractive for young radio astronomers. Inspired by the recent addition of a global fringe-fitter, the capability to convert FITS-IDI files to measurement sets, and amplitude calibration routines based on ANTAB metadata, we have developed the CASA-based Radboud PIpeline for the Calibration of high Angular Resolution Data (rPICARD). The pipeline will be able to handle data from multiple arrays: EHT, GMVA, VLBA and the EVN in the first release. Polarization and phase-referencing calibration are supported and a spectral line mode will be added in the future. The large bandwidths of future radio observatories call for scalable reduction software. Within CASA, a message passing interface (MPI) implementation is used for parallelization, reducing the total time needed for processing. The most significant gain is obtained for the time-consuming fringe-fitting task, where each scan can be processed in parallel.
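The per-scan parallelism mentioned at the end works because fringe fitting treats each scan independently. The sketch below uses Python's `concurrent.futures` as a stand-in for CASA's MPI implementation, which is not reproduced here; the function and its return values are illustrative placeholders:

```python
# Per-scan parallelism sketch: each scan's fringe fit is independent of the
# others, so scans can be farmed out to a worker pool. rPICARD uses CASA's
# MPI machinery for this; a thread pool illustrates the same structure.
from concurrent.futures import ThreadPoolExecutor

def fringe_fit(scan):
    """Stand-in for a per-scan fringe fit; returns a dummy solution record."""
    return {"scan": scan, "converged": True}

scans = list(range(8))
with ThreadPoolExecutor(max_workers=4) as pool:
    # Executor.map preserves input order, so solutions line up with scans.
    solutions = list(pool.map(fringe_fit, scans))
```

Since no scan waits on another, the wall-clock time scales roughly with the number of scans divided by the number of workers, which is why the fringe-fitting stage sees the largest speed-up.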
Processing of raw data from modern astronomical instruments is nowadays often carried out using dedicated software, so-called pipelines, which are largely run in automated operation. In this paper we describe the data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at ESO's Paranal observatory. This spectrograph is a complex machine: it records data of 1152 separate spatial elements on detectors in its 24 integral field units. Efficiently handling such data requires sophisticated software, a high degree of automation and parallelization. We describe the algorithms of all processing steps that operate on calibrations and science data in detail, and explain how the raw science data are transformed into calibrated datacubes. We finally check the quality of selected procedures and output data products, and demonstrate that the pipeline provides datacubes ready for scientific analysis.
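The instrument geometry quoted above implies 1152 / 24 = 48 spatial elements per integral field unit, and the pipeline's reassembly job is essentially a bookkeeping map from (IFU, slice) pairs to positions in the datacube. A toy sketch of that bookkeeping (the index mapping is purely illustrative, not the MUSE pipeline's actual scheme):

```python
# Bookkeeping sketch for the MUSE geometry: 24 integral field units, each
# contributing 48 spatial elements, give the 1152 elements the pipeline
# must reassemble into a single datacube.

N_IFU = 24
SLICES_PER_IFU = 1152 // N_IFU      # 48 per IFU

def spaxel_id(ifu, slice_):
    """Map an (IFU, slice) pair to a unique spatial-element index."""
    return ifu * SLICES_PER_IFU + slice_

total_elements = N_IFU * SLICES_PER_IFU
```

The real pipeline additionally handles geometric calibration, so the mapping from detector position to sky position is measured rather than computed from a formula like this.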