
Reduction and analysis of MUSE data

Posted by: Johan Richard
Publication date: 2012
Research field: Physics
Paper language: English
Authors: J. Richard





MUSE, the Multi Unit Spectroscopic Explorer, is a 2nd-generation integral-field spectrograph under final assembly, scheduled to see first light at the Very Large Telescope in 2013. By capturing ~90000 optical spectra in a single exposure, MUSE represents a challenge for data reduction and analysis. We summarise here the main features of the Data Reduction System, as well as some of the tools under development by the MUSE consortium and the DAHLIA team to handle the large MUSE datacubes (about 4×10^8 pixels) and recover the original astrophysical signal.
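A back-of-the-envelope sketch of the datacube scale quoted above. The dimensions used here (a 1'×1' field sampled at 0.2"/spaxel, i.e. ~300×300 spaxels, and a 4650–9300 Å range at 1.25 Å/channel) are illustrative assumptions, not figures taken from the abstract:

```python
# Sketch: order-of-magnitude size of a MUSE datacube.
# Assumed nominal dimensions: ~1'x1' field at 0.2"/spaxel,
# 4650-9300 Angstrom sampled at 1.25 Angstrom/channel.
n_spaxels = 300 * 300                      # ~9e4 spaxels, one spectrum each
n_channels = int((9300 - 4650) / 1.25)     # ~3720 wavelength channels
n_pixels = n_spaxels * n_channels          # ~3.3e8 pixels, order 4e8
mem_gb = n_pixels * 4 / 1e9                # one float32 data plane, in GB
print(f"{n_pixels:.1e} pixels, ~{mem_gb:.1f} GB per float32 plane")
```

With a variance plane of the same size, the in-memory footprint roughly doubles, which is why the consortium tools emphasise efficient handling of these cubes.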


Read also

MUSE, a giant integral field spectrograph, is about to become the newest facility instrument at the VLT. It will see first light in February 2014. Here, we summarize the properties of the instrument as built and outline the functionality of the data reduction system, which transforms the raw data, recorded separately by 4k CCDs in the 24 IFUs, into a fully calibrated, scientifically usable datacube. We then describe recent work on the geometrical calibration of the instrument and the testing of the processing pipeline, before concluding with results of the Preliminary Acceptance in Europe and an outlook to the on-sky commissioning.
MUSE (Multi Unit Spectroscopic Explorer) is an integral-field spectrograph mounted on the Very Large Telescope (VLT) in Chile and available to the European community since October 2014. The Centre de Recherche Astrophysique de Lyon has developed dedicated software to help MUSE users analyze the reduced data. In this paper we introduce MPDAF, the MUSE Python Data Analysis Framework, which builds on several well-known Python libraries (Numpy, Scipy, Matplotlib, Astropy) and offers new tools to manipulate MUSE-specific data. We present examples showing how this Python package can be useful for MUSE data analysis.
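As a hedged illustration of the kind of cube manipulations a framework like MPDAF wraps (this is plain Numpy on a toy cube, not MPDAF's actual API), the two most common operations are collapsing the wavelength axis into a white-light image and extracting the spectrum of a single spaxel:

```python
import numpy as np

# Toy datacube with axis order (wavelength, y, x); real MUSE cubes
# are closer to (~3680, ~300, ~300).
cube = np.random.default_rng(0).random((100, 30, 30))

# White-light image: sum over the wavelength axis.
white_light = cube.sum(axis=0)   # shape (30, 30)

# Spectrum of one spaxel: fix the spatial position.
spectrum = cube[:, 15, 15]       # shape (100,)

print(white_light.shape, spectrum.shape)
```

MPDAF's value over raw arrays is that its objects also carry the WCS and wavelength calibration, so such cuts stay physically meaningful.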
The Polarimetric and Helioseismic Imager (PHI) is the first deep-space solar spectropolarimeter, on board the Solar Orbiter (SO) space mission. It faces stringent requirements on science data accuracy, a dynamic environment, and severe limitations on telemetry volume. SO/PHI overcomes these restrictions through on-board instrument calibration and science data reduction, using dedicated firmware in FPGAs. This contribution analyses the accuracy of a data processing pipeline by comparing the results obtained with SO/PHI hardware to a reference from a ground computer. The results show that for the analysed pipeline the error introduced by the firmware implementation is well below the requirements of SO/PHI.
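The validation idea described here can be sketched as follows; the data, the error magnitude, and the requirement threshold are all synthetic placeholders, not SO/PHI values:

```python
import numpy as np

# Sketch: compare firmware output against a ground-computer reference
# and check the worst-case deviation against an accuracy requirement.
rng = np.random.default_rng(1)
reference = rng.random((64, 64))                       # ground pipeline result
firmware = reference + rng.normal(0, 1e-6, (64, 64))   # toy fixed-point error

max_err = np.max(np.abs(firmware - reference))
requirement = 1e-3                                     # illustrative threshold
print(max_err < requirement)
```

In practice such comparisons are run per pipeline step, so that any accuracy loss can be traced to a specific firmware block.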
Multi-messenger astronomy is becoming the key to understanding the Universe from a comprehensive perspective. In most cases the data and the technology are already in place, so it is important to provide an easily accessible package that combines datasets from multiple telescopes at different wavelengths. To achieve this, we are developing a data analysis pipeline that allows data from different instruments to be reduced without detailed knowledge of each observation. Ideally, the specifics of each observation are handled automatically, while the necessary information on how to treat the data in each case is provided by a tutorial included in the program. We first focus on the study of pulsars and their wind nebulae (PWNe) at radio and gamma-ray frequencies, combining time-domain and imaging datasets at two extremes of the electromagnetic spectrum. In addition, the emission of pulsars has the same non-thermal origin at radio and gamma-ray frequencies, and the electron population in PWNe is believed to be the same at these energies. The final goal of the project is to unveil the properties of these objects by tracking their behaviour using all of the available multi-wavelength data.
Processing of raw data from modern astronomical instruments is nowadays often carried out with dedicated software, so-called pipelines, which largely run in automated operation. In this paper we describe the data reduction pipeline of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph operated at ESO's Paranal observatory. This spectrograph is a complex machine: it records data of 1152 separate spatial elements on the detectors of its 24 integral field units. Efficiently handling such data requires sophisticated software, a high degree of automation, and parallelization. We describe in detail the algorithms of all processing steps that operate on calibrations and science data, and explain how the raw science data is transformed into calibrated datacubes. We finally check the quality of selected procedures and output data products, and demonstrate that the pipeline provides datacubes ready for scientific analysis.