
PynPoint: a modular pipeline architecture for processing and analysis of high-contrast imaging data

Published by: Tomas Stolker
Publication date: 2018
Research field: Physics
Paper language: English

The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and PSF subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. We aim to develop a generic and modular data-reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and, in particular, well suited for the 3-5 micron wavelength range, where typically (tens of) thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. PynPoint is written in Python 2.7 and applies various image processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF subtraction tool based on PCA. The architecture of PynPoint has been redesigned with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets. As an example, we processed archival VLT/NACO L' and M' data of beta Pic b and reassessed the planet's brightness and position with an MCMC analysis, and we provide a derivation of the photometric error budget.
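The modular architecture described here, with core functionality (data storage and bookkeeping) decoupled from interchangeable processing modules, can be illustrated with a minimal, self-contained sketch. The class and method names below (DataStore, ProcessingModule, Pipeline, MedianBackgroundSubtractionModule) are hypothetical and are not PynPoint's actual API; the sketch only demonstrates the design pattern, with a toy background-subtraction step standing in for the dedicated modules listed above.

```python
from abc import ABC, abstractmethod

import numpy as np


class DataStore:
    """Central storage that modules read from and write to by tag."""

    def __init__(self):
        self._data = {}

    def get(self, tag):
        return self._data[tag]

    def set(self, tag, images):
        self._data[tag] = np.asarray(images)


class ProcessingModule(ABC):
    """Base class: a processing step only interacts with the data store."""

    def __init__(self, image_in_tag, image_out_tag):
        self.image_in_tag = image_in_tag
        self.image_out_tag = image_out_tag

    @abstractmethod
    def run(self, store):
        ...


class MedianBackgroundSubtractionModule(ProcessingModule):
    """Toy background step: subtract the per-pixel median of the stack."""

    def run(self, store):
        images = store.get(self.image_in_tag)
        store.set(self.image_out_tag, images - np.median(images, axis=0))


class Pipeline:
    """Core object: owns the data store and executes modules in order."""

    def __init__(self):
        self.store = DataStore()
        self.modules = []

    def add_module(self, module):
        self.modules.append(module)

    def run(self):
        for module in self.modules:
            module.run(self.store)


if __name__ == "__main__":
    pipeline = Pipeline()
    # 50 synthetic 64x64 frames with a constant thermal background level.
    pipeline.store.set("science", np.random.normal(100.0, 5.0, (50, 64, 64)))
    pipeline.add_module(
        MedianBackgroundSubtractionModule("science", "science_bg_sub")
    )
    pipeline.run()
    print(pipeline.store.get("science_bg_sub").shape)  # (50, 64, 64)
```

Because every module talks to the central store only through tags, steps such as frame registration or PSF subtraction can be added, reordered, or replaced without touching the core.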




Read also

The objective of the SPHERE Data Center is to optimize the scientific return of SPHERE at the VLT by providing optimized reduction procedures, services to users, and publicly available reduced data. This paper describes our motivation, the implementation of the service (partners, infrastructure, and developments), the services offered, a description of the on-line data, and future developments. The SPHERE Data Center is operational and has already provided reduced data to many observers with good responsiveness. The first public reduced data were made available in 2017. The SPHERE Data Center is gathering strong expertise on SPHERE data and is in a very good position to propose new reduced data in the future, as well as improved reduction procedures.
We describe Algorithms for Calibration, Optimized Registration, and Nulling the Star in Angular Differential Imaging (ACORNS-ADI), a new, parallelized software package to reduce high-contrast imaging data, and its application to data from the SEEDS survey. We implement several new algorithms, including a method to register saturated images, a trimmed mean for combining an image sequence that reduces noise by up to ~20%, and a robust and computationally fast method to compute the sensitivity of a high-contrast observation everywhere in the field of view without introducing artificial sources. We also include a description of image processing steps to remove electronic artifacts specific to Hawaii2-RG detectors like the one used for SEEDS, and a detailed analysis of the Locally Optimized Combination of Images (LOCI) algorithm commonly used to reduce high-contrast imaging data. ACORNS-ADI is written in Python. It is efficient and open-source, and includes several optional features which may improve performance on data from other instruments. ACORNS-ADI requires minimal modification to reduce data from instruments other than HiCIAO. It is freely available for download at www.github.com/t-brandt/acorns-adi under a BSD license.
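As a rough illustration of the trimmed-mean combination mentioned above (this is not ACORNS-ADI code; the function name and the 10% trim fraction are assumptions), one can clip the extreme pixel values across the frame stack before averaging:

```python
import numpy as np
from scipy import stats


def trimmed_mean_combine(frames, trim_fraction=0.1):
    """Combine a (n_frames, ny, nx) cube with a per-pixel trimmed mean.

    trim_fraction is the fraction of frames clipped at *each* end of the
    sorted pixel values; 0.0 reduces to a plain mean.
    """
    frames = np.asarray(frames, dtype=float)
    return stats.trim_mean(frames, proportiontocut=trim_fraction, axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cube = rng.normal(0.0, 1.0, (100, 32, 32))
    cube[5, 10, 10] += 500.0  # simulated cosmic-ray hit in one frame
    combined = trimmed_mean_combine(cube, trim_fraction=0.1)
    print(combined[10, 10])   # close to 0: the outlier was clipped
```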
Combining high-contrast imaging with medium-resolution spectroscopy has been shown to significantly boost the direct detection of exoplanets. HARMONI, one of the first-light instruments to be mounted on ESO's ELT, will be equipped with a single-conjugated adaptive optics system to reach the diffraction limit of the ELT in the H and K bands, a high-contrast module dedicated to exoplanet imaging, and a medium-resolution (up to R = 17 000) optical and near-infrared integral field spectrograph. Combined together, these systems will provide unprecedented contrast limits at separations between 50 and 400 mas. In this paper, we estimate the capabilities of the HARMONI high-contrast module for the direct detection of young giant exoplanets. We use an end-to-end model of the instrument to simulate observations based on realistic observing scenarios and conditions. We analyze these data with the so-called molecule-mapping technique combined with a matched-filter approach, in order to disentangle the companions from the host star and tellurics and to increase the S/N of the planetary signal. We detect planets above 5-sigma at contrasts up to 16 mag and separations down to 75 mas in several spectral configurations of the instrument. We show that molecule mapping allows the detection of companions up to 2.5 mag fainter compared to state-of-the-art high-contrast imaging techniques based on angular differential imaging. We also demonstrate that the performance is not strongly affected by the spectral type of the host star, and that we reach similar sensitivities for the best three quartiles of observing conditions at Armazones, which means that HARMONI could be used for near-critical observations during 60 to 70% of the telescope time at the ELT. Finally, we simulate planets from population-synthesis models to further explore the parameter space that HARMONI and its high-contrast module will soon open.
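A toy sketch of the cross-correlation step behind molecule mapping is given below. It is not the HARMONI simulation pipeline: the function name, the mean subtraction used as a stand-in for stellar and telluric removal, and the toy template in the usage example are illustrative assumptions. The idea is simply that spaxels containing companion light correlate strongly with a Doppler-shifted molecular template.

```python
import numpy as np


def cross_correlation_map(cube, template, rv_grid, wave):
    """cube: (ny, nx, n_wave) residual spectra, template: (n_wave,) model.

    Returns a (ny, nx, n_rv) cross-correlation cube over the radial
    velocities in rv_grid (km/s), assuming small Doppler shifts.
    """
    c_kms = 299792.458
    ny, nx, _ = cube.shape
    residuals = cube - cube.mean(axis=2, keepdims=True)  # crude continuum removal
    template = template - template.mean()
    ccf = np.zeros((ny, nx, len(rv_grid)))
    for k, rv in enumerate(rv_grid):
        # Shift the template to radial velocity rv and normalize it.
        shifted = np.interp(wave, wave * (1.0 + rv / c_kms), template)
        shifted /= np.linalg.norm(shifted)
        ccf[:, :, k] = np.tensordot(residuals, shifted, axes=([2], [0]))
    return ccf


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wave = np.linspace(2.0, 2.4, 400)                  # wavelengths (micron)
    template = np.sin(2000.0 * wave) ** 2              # toy "molecular" lines
    cube = rng.normal(0.0, 1.0, (20, 20, wave.size))
    cube[12, 8] += 2.0 * (template - template.mean())  # hidden companion spaxel
    ccf = cross_correlation_map(cube, template, np.linspace(-100.0, 100.0, 21), wave)
    print(np.unravel_index(np.argmax(ccf.max(axis=2)), (20, 20)))  # ~(12, 8)
```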
A. Vigan, C. Gry, G. Salter (2015)
Sirius has always attracted a lot of scientific interest, especially after the discovery of a companion white dwarf at the end of the 19th century. Very early on, the existence of a potential third body was put forward to explain some of the observed properties of the system. We present new coronagraphic observations obtained with VLT/SPHERE that explore, for the very first time, the innermost regions of the system down to 0.2" (0.5 AU) from Sirius A. Our observations cover the near-infrared from 0.95 to 2.3 $\mu$m and they offer the best on-sky contrast ever reached at these angular separations. After detailing the steps of our SPHERE/IRDIFS data analysis, we present a robust method to derive detection limits for multi-spectral data from high-contrast imagers and spectrographs. In terms of raw performance, we report contrasts of 14.3 mag at 0.2", ~16.3 mag in the 0.4"-1.0" range, and down to 19 mag at 3.7". In physical units, our observations are sensitive to giant planets down to 11 $M_{Jup}$ at 0.5 AU, 6-7 $M_{Jup}$ in the 1-2 AU range, and ~4 $M_{Jup}$ at 10 AU. Despite the exceptional sensitivity of our observations, we do not report the detection of additional companions around Sirius A. Using a Monte Carlo orbital analysis, we show that we can reject, with about 50% probability, the existence of an 8 $M_{Jup}$ planet orbiting at 1 AU. In addition to the results presented in the paper, we provide our SPHERE/IFS data reduction pipeline at http://people.lam.fr/vigan.arthur/ under the MIT license.
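The Monte Carlo orbital argument can be sketched as follows. This is not the analysis code from the paper: the detection-limit curve, the assumption of circular orbits, and the small-angle conversion are hypothetical placeholders, loosely shaped after the numbers quoted in the abstract (Sirius is at about 2.64 pc, so 0.2" corresponds to roughly 0.5 AU).

```python
import numpy as np


def rejection_fraction(a_au, distance_pc, limit_sep_arcsec, limit_mass_mjup,
                       planet_mass_mjup, n_draws=100_000, seed=0):
    """Fraction of random circular orbits whose projected separation places a
    planet of planet_mass_mjup above the (hypothetical) mass detection limit."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_draws)   # orbital phase
    cos_i = rng.uniform(-1.0, 1.0, n_draws)          # isotropic inclination
    rho_au = a_au * np.sqrt(np.cos(theta) ** 2 + (cos_i * np.sin(theta)) ** 2)
    rho_arcsec = rho_au / distance_pc                # small-angle approximation
    limit_at_rho = np.interp(rho_arcsec, limit_sep_arcsec, limit_mass_mjup)
    return np.mean(planet_mass_mjup > limit_at_rho)


if __name__ == "__main__":
    # Hypothetical mass-limit curve, loosely shaped after the values quoted
    # above (11 Mjup near 0.2", down to ~4 Mjup at large separation).
    sep = np.array([0.1, 0.2, 0.4, 1.0, 3.7])        # arcsec
    mass = np.array([20.0, 11.0, 7.0, 6.0, 4.0])     # Mjup
    frac = rejection_fraction(a_au=1.0, distance_pc=2.64,
                              limit_sep_arcsec=sep, limit_mass_mjup=mass,
                              planet_mass_mjup=8.0)
    print(f"fraction of random orbits rejected: {frac:.2f}")
```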
We describe the processing of the PHANGS-ALMA survey and present the PHANGS-ALMA pipeline, a public software package that processes calibrated interferometric and total-power data into science-ready data products. PHANGS-ALMA is a large, high-resolution survey of CO J=2-1 emission from nearby galaxies. The observations combine ALMA's main 12-m array, the 7-m array, and total-power observations, and use mosaics of dozens to hundreds of individual pointings. We describe the processing of the u-v data, imaging and deconvolution, linear mosaicking, combining interferometer and total-power data, noise estimation, masking, data product creation, and quality assurance. Our pipeline has a general design and can also be applied to VLA and ALMA observations of other spectral lines and continuum emission. We highlight our recipe for deconvolution of complex spectral-line observations, which combines multi-scale clean, single-scale clean, and automatic mask generation in a way that appears robust and effective. We also emphasize our two-track approach to masking and data product creation. We construct one set of broadly masked data products, which have high completeness but significant contamination by noise, and another set of strictly masked data products, which have high confidence but exclude faint, low signal-to-noise emission. Our quality-assurance tests, supported by simulations, demonstrate that the 12-m+7-m deconvolved data recover a total flux that is significantly closer to the total-power flux than the 7-m deconvolved data alone. In the appendices, we measure the stability of the ALMA total-power calibration in PHANGS-ALMA and test the performance of popular short-spacing correction algorithms.
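The two-track masking strategy can be illustrated with a minimal sketch (not the PHANGS-ALMA pipeline; the thresholds and function name are assumptions, and real masks also involve spatial and velocity dilation that is omitted here): a strict, high-threshold mask gives high-confidence detections, while a broad, low-threshold mask trades contamination for completeness.

```python
import numpy as np


def two_track_masks(cube, noise, strict_snr=4.0, broad_snr=2.0):
    """Return (strict, broad) boolean masks for a cube and its noise map.

    The strict mask favors confidence (few false positives); the broad mask
    favors completeness (keeps faint emission at the cost of noise leakage).
    """
    snr = cube / noise
    return snr > strict_snr, snr > broad_snr


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cube = rng.normal(0.0, 1.0, (40, 64, 64))       # (velocity, y, x)
    cube[18:22, 30:34, 30:34] += 5.0                # injected emission blob
    strict, broad = two_track_masks(cube, noise=np.ones_like(cube))
    print(strict.sum(), broad.sum())                # broad mask contains strict
```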