
A fully automated data reduction pipeline for the FRODOSpec integral field spectrograph

Added by Rob Barnsley
Publication date: 2011
Field: Physics
Language: English





A fully autonomous data reduction pipeline has been developed for FRODOSpec, an optical fibre-fed integral field spectrograph currently in use at the Liverpool Telescope. This paper details the process required to reduce data taken with an integral field spectrograph and presents an overview of the computational methods implemented to create the pipeline. An analysis of errors and possible future enhancements is also discussed.
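At its core, reducing fibre-fed IFS data involves tracing each fibre's tramline across the detector and collapsing the counts around that trace into a one-dimensional spectrum. As a rough illustration of the collapse step (not FRODOSpec's actual code; the function and array names are invented), a windowed sum along a known tramline might look like:

```python
import numpy as np

def extract_fibre(frame, tram_centre, half_width=2):
    """Sum detector counts across a fibre's tramline (simplified sketch).

    frame: 2-D detector image (rows = spatial, columns = dispersion).
    tram_centre: per-column integer row index of the fibre centre.
    half_width: half-size of the summation window in pixels.
    """
    ny, nx = frame.shape
    spectrum = np.empty(nx)
    for x in range(nx):
        lo = max(0, tram_centre[x] - half_width)
        hi = min(ny, tram_centre[x] + half_width + 1)
        spectrum[x] = frame[lo:hi, x].sum()
    return spectrum
```

A production pipeline would instead fit the cross-dispersion profile and weight pixels accordingly, but the windowed sum captures the basic geometry of turning a 2-D fibre trace into a 1-D spectrum.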




Current time domain facilities are discovering hundreds of new galactic and extra-galactic transients every week. Classifying the ever-increasing number of transients is challenging, yet crucial to further our understanding of their nature, discover new classes, or ensure sample purity, for instance for Type Ia supernova cosmology. The Zwicky Transient Facility is one example of such a survey. In addition, it has a dedicated very-low-resolution spectrograph, the SEDMachine, operating on the Palomar 60-inch telescope. This spectrograph's primary aim is object classification. In practice, most, if not all, transients of interest brighter than ~19 mag are typed. This corresponds to approximately 10 to 15 targets a night. In this paper, we present pysedm, a fully automated pipeline for the SEDMachine. This pipeline has been designed to be fast, robust, stable and extremely flexible. pysedm enables the fully automated spectral extraction of a targeted point source in less than 5 minutes after the end of the exposure. The spectral color calibration is accurate at the few-percent level. In the 19 weeks since pysedm entered production in early August 2018, we have classified, among other objects, about 400 Type Ia supernovae and 140 Type II supernovae. We conclude that low-resolution, fully automated spectrographs such as the SEDMachine, with pysedm installed, on 2-m class telescopes in the southern hemisphere could allow us to automatically and simultaneously type and obtain a redshift for most (if not all) bright transients detected by LSST within z<0.2, notably potentially all Type Ia supernovae. In comparison to the current SEDM design, this would require higher spectral resolution (R~1000) and slightly improved throughput. With this perspective in mind, pysedm has been designed to be easily adaptable to any IFU-like spectrograph (see https://github.com/MickaelRigault/pysedm).
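The automated point-source extraction that such a pipeline performs can be illustrated, in highly simplified form, by summing each wavelength slice of the reconstructed cube within an aperture around the target. The function below is a hypothetical sketch, not the pysedm API:

```python
import numpy as np

def extract_point_source(cube, x0, y0, radius=2.0):
    """Aperture-sum a point source at (x0, y0) in every wavelength slice.

    cube: array shaped (n_wave, ny, nx); x0, y0: source position in pixels.
    Returns a 1-D spectrum of summed counts per wavelength channel.
    """
    n_wave, ny, nx = cube.shape
    yy, xx = np.mgrid[:ny, :nx]
    mask = (xx - x0) ** 2 + (yy - y0) ** 2 <= radius ** 2
    return cube[:, mask].sum(axis=1)
```

A real extraction would model the wavelength-dependent PSF and subtract the sky/host background rather than use a fixed circular aperture, but the per-slice collapse of a cube into a spectrum is the same.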
OSIRIS is a near-infrared (1.0--2.4 $\mu$m) integral field spectrograph operating behind the adaptive optics system at Keck Observatory, and is one of the first lenslet-based integral field spectrographs. Since its commissioning in 2005, it has been a productive instrument, producing nearly half the laser guide star adaptive optics (LGS AO) papers on Keck. The complexity of its raw data format necessitated a custom data reduction pipeline (DRP), delivered with the instrument, in order to iteratively assign flux in overlapping spectra to the proper spatial and spectral locations in a data cube. Other than bug fixes and updates required for hardware upgrades, the bulk of the DRP has not been updated since initial instrument commissioning. We report on the first major comprehensive characterization of the DRP using on-sky and calibration data. We also detail improvements to the DRP, including characterization of the flux assignment algorithm; exploration of spatial rippling in the reduced data cubes; and improvements to several calibration files, including the rectification matrix, the bad pixel mask, and the wavelength solution. We present lessons learned from over a decade of OSIRIS data reduction that are relevant to the next generation of integral field spectrograph hardware and data reduction software design.
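The idea of iteratively apportioning detector counts among overlapping spectra can be sketched with a multiplicative (Richardson-Lucy style) update. This is an illustration of the concept only, not the OSIRIS DRP's actual algorithm, and all names are invented:

```python
import numpy as np

def assign_flux(detector, profiles, n_iter=50):
    """Iteratively apportion detector counts among overlapping spectra.

    detector: 1-D array of detector counts.
    profiles: (n_spec, n_pix) response of each spectrum on the detector,
              with each row normalized to sum to 1.
    Returns the estimated flux of each spectrum.
    """
    n_spec = profiles.shape[0]
    flux = np.full(n_spec, detector.sum() / n_spec)  # flat starting guess
    for _ in range(n_iter):
        model = profiles.T @ flux  # predicted detector counts
        # ratio of data to model, defined as 1 where the model is empty
        ratio = np.divide(detector, model,
                          out=np.ones_like(model), where=model > 0)
        flux *= (profiles @ ratio) / profiles.sum(axis=1)
    return flux
```

When two profiles do not overlap, the update converges immediately to the exact per-spectrum sums; with overlap, it iterates toward a maximum-likelihood apportionment.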
Neil Zimmerman (2011)
Project 1640 is a high-contrast near-infrared instrument probing the vicinities of nearby stars through the unique combination of an integral field spectrograph with a Lyot coronagraph and a high-order adaptive optics system. The extraordinary data reduction demands, similar to those which several new exoplanet imaging instruments will face in the near future, have been met by the novel software algorithms described herein. The Project 1640 Data Cube Extraction Pipeline (PCXP) automates the translation of $3.8\times10^4$ closely packed, coarsely sampled spectra to a data cube. We implement a robust empirical model of the spectrograph focal plane geometry to register the detector image at sub-pixel precision, and map the cube extraction. We demonstrate our ability to accurately retrieve source spectra based on an observation of Saturn's moon Titan.
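Sub-pixel registration of a detector image can be illustrated in one dimension by locating the cross-correlation peak and refining it with parabolic interpolation. This is a generic sketch of the technique, not the PCXP implementation:

```python
import numpy as np

def subpixel_shift_1d(a, b):
    """Estimate the sub-pixel shift of signal b relative to signal a.

    Finds the peak of the full cross-correlation, then refines it by
    fitting a parabola through the peak and its two neighbours.
    Positive result means b is shifted to the right of a.
    """
    corr = np.correlate(b, a, mode="full")
    k = int(np.argmax(corr))
    y0, y1, y2 = corr[k - 1], corr[k], corr[k + 1]
    denom = y0 - 2.0 * y1 + y2
    frac = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return (k + frac) - (len(a) - 1)
```

Real focal-plane registration is two-dimensional and fits a global geometric model across many spectra, but the peak-interpolation idea is the same.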
We present the data reduction pipeline for CHARIS, a high-contrast integral-field spectrograph for the Subaru Telescope. The pipeline constructs a ramp from the raw reads using the measured nonlinear pixel response, and reconstructs the data cube using one of three extraction algorithms: aperture photometry, optimal extraction, or $\chi^2$ fitting. We measure and apply both a detector flatfield and a lenslet flatfield and reconstruct the wavelength- and position-dependent lenslet point-spread function (PSF) from images taken with a tunable laser. We use these measured PSFs to implement a $\chi^2$-based extraction of the data cube, with typical residuals of ~5% due to imperfect models of the undersampled lenslet PSFs. The full two-dimensional residual of the $\chi^2$ extraction allows us to model and remove correlated read noise, dramatically improving CHARIS performance. The $\chi^2$ extraction produces a data cube that has been deconvolved with the line-spread function, and never performs any interpolations of either the data or the individual lenslet spectra. The extracted data cube also includes uncertainties for each spatial and spectral measurement. The CHARIS software is parallelized, written in Python and Cython, and freely available on GitHub with a separate documentation page. Astrometric and spectrophotometric calibrations of the data cubes and PSF subtraction will be treated in a forthcoming paper.
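A $\chi^2$ extraction of this kind amounts to fitting each microspectrum as a linear combination of measured PSF templates, weighted by the pixel noise. The sketch below uses weighted linear least squares with invented array names; the real pipeline additionally models the full two-dimensional residual:

```python
import numpy as np

def chi2_extract(stamp, psf_templates, ivar):
    """Fit a lenslet stamp as a linear combination of PSF templates.

    stamp: flattened pixel values around one microspectrum.
    psf_templates: (n_wave, n_pix) template PSFs, one per wavelength channel.
    ivar: per-pixel inverse variance used to weight the fit.
    Returns the best-fit flux in each wavelength channel.
    """
    w = np.sqrt(ivar)
    A = psf_templates.T * w[:, None]   # weighted design matrix
    b = stamp * w                      # weighted data vector
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs
```

Minimizing $\chi^2 = \sum_i \mathrm{ivar}_i\,(d_i - \sum_\lambda f_\lambda P_{\lambda i})^2$ over the fluxes $f_\lambda$ is exactly this weighted least-squares problem, which is why no interpolation of the data is ever needed.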
We present the data reduction pipeline, MEAD, for the Arizona Lenslets for Exoplanet Spectroscopy (ALES), the first thermal infrared integral field spectrograph designed for high-contrast imaging. ALES is an upgrade of LMIRCam, the $1$-$5\,\mu$m imaging camera for the Large Binocular Telescope, capable of observing astronomical objects in the thermal infrared ($3$-$5\,\mu$m) to produce simultaneous spatial and spectral data cubes. The pipeline is currently designed to perform $L$-band ($2.8$-$4.2\,\mu$m) data cube reconstruction, relying on methods used extensively by current near-infrared integral field spectrographs. ALES data cube reconstruction uses an optimal extraction method for each spectrum. The calibration unit comprises a thermal infrared source, a monochromator and an optical diffuser, designed to inject specific wavelengths of light into LBTI to evenly illuminate the pupil plane and the ALES lenslet array with monochromatic light. Not only does the calibration unit facilitate wavelength calibration for ALES and LBTI, but it also provides images of monochromatic point spread functions (PSFs). A linear combination of these monochromatic PSFs can be optimized to fit each spectrum in the least-squares sense via $\chi^2$ fitting.
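Horne-style optimal extraction, the method mentioned above, weights each pixel by its spatial profile and inverse variance so that low-signal pixels do not degrade the estimate. A minimal one-column sketch with invented names, not MEAD's implementation:

```python
import numpy as np

def optimal_extract(column, profile, var):
    """Optimally extract the flux at one wavelength.

    column: pixel counts across the spatial direction at one wavelength.
    profile: normalized spatial profile of the spectrum (sums to 1).
    var: per-pixel variance.
    Returns the variance-weighted flux estimate.
    """
    w = profile / var
    return (w * column).sum() / (w * profile).sum()
```

For a noiseless column equal to flux times the profile, the estimator returns the flux exactly; with noise, it attains a lower variance than a plain sum because each pixel is weighted by profile/variance.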
