
AutoSpec: Fast Automated Spectral Extraction Software for IFU Datacubes

Published by Alex Griffiths
Publication date: 2018
Research field: Physics
Paper language: English





With the ever-growing popularity of integral field unit (IFU) spectroscopy, countless observations are being performed over multiple-object systems such as blank fields and galaxy clusters. As a result, an increasing amount of time is being spent extracting one-dimensional object spectra from large three-dimensional datacubes. However, a great deal of information available within these datacubes is overlooked in favor of photometrically based spatial information. Here we present a novel yet simple approach to optimal source identification that utilizes the wealth of information available within an IFU datacube rather than relying on ancillary imaging. Through the application of these techniques, we show that we are able to obtain object spectra comparable to deep photometry-weighted extractions without the need for ancillary imaging. Further, implementing our custom-designed algorithms can improve the signal-to-noise ratio of extracted spectra and successfully deblend sources from nearby contaminants. This will be a critical tool for future IFU observations of blank and deep fields, especially over large areas where automation is necessary. We implement these techniques in the Python-based spectral extraction software AutoSpec, which is available via GitHub at https://github.com/a-griffiths/AutoSpec and Zenodo at https://doi.org/10.5281/zenodo.1305848
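The profile-weighted extraction idea underlying this kind of software can be sketched in a few lines of NumPy. This is a generic, hypothetical illustration (a least-squares amplitude fit to a known spatial profile, in the spirit of optimal extraction), not AutoSpec's actual implementation; AutoSpec derives its weights from the datacube itself.

```python
import numpy as np

# Hypothetical sketch: extract a 1D spectrum from a (wavelength, y, x)
# datacube by fitting a known spatial profile at each wavelength.
rng = np.random.default_rng(0)
n_wave, ny, nx = 100, 5, 5
cube = rng.normal(0.0, 0.1, (n_wave, ny, nx))  # background noise

# Inject a fake source: Gaussian spatial profile, flat unit spectrum.
yy, xx = np.mgrid[0:ny, 0:nx]
profile = np.exp(-0.5 * ((yy - 2) ** 2 + (xx - 2) ** 2))
cube += profile[None, :, :]

# Least-squares fit of the profile amplitude at each wavelength:
# f = sum(P * D) / sum(P^2), assuming uniform pixel variance.
spectrum = (cube * profile[None]).sum(axis=(1, 2)) / (profile ** 2).sum()
```

Because the injected source has unit amplitude at every wavelength, the recovered spectrum scatters around 1, with the profile weighting suppressing noise relative to a plain aperture sum.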




Read also

The main distinguishing characteristic of VIMOS is its very high multiplex capability: in MOS mode up to 800 spectra can be acquired simultaneously, while the Integral Field Unit produces 6400 spectra, providing integral field spectroscopy of an area approximately 1x1 arcmin in size. To successfully exploit the capabilities of such an instrument, it is necessary to expedite as much as possible the analysis of the very large volume of data it will produce, automating almost completely the basic data reduction and the related bookkeeping process. The VIMOS Data Reduction Software (DRS) has been designed specifically to satisfy these two requirements. Complete automation is achieved using a series of auxiliary tables that store all the input information needed by the data reduction procedures and all the output information that they produce. We expect to achieve a satisfactory data reduction for more than 90% of the input spectra, while some level of human intervention might be required for a small fraction of them to complete the data reduction. The DRS procedures can be used as a stand-alone package, but are also being incorporated within the VIMOS pipeline under development at the European Southern Observatory.
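The auxiliary-table bookkeeping described above can be pictured as follows. The table schema, field names, and step below are invented for illustration only, not the DRS's actual format:

```python
# Hypothetical sketch of table-driven pipeline bookkeeping: each
# reduction step reads its inputs from an auxiliary table and writes
# its outputs back, so the pipeline can run unattended and be audited.
aux_table = [
    {"spectrum_id": 1, "slit": "A", "status": "raw"},
    {"spectrum_id": 2, "slit": "B", "status": "raw"},
]

def reduce_spectrum(row):
    # Stand-in for a real reduction procedure: record the result in the
    # same table so later steps (and humans) can trace what was done.
    row["status"] = "reduced"
    row["output"] = f"spec_{row['spectrum_id']}.fits"
    return row

aux_table = [reduce_spectrum(r) for r in aux_table]
```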
In a companion paper we have presented many products derived from the application of the spectral synthesis code STARLIGHT to datacubes from the CALIFA survey, including 2D maps of stellar population properties and 1D averages in the temporal and spatial dimensions. Here we evaluate the uncertainties in these products. Uncertainties due to noise and spectral shape calibration errors and to the synthesis method are investigated by means of a suite of simulations based on 1638 CALIFA spectra for NGC 2916, with perturbation amplitudes gauged in terms of the expected errors. A separate study was conducted to assess uncertainties related to the choice of evolutionary synthesis models. We compare results obtained with the Bruzual & Charlot models, a preliminary update of them, and a combination of spectra derived from the Granada and MILES models. About 100k CALIFA spectra are used in this comparison. Noise and shape-related errors at the level expected for CALIFA propagate to 0.10-0.15 dex uncertainties in stellar masses, mean ages and metallicities. Uncertainties in A_V increase from 0.06 mag in the case of random noise to 0.16 mag for shape errors. Higher order products such as SFHs are more uncertain, but still relatively stable. Due to the large-number statistics of datacubes, spatial averaging reduces uncertainties while preserving information on the history and structure of stellar populations. Radial profiles of global properties, as well as SFHs averaged over different regions, are much more stable than for individual spaxels. Uncertainties related to the choice of base models are larger than those associated with data and method. Differences in mean age, mass and metallicity are ~ 0.15 to 0.25 dex, and 0.1 mag in A_V. Spectral residuals are ~ 1% on average, but with systematic features of up to 4%. The origin of these features is discussed. (Abridged)
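The perturbation approach described above reduces to a standard Monte Carlo recipe: add random noise at the expected error level to a spectrum many times, re-measure a derived quantity each time, and take the scatter as its uncertainty. This toy sketch uses a trivial mean-flux "measurement" in place of a STARLIGHT fit; all numbers are illustrative:

```python
import numpy as np

# Monte Carlo uncertainty estimate via noise perturbations (generic
# sketch, not the paper's STARLIGHT pipeline).
rng = np.random.default_rng(1)
spectrum = np.ones(500)   # idealized flat spectrum
noise_level = 0.05        # assumed per-pixel error

estimates = []
for _ in range(200):
    perturbed = spectrum + rng.normal(0.0, noise_level, spectrum.size)
    estimates.append(perturbed.mean())  # the "derived property"

uncertainty = np.std(estimates)
# For this estimator the scatter should approach
# noise_level / sqrt(n_pixels) = 0.05 / sqrt(500).
```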
F. Tarsitano 2021
In this work we explore the possibility of applying machine learning methods designed for one-dimensional problems to the task of galaxy image classification. The algorithms used for image classification typically rely on multiple costly steps, such as the Point Spread Function (PSF) deconvolution and the training and application of complex Convolutional Neural Networks (CNN) of thousands or even millions of parameters. In our approach, we extract features from the galaxy images by analysing the elliptical isophotes in their light distribution and collect the information in a sequence. The sequences obtained with this method present definite features allowing a direct distinction between galaxy types, as opposed to smooth Sersic profiles. Then, we train and classify the sequences with machine learning algorithms, designed through the platform Modulos AutoML, and study how they optimize the classification task. As a demonstration of this method, we use the second public release of the Dark Energy Survey (DES DR2). We show that by applying it to this sample we are able to successfully distinguish between early-type and late-type galaxies, for images with signal-to-noise ratio greater than 300. This yields an accuracy of 86% for the early-type galaxies and 93% for the late-type galaxies, which is on par with most contemporary automated image classification approaches. Our novel method allows for galaxy images to be accurately classified and is faster than other approaches. Data dimensionality reduction also implies a significant lowering in computational cost. In the perspective of future data sets obtained with e.g. Euclid and the Vera Rubin Observatory (VRO), this work represents a path towards using a well-tested and widely used platform from industry in efficiently tackling galaxy classification problems at the petabyte scale.
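The core idea of reducing a 2D image to a 1D sequence can be illustrated with a simple azimuthally averaged radial profile in place of the paper's elliptical-isophote analysis. All function names, profile scales, and the concentration threshold here are invented; real early/late-type morphologies are far noisier:

```python
import numpy as np

def radial_profile(image):
    # Collapse a 2D image into a 1D sequence: mean intensity in integer
    # radius bins around the image center.
    ny, nx = image.shape
    yy, xx = np.indices(image.shape)
    r = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)
    total = np.bincount(r.ravel(), image.ravel())
    count = np.bincount(r.ravel())
    return total / np.maximum(count, 1)

def make_galaxy(scale, size=41):
    # Toy exponential light profile; small scale = concentrated light.
    yy, xx = np.indices((size, size))
    r = np.hypot(yy - size // 2, xx - size // 2)
    return np.exp(-r / scale)

def concentration(profile):
    # A simple 1D feature: fraction of inner light over a wider aperture.
    return profile[:3].sum() / profile[:15].sum()

early = make_galaxy(scale=2.0)  # concentrated, early-type-like
late = make_galaxy(scale=8.0)   # extended, disk-like
```

A classifier (AutoML or otherwise) then operates on such sequences instead of pixel grids, which is where the dimensionality reduction and speed-up come from.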
We present a performance test of the Point Spread Function deconvolution algorithm applied to astronomical Integral Field Unit (IFU) Spectroscopy data for restoration of galaxy kinematics. We deconvolve the IFU data by applying the Lucy-Richardson algorithm to the 2D image slice at each wavelength. We demonstrate that the algorithm can effectively recover the true stellar kinematics of the galaxy, by using mock IFU data with diverse combinations of surface brightness profile, S/N, line-of-sight geometry and Line-Of-Sight Velocity Distribution (LOSVD). In addition, we show that the proxy of the spin parameter $\lambda_{R_e}$ can be accurately measured from the deconvolved IFU data. We apply the deconvolution algorithm to the actual SDSS-IV MaNGA IFU survey data. The 2D LOSVD, geometry and $\lambda_{R_e}$ measured from the deconvolved MaNGA IFU data exhibit noticeable differences compared to those measured from the original IFU data. The method can be applied to any other regular-grid IFU data to extract the PSF-deconvolved spatial information.
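The Richardson-Lucy iteration itself is compact. The sketch below implements it in 1D with plain NumPy to keep the example short; the paper applies the 2D analogue to each wavelength slice of the cube. The toy signal, PSF width, and iteration count are arbitrary choices for illustration:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    # Standard multiplicative Richardson-Lucy update:
    # estimate <- estimate * ((observed / (estimate (x) psf)) (x) psf_flipped)
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        conv = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(conv, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy test: blur two point sources with a Gaussian PSF, then sharpen.
truth = np.zeros(64)
truth[[20, 40]] = 1.0
psf = np.exp(-0.5 * (np.arange(-7, 8) / 2.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf)
```

After a few tens of iterations the restored signal re-concentrates flux at the true source positions, which is the same effect the paper exploits to sharpen kinematic maps slice by slice.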
We present the Umbrella software suite for asteroid detection, validation, identification and reporting. The current core of Umbrella is an open-source modular library, called Umbrella2, that includes algorithms and interfaces for all steps of the processing pipeline, including a novel detection algorithm for faint trails. Building on the library, we have also implemented a detection pipeline accessible both as a desktop program (ViaNearby) and via a web server (Webrella), which we have successfully used in near real-time data reduction of a few asteroid surveys on the Wide Field Camera of the Isaac Newton Telescope. In this paper we describe the library, focusing on the interfaces and algorithms available, and we present the results obtained with the desktop version on a set of well-curated fields used by the EURONEAR project as an asteroid detection benchmark.