
The PAU Survey: Background light estimation with deep learning techniques

Publication date: 2019
Field: Physics
Language: English





In any imaging survey, measuring the astronomical background light accurately is crucial to obtaining good photometry. This paper introduces BKGnet, a deep neural network that predicts the background and its associated error. BKGnet has been developed for data from the Physics of the Accelerating Universe Survey (PAUS), an imaging survey using a 40 narrow-band filter camera (PAUCam). Images obtained with PAUCam are affected by scattered light: an optical effect in which multiply reflected light deposits energy in specific detector regions, contaminating the science measurements. Fortunately, scattered light is not a random effect: it can be predicted and corrected for. We have found that BKGnet background predictions are very robust to distorting effects, while remaining statistically accurate. On average, using BKGnet improves the photometric flux measurements by 7%, and by up to 20% at the bright end. BKGnet also removes a systematic trend with i-band magnitude in the background error estimate that is present with the current PAU data management method. With BKGnet, we also reduce the photometric redshift outlier rate.
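The key idea of predicting both a background value and its associated error can be sketched as training with a Gaussian negative log-likelihood loss, which rewards a well-calibrated error estimate. The snippet below is a minimal numpy illustration of that loss; the function name and toy numbers are ours, not part of BKGnet.

```python
import numpy as np

def gaussian_nll(y_true, mu, sigma):
    """Gaussian negative log-likelihood: the loss that lets a network
    learn both a background prediction (mu) and its error (sigma)."""
    return np.mean(0.5 * np.log(2 * np.pi * sigma**2)
                   + (y_true - mu)**2 / (2 * sigma**2))

# Toy data: a constant background of 100 counts with noise of scale 3.
rng = np.random.default_rng(0)
y = 100 + 3 * rng.standard_normal(10_000)

# A calibrated sigma (3) scores better than an over- or under-confident
# one, so minimizing this loss calibrates the predicted error.
loss_calibrated = gaussian_nll(y, 100.0, 3.0)
loss_overconfident = gaussian_nll(y, 100.0, 1.0)
loss_underconfident = gaussian_nll(y, 100.0, 10.0)
```

Because the loss penalizes both an underestimated and an overestimated sigma, a network trained with it has no incentive to be systematically over- or under-confident, which is how the magnitude-dependent trend in the background error can be avoided.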



Related research

With the dramatic rise in high-quality galaxy data expected from Euclid and the Vera C. Rubin Observatory, there will be increasing demand for fast, high-precision methods for measuring galaxy fluxes, which are essential for inferring galaxy redshifts. In this paper, we introduce Lumos, a deep learning method to measure photometry from galaxy images. Lumos builds on BKGnet, an algorithm that predicts the background and its associated error, and predicts the background-subtracted flux probability density function. We have developed Lumos for data from the Physics of the Accelerating Universe Survey (PAUS), an imaging survey using a 40 narrow-band filter camera (PAUCam). PAUCam images are affected by scattered light, displaying a background noise pattern that can be predicted and corrected for. On average, Lumos increases the SNR of the observations by a factor of 2 compared to an aperture photometry algorithm. It also offers other advantages, such as robustness to distorting artifacts (e.g. cosmic rays or scattered light), the ability to deblend, and lower sensitivity to uncertainties in the galaxy profile parameters used to infer the photometry. Indeed, the number of flagged photometry outlier observations is reduced from 10% to 2% compared with aperture photometry. Furthermore, with Lumos photometry, the photo-z scatter is reduced by ~10% with the Deepz machine learning photo-z code, and the photo-z outlier rate by 20%. The photo-z improvement is lower than expected from the SNR increase; however, the photometric calibration and outliers in the photometry currently appear to be the limiting factors.
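Predicting a full flux probability density rather than a single value can be illustrated with a Gaussian mixture evaluated on a flux grid, the kind of output a mixture-density network head produces. This is a generic sketch with made-up weights and means, not the actual Lumos output layer.

```python
import numpy as np

def mixture_pdf(flux_grid, weights, means, sigmas):
    """Evaluate a Gaussian-mixture flux PDF on a grid of flux values."""
    w = np.asarray(weights)[:, None]
    mu = np.asarray(means)[:, None]
    s = np.asarray(sigmas)[:, None]
    comps = w * np.exp(-0.5 * ((flux_grid - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return comps.sum(axis=0)  # sum over mixture components

# Toy two-component flux PDF on a grid covering both components.
grid = np.linspace(-50.0, 150.0, 4001)
pdf = mixture_pdf(grid, [0.7, 0.3], [20.0, 60.0], [5.0, 15.0])

# The density integrates to ~1, and summary statistics (mean flux,
# uncertainty, outlier probability) can be read off the full PDF.
dx = grid[1] - grid[0]
area = pdf.sum() * dx
mean_flux = (grid * pdf).sum() * dx
```

Carrying the whole PDF downstream is what lets a photo-z code such as Deepz propagate flux uncertainty instead of a single best-fit number.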
Classification of stars and galaxies is a well-known astronomical problem that has been treated with different approaches, most of them relying on morphological information. In this paper, we tackle this issue using the low-resolution spectra from narrow-band photometry provided by the PAUS (Physics of the Accelerating Universe) survey. We find that, with the photometric fluxes from the 40 narrow-band filters and without including morphological information, it is possible to separate stars and galaxies to very high precision: 98.4% purity with a completeness of 98.8% for objects brighter than I = 22.5. This precision is obtained with a convolutional neural network as the classification algorithm, applied to the objects' spectra. We have also applied the method to the ALHAMBRA photometric survey, and we provide an updated classification for its Gold sample.
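A CNN applied to a 40-band photometric "spectrum" is, at its core, a 1D convolution over the filter axis. The snippet below shows that building block on a toy flux vector; the kernel and the emission-line-like bump are ours, purely to illustrate how a convolution picks up spectral features, and this is not the paper's architecture.

```python
import numpy as np

def conv1d_valid(spectrum, kernel):
    """1D 'valid' convolution over the narrow-band filter axis,
    the basic operation a CNN applies to a low-resolution spectrum."""
    n, k = len(spectrum), len(kernel)
    return np.array([np.dot(spectrum[i:i + k], kernel)
                     for i in range(n - k + 1)])

# Toy spectrum: 40 narrow-band fluxes with an emission-line-like bump.
fluxes = np.ones(40)
fluxes[25] += 5.0

# A band-pass kernel responds strongly at the bump: the kind of local
# spectral feature a trained network can use to separate stars
# (smooth stellar continua) from galaxies (emission lines, breaks).
feature_map = conv1d_valid(fluxes, np.array([-1.0, 2.0, -1.0]))
```

The feature map peaks where the kernel is centered on the bump (output index 24 for an input bump at index 25), which is exactly the locality a CNN exploits on this kind of data.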
Estimating depth from RGB images is a long-standing ill-posed problem that has been explored for decades by the computer vision, graphics, and machine learning communities. Among the existing techniques, stereo matching remains one of the most widely used in the literature due to its strong connection to the human binocular system. Traditionally, stereo-based depth estimation has been addressed by matching hand-crafted features across multiple images. Despite the extensive amount of research, these traditional techniques still suffer in the presence of highly textured areas, large uniform regions, and occlusions. Motivated by its growing success in solving various 2D and 3D vision problems, deep learning for stereo-based depth estimation has attracted growing interest from the community, with more than 150 papers published in this area between 2014 and 2019. This new generation of methods has demonstrated a significant leap in performance, enabling applications such as autonomous driving and augmented reality. In this article, we provide a comprehensive survey of this new and continuously growing field of research, summarize the most commonly used pipelines, and discuss their benefits and limitations. Reflecting on what has been achieved so far, we also conjecture what the future may hold for deep learning-based stereo depth estimation research.
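The traditional baseline that these deep methods improve on can be sketched as sum-of-absolute-differences (SAD) block matching over candidate disparities. Below is a minimal 1D illustration on a synthetic image row pair; the function name and parameters are ours.

```python
import numpy as np

def best_disparity(left_row, right_row, x, block=3, max_disp=8):
    """Pick the disparity d minimizing the sum of absolute differences
    between a block at x in the left row and the block at x - d in the
    right row (a point at x in the left image appears at x - d in the
    right image)."""
    ref = left_row[x:x + block]
    costs = []
    for d in range(max_disp + 1):
        if x - d < 0:
            costs.append(np.inf)  # shifted block would fall off the image
            continue
        costs.append(np.abs(ref - right_row[x - d:x - d + block]).sum())
    return int(np.argmin(costs))

# Synthetic pair: the right row equals the left row shifted by 4 pixels,
# i.e. every feature has a true disparity of 4.
rng = np.random.default_rng(1)
left = rng.random(64)
right = np.roll(left, -4)

disp = best_disparity(left, right, x=30)
```

The failure modes named in the abstract are visible even in this sketch: in a large uniform region every shift gives a near-zero SAD, so the arg-min is ambiguous, which is precisely where learned matching costs help.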
The Physics of the Accelerating Universe (PAU) Survey is an international project for the study of cosmological parameters associated with Dark Energy. PAU's 18-CCD camera (PAUCam), installed at the prime focus of the William Herschel Telescope at the Roque de los Muchachos Observatory (La Palma, Canary Islands), scans part of the northern sky to collect low-resolution spectral information on millions of galaxies with its unique set of 40 narrow-band filters in the optical range from 450 nm to 850 nm, and a set of 6 standard broad-band filters. The PAU data management (PAUdm) team is in charge of treating the data, including data transfer from the observatory to the PAU Survey data center, hosted at Port d'Informació Científica (PIC). PAUdm is also in charge of storage, data reduction and, finally, of making the results available to the scientific community. We describe the technical solutions adopted to cover the different aspects of PAU Survey data management, from the computing infrastructure supporting operations to the software tools and web services for data process orchestration and exploration. In particular, we focus on the PAU database, developed to coordinate the different PAUdm tasks and to preserve and guarantee the consistency of data and metadata.
D. Nieto, T. Miener, A. Brill (2021)
Arrays of imaging atmospheric Cherenkov telescopes (IACT) are superb instruments to probe the very-high-energy gamma-ray sky. This type of telescope focuses the Cherenkov light emitted from air showers, initiated by very-high-energy gamma rays and cosmic rays, onto the camera plane. Then, a fast camera digitizes the longitudinal development of the air shower, recording its spatial, temporal, and calorimetric information. The properties of the primary very-high-energy particle initiating the air shower can then be inferred from those images: the primary particle can be classified as a gamma ray or a cosmic ray and its energy and incoming direction can be estimated. This so-called full-event reconstruction, crucial to the sensitivity of the array to gamma rays, can be assisted by machine learning techniques. We present a deep-learning driven, full-event reconstruction applied to simulated IACT events using CTLearn. CTLearn is a Python package that includes modules for loading and manipulating IACT data and for running deep learning models with TensorFlow, using pixel-wise camera data as input.
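The pixel-wise classification step can be sketched as a forward pass of a tiny convolutional model on a camera image: convolution, nonlinearity, global pooling, softmax. The weights below are random and the shapes are toy-sized, purely to show the data flow; this is not one of CTLearn's actual models.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """2D 'valid' convolution: the feature extractor of a CNN."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kernel).sum()
    return out

def softmax(z):
    """Turn logits into class probabilities."""
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(2)
camera_image = rng.random((8, 8))     # stand-in for pixel-wise IACT camera data
kernel = rng.standard_normal((3, 3))  # untrained convolutional filter
weights = rng.standard_normal(2)      # logit weights: gamma ray vs cosmic ray

features = np.maximum(conv2d_valid(camera_image, kernel), 0.0)  # ReLU
pooled = features.mean()              # global average pooling to a scalar
probs = softmax(pooled * weights)     # P(gamma), P(cosmic ray)
```

Energy and arrival-direction estimation follow the same pattern with a regression head in place of the softmax, which is why a single pixel-wise pipeline can assist the full-event reconstruction.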
