To ensure a stable series of recorded images of sufficient quality for further scientific analysis, an objective image quality measure is required. Especially for ground-based observations, which are subject to varying seeing conditions and clouds, the quality assessment has to take multiple effects into account and provide information about the affected regions. In this study, we develop a deep learning method suited to identify anomalies and provide an image quality assessment of solar full-disk H$\alpha$ filtergrams. The approach is based on the structural appearance and the true image distribution of high-quality observations. We employ a neural network with an encoder-decoder architecture to perform an identity transformation of selected high-quality observations. The encoder network produces a compressed representation of the input data, which is reconstructed to the original by the decoder. We use adversarial training to recover truncated information based on the high-quality image distribution. When images of reduced quality are transformed, the reconstruction of unknown features (e.g., clouds, contrails, partial occultation) deviates from the original. This difference is used to quantify the quality of the observations and to identify the affected regions. We apply our method to full-disk H$\alpha$ filtergrams from Kanzelhöhe Observatory recorded during 2012-2019 and demonstrate its capability to perform a reliable image quality assessment across various atmospheric conditions and instrumental effects, without requiring reference observations. Our quality metric distinguishes observations with quality-degrading effects from clear observations with an accuracy of 98.5% and provides a continuous quality measure that agrees well with human perception.
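The principle behind this approach can be sketched with a toy example: compress an image to a low-dimensional representation, reconstruct it, and use the reconstruction deviation as a quality map and score. This is only a minimal stand-in — block-mean pooling replaces the paper's adversarially trained encoder-decoder network, and the function names and the simulated "cloud" patch are illustrative assumptions, not part of the original method.

```python
import numpy as np

def encode(img, factor=4):
    # Toy "encoder": mean-pool the image into a compressed representation.
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(code, factor=4):
    # Toy "decoder": nearest-neighbour upsampling back to the original size.
    return np.repeat(np.repeat(code, factor, axis=0), factor, axis=1)

def quality_map(img, factor=4):
    # Per-pixel reconstruction deviation; large values flag anomalous regions.
    return np.abs(img - decode(encode(img, factor), factor))

def quality_score(img, factor=4):
    # Scalar quality metric: mean reconstruction error (lower = better quality).
    return quality_map(img, factor).mean()

# A smooth "clear" observation vs. the same image with a noisy local anomaly
# (a crude stand-in for a cloud or partial occultation).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
clear = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
cloudy = clear.copy()
cloudy[20:36, 20:36] += rng.normal(0.0, 0.5, size=(16, 16))
```

Because the high-frequency anomaly cannot be represented in the compressed code, `quality_score(cloudy)` exceeds `quality_score(clear)`, and `quality_map` localizes the affected region — the same mechanism the abstract describes, minus the learned network and adversarial loss.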
With the aperture synthesis (AS) technique, a number of small antennas can be assembled to form a large telescope whose spatial resolution is determined by the distance between the two farthest antennas rather than the diameter of a single-dish antenna. Different from
In this work, we aim to learn an unpaired image enhancement model that can enrich low-quality images with the characteristics of high-quality images provided by users. We propose a quality attention generative adversarial network (QAGAN) trained on
Inspired by the free-energy brain theory, which implies that the human visual system (HVS) tends to reduce uncertainty and restore perceptual details upon seeing a distorted image, we propose the restorative adversarial net (RAN), a GAN-based model for no-re
Image extension models have broad applications in image editing, computational photography and computer graphics. While image inpainting has been extensively studied in the literature, it is challenging to directly apply the state-of-the-art inpainti
Image generation has been heavily investigated in computer vision, where one core research challenge is to generate images from arbitrarily complex distributions with little supervision. Generative Adversarial Networks (GANs) as an implicit approach