
Image Quality Assessment for Full-Disk Solar Observations with Generative Adversarial Networks

Published by Robert Jarolim
Publication date: 2020
Research field: Physics
Paper language: English





In order to assure a stable series of recorded images of sufficient quality for further scientific analysis, an objective image quality measure is required. Especially when dealing with ground-based observations, which are subject to varying seeing conditions and clouds, the quality assessment has to take multiple effects into account and provide information about the affected regions. In this study, we develop a deep learning method that is suited to identify anomalies and provide an image quality assessment of solar full-disk H$\alpha$ filtergrams. The approach is based on the structural appearance and the true image distribution of high-quality observations. We employ a neural network with an encoder-decoder architecture to perform an identity transformation of selected high-quality observations. The encoder network is used to achieve a compressed representation of the input data, which is reconstructed to the original by the decoder. We use adversarial training to recover truncated information based on the high-quality image distribution. When images with reduced quality are transformed, the reconstruction of unknown features (e.g., clouds, contrails, partial occultation) shows deviations from the original. This difference is used to quantify the quality of the observations and to identify the affected regions. We apply our method to full-disk H$\alpha$ filtergrams from Kanzelhöhe Observatory recorded during 2012-2019 and demonstrate its capability to perform a reliable image quality assessment for various atmospheric conditions and instrumental effects, without the requirement of reference observations. Our quality metric achieves an accuracy of 98.5% in distinguishing observations with quality-degrading effects from clear observations and provides a continuous quality measure which is in good agreement with human perception.
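The core idea, scoring an observation by how strongly the model's reconstruction deviates from the input, can be sketched in a few lines. The snippet below is a minimal illustration under stated assumptions, not the paper's implementation: the trained encoder-decoder is replaced by a crude low-pass stand-in (`blur`), and `quality_assessment` with its 0.1 threshold is a hypothetical helper.

```python
import numpy as np

def quality_assessment(obs, reconstruct, threshold=0.1):
    """Score an observation by its deviation from a model reconstruction.

    obs: 2D float array in [0, 1]. reconstruct: callable mapping an image
    to its reconstruction (a trained encoder-decoder in the paper; any
    stand-in here). Returns (score, mask): score in [0, 1] is one minus
    the mean absolute deviation, and mask flags the affected regions.
    """
    rec = reconstruct(obs)
    deviation = np.abs(obs - rec)
    mask = deviation > threshold
    score = 1.0 - deviation.mean()
    return score, mask

def blur(img, k=5):
    """Stand-in 'reconstruction': a mean filter. A trained model would
    reproduce clear disks faithfully and fail on clouds or contrails."""
    pad = np.pad(img, k // 2, mode="edge")
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

clear = np.full((64, 64), 0.5)      # uniform "clear" frame
cloudy = clear.copy()
cloudy[20:40, 20:40] = 0.05         # simulated cloud occultation

s_clear, _ = quality_assessment(clear, blur)
s_cloudy, m = quality_assessment(cloudy, blur)
```

The clear frame reconstructs exactly and scores 1.0; the occulted frame scores lower, and the mask localizes the anomaly, mirroring how the paper flags affected regions.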




Read also

Long Xu, Wenqing Sun, Yihua Yan (2020)
With the aperture synthesis (AS) technique, a number of small antennas can be assembled to form a large telescope whose spatial resolution is determined by the distance between the two farthest antennas rather than by the diameter of a single-dish antenna. Unlike a direct imaging system, an AS telescope captures the Fourier coefficients of a spatial object and then applies an inverse Fourier transform to reconstruct the spatial image. Due to the limited number of antennas, the Fourier coefficients are extremely sparse in practice, resulting in a very blurry image. To remove or reduce this blur, CLEAN deconvolution has been widely used in the literature. However, it was initially designed for point sources; for extended sources, like the Sun, its efficiency is unsatisfactory. In this study, a deep neural network, namely a Generative Adversarial Network (GAN), is proposed for solar image deconvolution. The experimental results demonstrate that the proposed model is markedly better than traditional CLEAN on solar images.
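The sparsity problem described above can be reproduced numerically: keeping only a small random subset of Fourier coefficients and inverting yields a degraded "dirty image". A minimal NumPy sketch, where the random (u, v) mask is a simplification of a real antenna configuration:

```python
import numpy as np

# A disk-like "extended source" on a 64x64 grid.
n = 64
y, x = np.mgrid[:n, :n]
source = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < 15 ** 2).astype(float)

# Full Fourier coverage recovers the source exactly (up to float error).
vis = np.fft.fft2(source)              # the "visibilities"
full = np.fft.ifft2(vis).real

# An AS array samples only a sparse subset of (u, v) points; zero the rest.
rng = np.random.default_rng(0)
mask = rng.random(vis.shape) < 0.05    # keep ~5% of coefficients
dirty = np.fft.ifft2(vis * mask).real  # the blurry "dirty image"

err_full = np.abs(full - source).mean()
err_dirty = np.abs(dirty - source).mean()
```

`err_dirty` is orders of magnitude larger than `err_full`, which is the degradation that CLEAN (and the proposed GAN) must undo.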
In this work, we aim to learn an unpaired image enhancement model, which can enrich low-quality images with the characteristics of high-quality images provided by users. We propose a quality attention generative adversarial network (QAGAN) trained on unpaired data, based on a bidirectional Generative Adversarial Network (GAN) embedded with a quality attention module (QAM). The key novelty of the proposed QAGAN lies in the QAM injected into the generator, such that it learns domain-relevant quality attention directly from the two domains. More specifically, the proposed QAM allows the generator to effectively select semantics-related characteristics spatial-wise and to adaptively incorporate style-related attributes channel-wise. Therefore, in our proposed QAGAN, not only the discriminators but also the generator can directly access both domains, which significantly helps the generator learn the mapping function. Extensive experimental results show that, compared with state-of-the-art methods based on unpaired learning, our proposed method achieves better performance in both objective and subjective evaluations.
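As a loose illustration of the spatial-wise/channel-wise idea, and not QAGAN's actual QAM (whose architecture is defined in the paper), the following hypothetical sketch gates content features by a channel statistic drawn from the quality domain and a spatial statistic drawn from the content itself:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def quality_attention(feat, ref):
    """Hypothetical attention gate over features of shape (C, H, W).
    Channel-wise: scale each channel by a statistic of the reference
    (quality) domain. Spatial-wise: emphasize locations with strong
    content activation. Both gates lie in (0, 1)."""
    chan = sigmoid(ref.mean(axis=(1, 2)))   # shape (C,)
    spat = sigmoid(feat.mean(axis=0))       # shape (H, W)
    return feat * chan[:, None, None] * spat[None, :, :]

rng = np.random.default_rng(0)
feat = rng.normal(size=(4, 8, 8))   # content-domain features
ref = rng.normal(size=(4, 8, 8))    # quality-domain features
out = quality_attention(feat, ref)
```

The point of the sketch is only the two-axis factorization: one gate indexed by channel, one by spatial position, with the quality domain feeding the channel gate.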
Inspired by the free-energy brain theory, which implies that the human visual system (HVS) tends to reduce uncertainty and restore perceptual details upon seeing a distorted image, we propose the restorative adversarial net (RAN), a GAN-based model for no-reference image quality assessment (NR-IQA). RAN, which mimics the process of the HVS, consists of three components: a restorator, a discriminator and an evaluator. The restorator restores and reconstructs input distorted image patches, while the discriminator distinguishes the reconstructed patches from pristine distortion-free patches. After restoration, we observe that the perceptual distance between the restored and the distorted patches is monotonic with respect to the distortion level. We further define the Gain of Restoration (GoR) based on this phenomenon. The evaluator predicts a perceptual score by extracting feature representations from the distorted and restored patches to measure the GoR. Eventually, the quality score of an input image is estimated by a weighted sum of the patch scores. Experimental results on Waterloo Exploration, LIVE and TID2013 show the effectiveness and generalization ability of RAN compared to state-of-the-art NR-IQA models.
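The GoR observation, that the distance between a restored patch and its distorted input grows with the distortion level, can be demonstrated with a toy restorer. The sketch below substitutes a mean filter for RAN's trained restorator, and `restore`/`gor` are names made up for this illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))   # smooth 32x32 test patch

def restore(patch, k=3):
    """Stand-in restorer: a mean filter (RAN uses a trained GAN restorator)."""
    pad = np.pad(patch, k // 2, mode="edge")
    return np.array([[pad[i:i + k, j:j + k].mean()
                      for j in range(patch.shape[1])]
                     for i in range(patch.shape[0])])

def gor(patch):
    """Gain of Restoration: distance between restored and distorted patch."""
    return np.abs(restore(patch) - patch).mean()

# Distort at increasing noise levels; the restored-vs-distorted distance
# increases monotonically, which is the property the evaluator exploits.
levels = [0.0, 0.05, 0.1, 0.2]
gains = [gor(np.clip(clean + rng.normal(0, s, clean.shape), 0, 1))
         for s in levels]
```

With a near-clean patch the restorer changes little, so the GoR is small; the noisier the input, the more the restorer removes, and the larger the GoR, making it a usable proxy for distortion level.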
Image extension models have broad applications in image editing, computational photography and computer graphics. While image inpainting has been extensively studied in the literature, it is challenging to directly apply state-of-the-art inpainting methods to image extension, as they tend to generate blurry or repetitive pixels with inconsistent semantics. We introduce semantic conditioning to the discriminator of a generative adversarial network (GAN), and achieve strong results on image extension with coherent semantics and visually pleasing colors and textures. We also show promising results in extreme extensions, such as panorama generation.
Hui Ying, He Wang, Tianjia Shao (2021)
Image generation has been heavily investigated in computer vision, where one core research challenge is to generate images from arbitrarily complex distributions with little supervision. Generative Adversarial Networks (GANs), as an implicit approach, have achieved great success in this direction and have therefore been employed widely. However, GANs are known to suffer from issues such as mode collapse, a non-structured latent space, and being unable to compute likelihoods. In this paper, we propose a new unsupervised non-parametric method named mixture of infinite conditional GANs, or MIC-GANs, to tackle several GAN issues together, aiming for image generation with parsimonious prior knowledge. Through comprehensive evaluations across different datasets, we show that MIC-GANs are effective in structuring the latent space and avoiding mode collapse, and outperform state-of-the-art methods. MIC-GANs are adaptive, versatile, and robust. They offer a promising solution to several well-known GAN issues. Code available: github.com/yinghdb/MICGANs.