As immersive multimedia techniques such as Free-viewpoint TV (FTV) develop at an astonishing rate, users' demand for high-quality immersive content has increased dramatically. Unlike traditional uniform artifacts, the distortions within immersive content can be non-uniform and structure-related, and are thus challenging for commonly used quality metrics. Recent studies have demonstrated that representations of visual features can be extracted at multiple levels of a hierarchy. Inspired by the hierarchical representation mechanism in the human visual system (HVS), in this paper we explore adopting structural representations to quantitatively measure the impact of such structure-related distortions on perceived quality in the FTV scenario. More specifically, a bio-inspired full-reference image quality metric is proposed based on 1) a low-level contour descriptor; 2) a mid-level contour category descriptor; and 3) a task-oriented non-natural structure descriptor. The experimental results show that the proposed model significantly outperforms state-of-the-art metrics.
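How the three descriptor levels are fused into one score is not specified in the abstract. Below is a minimal sketch of the general idea, weighted pooling of per-level reference/distorted similarities; the edge-based proxies standing in for the three descriptors, and the weights, are illustrative assumptions, not the paper's formulation.

```python
import numpy as np
from scipy import ndimage

def _edge_map(img):
    """Gradient-magnitude edges as a crude stand-in for a contour descriptor."""
    gx = ndimage.sobel(img.astype(float), axis=0)
    gy = ndimage.sobel(img.astype(float), axis=1)
    return np.hypot(gx, gy)

def _similarity(a, b, c=1e-3):
    """SSIM-style similarity between two feature maps."""
    return float(np.mean((2 * a * b + c) / (a ** 2 + b ** 2 + c)))

def fused_quality_score(ref, dist, weights=(0.4, 0.3, 0.3)):
    """Weighted fusion of three descriptor-level similarities.

    The three 'levels' here (raw edges, smoothed edges, thresholded
    structure maps) are illustrative proxies, not the paper's descriptors.
    """
    ref_e, dist_e = _edge_map(ref), _edge_map(dist)
    levels = [
        _similarity(ref_e, dist_e),                            # low level
        _similarity(ndimage.gaussian_filter(ref_e, 2),
                    ndimage.gaussian_filter(dist_e, 2)),       # mid level
        _similarity((ref_e > ref_e.mean()).astype(float),
                    (dist_e > dist_e.mean()).astype(float)),   # structure
    ]
    w = np.asarray(weights)
    return float(np.dot(w, levels) / w.sum())
```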
This paper reports on the NTIRE 2021 challenge on perceptual image quality assessment (IQA), held in conjunction with the New Trends in Image Restoration and Enhancement (NTIRE) workshop at CVPR 2021. As a new type of image processing technology, perceptual image processing algorithms based on Generative Adversarial Networks (GANs) produce images with more realistic textures. These output images have completely different characteristics from traditional distortions and thus pose a new challenge for IQA methods to evaluate their visual quality. In comparison with previous IQA challenges, the training and testing datasets in this challenge include the outputs of perceptual image processing algorithms and the corresponding subjective scores, so they can be used to develop and evaluate IQA methods on GAN-based distortions. The challenge had 270 registered participants in total. In the final testing stage, 13 participating teams submitted their models and fact sheets. Almost all of them achieved much better results than existing IQA methods, and the winning method demonstrates state-of-the-art performance.
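The abstract does not name the evaluation criteria; IQA challenges conventionally rank submissions by correlating predicted scores with the subjective scores. A minimal sketch under that assumption, using SROCC and PLCC:

```python
import numpy as np
from scipy import stats

def correlate_with_mos(predicted: np.ndarray, mos: np.ndarray):
    """Correlate objective predictions with subjective scores.

    SROCC (rank order) and PLCC (linear) are the usual criteria in IQA
    benchmarks; their use here is an assumption about this challenge.
    """
    srocc = stats.spearmanr(predicted, mos).correlation
    plcc = stats.pearsonr(predicted, mos)[0]
    return srocc, plcc

preds = np.random.rand(100)                # toy model outputs
mos = preds + 0.1 * np.random.randn(100)   # toy subjective scores
print(correlate_with_mos(preds, mos))
```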
Recently, image quality assessment (IQA) has achieved remarkable progress with the success of deep learning. However, the strict pre-condition of full-reference (FR) methods limits their application in real scenarios, and the no-reference (NR) scheme is also inconvenient due to its unsatisfactory performance, which results from ignoring the essence of image quality. In this paper, we introduce a new scheme, namely external-reference image quality assessment (ER-IQA), which bridges the gap between FR-IQA and NR-IQA by introducing external reference images. As the first implementation and a new baseline for ER-IQA, we propose an Unpaired-IQA network that processes images in a content-unpaired manner. A Mutual Attention-based Feature Enhancement (MAFE) module is designed for the unpaired features in ER-IQA. The MAFE module allows the network to extract quality-discriminative features from distorted images and content-variability-robust features from external reference ones. Extensive experiments demonstrate that the proposed model outperforms state-of-the-art NR-IQA methods, verifying the effectiveness of ER-IQA and the possibility of narrowing the gap between the two existing categories.
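The abstract does not detail MAFE's internals. A minimal sketch of the underlying idea, cross-attention between the two unpaired feature streams, is given below; the module structure, dimensions, and residual wiring are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MutualAttention(nn.Module):
    """Cross-attention between content-unpaired feature streams
    (a sketch of the MAFE idea, not the paper's actual module)."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.dist_from_ref = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ref_from_dist = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, dist_feat: torch.Tensor, ref_feat: torch.Tensor):
        # dist_feat, ref_feat: (batch, tokens, dim); tokens are flattened
        # spatial positions of the distorted and external reference images.
        d_enh, _ = self.dist_from_ref(dist_feat, ref_feat, ref_feat)
        r_enh, _ = self.ref_from_dist(ref_feat, dist_feat, dist_feat)
        return dist_feat + d_enh, ref_feat + r_enh  # residual enhancement

# Usage: the enhanced features would feed a quality-regression head.
mafe = MutualAttention(dim=256)
d, r = mafe(torch.randn(2, 196, 256), torch.randn(2, 196, 256))
```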
Image quality assessment is critical to controlling and maintaining the perceived quality of visual content. Both subjective and objective evaluations can be utilised; however, subjective image quality assessment is currently considered the most reliable approach. Databases containing distorted images and mean opinion scores are needed in the field of atmospheric research with a view to improving the current state-of-the-art methodologies. In this paper, we focus on using ground-based sky camera images to understand atmospheric events. We present a new image quality assessment dataset containing original and distorted nighttime images of sky/cloud from the SWINSEG database. Subjective quality assessment was carried out under controlled conditions, as recommended by the ITU. Statistical analyses of the subjective scores showed the impact of noise type and distortion level on the perceived quality.
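The statistical analysis is not detailed in the abstract. A minimal sketch of the standard ITU-style treatment, computing the per-image mean opinion score (MOS) with a t-distribution confidence interval; the score matrix, rating scale, and panel size are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def mos_with_ci(scores: np.ndarray, confidence: float = 0.95):
    """Per-image mean opinion score and confidence interval.

    scores: (num_subjects, num_images) matrix of opinion scores,
    e.g. on a 1-5 ACR scale. Shape and scale are assumptions.
    """
    n = scores.shape[0]
    mos = scores.mean(axis=0)
    sem = scores.std(axis=0, ddof=1) / np.sqrt(n)
    # t-distribution half-width, as in ITU-R BT.500-style analyses
    half = stats.t.ppf((1 + confidence) / 2, df=n - 1) * sem
    return mos, mos - half, mos + half

ratings = np.random.randint(1, 6, size=(20, 8))  # 20 subjects, 8 images
mos, lo, hi = mos_with_ci(ratings.astype(float))
```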
Image quality assessment (IQA) is a key factor in the fast development of image restoration (IR) algorithms. The most recent IR methods based on Generative Adversarial Networks (GANs) have achieved significant improvements in visual performance, but also present great challenges for quantitative evaluation. Notably, we observe an increasing inconsistency between perceptual quality and the evaluation results. We therefore raise two questions: (1) Can existing IQA methods objectively evaluate recent IR algorithms? (2) When we focus on beating current benchmarks, are we getting better IR algorithms? To answer these questions and promote the development of IQA methods, we contribute a large-scale IQA dataset, called the Perceptual Image Processing Algorithms (PIPAL) dataset. In particular, this dataset includes the results of GAN-based methods, which are missing from previous datasets. We collect more than 1.13 million human judgments to assign subjective scores to PIPAL images using the more reliable Elo system. Based on PIPAL, we present new benchmarks for both IQA and super-resolution methods. Our results indicate that existing IQA methods cannot fairly evaluate GAN-based IR algorithms. While using appropriate evaluation methods is important, IQA methods should also be updated along with the development of IR algorithms. Finally, we improve the performance of IQA networks on GAN-based distortions by introducing anti-aliasing pooling. Experiments show the effectiveness of the proposed method.
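The Elo system converts pairwise human judgments into per-image scores. A minimal sketch of the standard Elo update rule follows; the K-factor and 400-point scale are conventional defaults, not parameters reported in the abstract.

```python
def elo_update(rating_a: float, rating_b: float,
               a_wins: float, k: float = 32.0):
    """One Elo update after a pairwise image comparison.

    a_wins: 1.0 if image A was judged better, 0.0 if B, 0.5 for a tie.
    k and the 400-point scale are conventional Elo defaults; the paper's
    exact parameters are not given in the abstract.
    """
    expected_a = 1.0 / (1.0 + 10.0 ** ((rating_b - rating_a) / 400.0))
    delta = k * (a_wins - expected_a)
    return rating_a + delta, rating_b - delta

# All images start at the same rating; scores converge as judgments accumulate.
ra, rb = elo_update(1500.0, 1500.0, a_wins=1.0)
```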
This paper describes a quality assessment model for perceptual video compression applications (PVM), which simulates visual masking and distortion-artefact perception using an adaptive combination of noticeable distortions and blurring artefacts. The method shows significant improvement over existing quality metrics on the VQEG database, and its latency and complexity attributes make it compatible with in-loop rate-quality optimisation for next-generation video codecs. Performance comparisons are validated against a range of different distortion types.
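The adaptive combination is not specified in the abstract. A minimal sketch of the general idea, where a blur-dominance gate steers pooling between a masked-distortion term and a blur term; both input measures, the logistic gate, and the Minkowski exponent are illustrative assumptions.

```python
import numpy as np

def pvm_style_score(masked_distortion: np.ndarray,
                    blur_artefact: np.ndarray,
                    p: float = 3.0):
    """Adaptively combine two per-frame artefact measures into one score.

    masked_distortion: noticeable-distortion energy after visual masking.
    blur_artefact: measured blurring severity. Both inputs, and the
    logistic weighting below, are illustrative stand-ins.
    """
    # Weight blur more heavily in frames where blur dominates.
    blur_ratio = blur_artefact / (blur_artefact + masked_distortion + 1e-8)
    w = 1.0 / (1.0 + np.exp(-8.0 * (blur_ratio - 0.5)))  # logistic gate
    per_frame = (1.0 - w) * masked_distortion + w * blur_artefact
    # Minkowski temporal pooling with p > 1 emphasises the worst frames.
    return float(np.mean(per_frame ** p) ** (1.0 / p))
```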