Quality assessment plays a key role in creating and comparing video compression algorithms. Despite the development of many new quality assessment methods, generally accepted and well-known codec comparisons still mainly use classical metrics such as PSNR and SSIM, along with the newer VMAF. These metrics can be calculated following different rules: they can use different frame-by-frame averaging techniques or different summations of color components. In this paper, a fundamental comparison of various …
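The frame-averaging ambiguity mentioned above can be made concrete. A minimal sketch of two common conventions for a single color plane (the specific rules compared in the paper are not reproduced here): averaging per-frame PSNR values in the log domain, versus pooling MSE across frames first and converting to PSNR once.

```python
import numpy as np

def psnr(mse, peak=255.0):
    """Convert MSE to PSNR in dB for a given peak signal value."""
    return 10.0 * np.log10(peak ** 2 / mse)

def per_frame_mse(ref, dist):
    """Per-frame MSE for (frames, H, W) arrays of one color plane."""
    diff = ref.astype(np.float64) - dist.astype(np.float64)
    return (diff ** 2).mean(axis=(1, 2))

def psnr_mean_of_frames(ref, dist):
    """Rule 1: compute PSNR per frame, then average in the log (dB) domain."""
    return psnr(per_frame_mse(ref, dist)).mean()

def psnr_of_mean_mse(ref, dist):
    """Rule 2: average MSE over all frames, then take a single PSNR."""
    return psnr(per_frame_mse(ref, dist).mean())
```

Because PSNR is a convex function of MSE, Rule 1 never scores below Rule 2 for the same video pair, so the choice of averaging rule alone can shift reported codec rankings.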
We propose a new method for the visual quality assessment of 360-degree (omnidirectional) videos. The proposed method is based on computing multiple spatio-temporal objective quality features on viewports extracted from 360-degree videos. A new model is learned to properly combine these features into a metric that closely matches subjective quality scores. The main motivations for the proposed approach are that: 1) quality metrics computed on viewports better capture the user experience than metrics computed in the projection domain; 2) the use of viewports easily supports the different projection methods used in current 360-degree video systems; and 3) no individual objective image quality metric always performs best for all types of visual distortions, while a learned combination of them is able to adapt to different conditions. Experimental results, based on both the largest available 360-degree video quality dataset and a cross-dataset validation, demonstrate that the proposed metric outperforms state-of-the-art 360-degree and 2D video quality metrics.
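The idea of learning a combination of objective features can be sketched as follows. This is only an illustration under assumed data: the feature values, subjective scores, and the choice of a least-squares linear model are all hypothetical, not the paper's actual learner or feature set.

```python
import numpy as np

# Hypothetical training data: each row holds objective metric scores
# (e.g., PSNR, SSIM, a temporal feature) pooled over a video's viewports.
features = np.array([
    [40.0, 0.95, 0.10],
    [30.0, 0.86, 0.30],
    [36.0, 0.92, 0.15],
    [25.0, 0.80, 0.40],
])
# Hypothetical subjective mean opinion scores for the same videos.
mos = np.array([4.5, 3.2, 4.0, 2.6])

# Fit a linear combination with a bias term via least squares.
X = np.column_stack([features, np.ones(len(features))])
weights, *_ = np.linalg.lstsq(X, mos, rcond=None)

def predict_quality(viewport_features):
    """Map pooled viewport feature rows to predicted quality scores."""
    vf = np.atleast_2d(viewport_features)
    return np.column_stack([vf, np.ones(len(vf))]) @ weights
```

A linear model is the simplest way to let complementary metrics compensate for each other's blind spots; a real system would validate the learned weights on held-out (cross-dataset) videos, as the abstract describes.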
To effectively evaluate subjective visual quality in weakly controlled environments, we propose an Adaptive Paired Comparison method based on particle filtering. Because our approach requires each sample to be rated only once, test time is reduced compared to regular paired comparison. The method works with non-experts and improves reliability compared to the MOS and DS-MOS methods.
Ranking and recommendation of multimedia content such as videos is usually realized with respect to the relevance to a user query. However, for lecture videos and MOOCs (Massive Open Online Courses) it is not only required to retrieve relevant videos, but particularly to find lecture videos of high quality that facilitate learning, independent of, for instance, the video's or speaker's popularity. Thus, metadata about a lecture video's quality are crucial features for learning contexts, e.g., lecture video recommendation in search-as-learning scenarios. In this paper, we investigate whether automatically extracted features are correlated with quality aspects of a video. A set of scholarly videos from a Massive Open Online Course (MOOC) is analyzed regarding audio, linguistic, and visual features. Furthermore, a set of cross-modal features is proposed, derived by combining transcripts, audio, video, and slide content. A user study is conducted to investigate the correlations between the automatically collected features and human ratings of quality aspects of a lecture video. Finally, the impact of our features on the knowledge gain of the participants is discussed.
A key factor in designing 3D systems is to understand how different visual cues and distortions affect the perceptual quality of 3D video. The ultimate way to assess video quality is through subjective tests. However, subjective evaluation is time consuming, expensive, and in many cases not even possible. An alternative solution is objective quality metrics, which attempt to model the Human Visual System (HVS) in order to assess perceptual quality. The potential of 3D technology to significantly improve the immersiveness of video content has been hampered by the difficulty of objectively assessing Quality of Experience (QoE). A no-reference (NR) objective 3D quality metric, which could help determine capturing parameters and improve playback perceptual quality, would be welcomed by camera and display manufacturers. Network providers would embrace a full-reference (FR) 3D quality metric, as they could use it to ensure efficient QoE-based resource management during compression and Quality of Service (QoS) during transmission.
Anastasia Antsiferova, Alexander Yakovenko, Nickolay Safonov. (2021). "Objective video quality metrics application to video codecs comparisons: choosing the best for subjective quality estimation".