
A Knowledge-Driven Quality-of-Experience Model for Adaptive Streaming Videos

Published by: Zhengfang Duanmu
Date of publication: 2019
Research field: Informatics Engineering
Research language: English





The fundamental conflict between the enormous space of adaptive streaming videos and the limited capacity of subjective experiments poses significant challenges to objective Quality-of-Experience (QoE) prediction. Existing objective QoE models exhibit complex functional forms and fail to generalize well across diverse streaming environments. In this study, we propose an objective QoE model, namely the knowledge-driven streaming quality index (KSQI), that integrates prior knowledge about the human visual system with human-annotated data in a principled way. By analyzing subjective responses to streaming videos from a corpus of subjective studies, we show that the family of admissible QoE functions lies in a convex set. Using a variant of projected gradient descent, we optimize the objective QoE model over a database of training videos. The proposed KSQI demonstrates strong generalizability to diverse streaming environments, as evidenced by state-of-the-art performance on four publicly available benchmark datasets.
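The key computational idea is that, once the admissible QoE functions form a convex set, fitting them to subjective data becomes a constrained problem solvable by projected gradient descent. Below is a minimal Python sketch, assuming (hypothetically) a non-decreasing quality-versus-bitrate constraint as the prior and a squared-error fit to mean opinion scores; the actual constraints and objective in KSQI are more elaborate than this.

```python
import numpy as np

def project_monotone(q):
    """Euclidean projection onto the set of non-decreasing vectors
    (isotonic regression via the pool-adjacent-violators algorithm)."""
    values, weights = [], []
    for v in np.asarray(q, dtype=float):
        values.append(v)
        weights.append(1)
        # merge adjacent blocks while the ordering is violated
        while len(values) > 1 and values[-2] > values[-1]:
            w = weights[-2] + weights[-1]
            merged = (values[-2] * weights[-2] + values[-1] * weights[-1]) / w
            values[-2:] = [merged]
            weights[-2:] = [w]
    return np.concatenate([np.full(w, v) for v, w in zip(values, weights)])

def projected_gradient_descent(grad, q0, project, lr=0.1, steps=500):
    """Gradient step followed by projection back onto the convex set."""
    q = project(q0)
    for _ in range(steps):
        q = project(q - lr * grad(q))
    return q

# Hypothetical example: fit a non-decreasing bitrate-quality curve to MOS samples
mos = np.array([2.1, 2.0, 3.2, 3.1, 4.0, 4.4])
q_hat = projected_gradient_descent(lambda q: q - mos, np.zeros_like(mos), project_monotone)
```

The projection step is what injects the prior knowledge: the gradient step moves the model toward the data, and the projection returns it to the set of functions consistent with the human-visual-system constraints.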


Read also

The diversity of the video delivery pipeline poses a grand challenge to the evaluation of adaptive bitrate (ABR) streaming algorithms and objective quality-of-experience (QoE) models. Here we introduce the largest subject-rated database of its kind to date, namely WaterlooSQoE-IV, consisting of 1350 adaptive streaming videos created from diverse source contents, video encoders, network traces, ABR algorithms, and viewing devices. We collect human opinions for each video with a series of carefully designed subjective experiments. Subsequent data analysis and testing/comparison of ABR algorithms and QoE models using the database lead to a series of novel observations and interesting findings, in terms of the effectiveness of subjective experiment methodologies, the interactions between user experience and source content, viewing device, and encoder type, the heterogeneities in the bias and preference of user experiences, the behaviors of ABR algorithms, and the performance of objective QoE models. Most importantly, our results suggest that a better objective QoE model, or a better understanding of human perceptual experience and behaviour, is the most dominant factor in improving the performance of ABR algorithms, as opposed to the advanced optimization frameworks, machine learning strategies, or bandwidth predictors on which a majority of ABR research has focused in the past decade. On the other hand, our performance evaluation of 11 QoE models shows only a moderate correlation between state-of-the-art QoE models and subjective ratings, implying room for improvement in both QoE modeling and ABR algorithms. The database is made publicly available at https://ece.uwaterloo.ca/~zduanmu/waterloosqoe4/.
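For reference, the correlation-based comparison of QoE models against subjective ratings mentioned above typically uses the criteria in the sketch below. The exact evaluation protocol of the paper (e.g. any nonlinear mapping fitted before computing PLCC/RMSE) is not specified here, so this is only the standard recipe.

```python
import numpy as np
from scipy import stats

def evaluate_qoe_model(pred, mos):
    """Agreement between objective QoE predictions and mean opinion scores.
    Note: PLCC/RMSE are often computed after fitting a monotonic (e.g.
    logistic) mapping from predictions to MOS, which is omitted here."""
    pred, mos = np.asarray(pred, float), np.asarray(mos, float)
    return {
        "SROCC": stats.spearmanr(pred, mos)[0],   # monotonic agreement
        "KROCC": stats.kendalltau(pred, mos)[0],  # pairwise rank agreement
        "PLCC": stats.pearsonr(pred, mos)[0],     # linear agreement
        "RMSE": float(np.sqrt(np.mean((pred - mos) ** 2))),
    }
```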
Ren Yang, Mai Xu, Tie Liu (2017)
The latest High Efficiency Video Coding (HEVC) standard has been increasingly applied to generate video streams over the Internet. However, HEVC-compressed videos may incur severe quality degradation, particularly at low bit-rates. Thus, it is necessary to enhance the visual quality of HEVC videos at the decoder side. To this end, this paper proposes a Quality Enhancement Convolutional Neural Network (QE-CNN) method that does not require any modification of the encoder to achieve quality enhancement for HEVC. In particular, our QE-CNN method learns QE-CNN-I and QE-CNN-P models to reduce the distortion of HEVC I and P frames, respectively. The proposed method differs from the existing CNN-based quality enhancement approaches, which only handle intra-coding distortion and are thus not suitable for P frames. Our experimental results validate that our QE-CNN method is effective in enhancing quality for both I and P frames of HEVC videos. To apply our QE-CNN method in time-constrained scenarios, we further propose a Time-constrained Quality Enhancement Optimization (TQEO) scheme. Our TQEO scheme controls the computational time of QE-CNN to meet a target while maximizing the quality enhancement. The experimental results demonstrate the effectiveness of our TQEO scheme in terms of time control accuracy and quality enhancement under different time constraints. Finally, we design a prototype to implement our TQEO scheme in a real-time scenario.
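As a rough illustration of decoder-side enhancement, the sketch below shows a residual CNN that maps a decoded frame to an enhanced one. The published QE-CNN-I/QE-CNN-P architectures and the TQEO scheduling logic differ from this minimal version; channel and depth choices here are assumptions.

```python
import torch
import torch.nn as nn

class QualityEnhancementCNN(nn.Module):
    """Minimal residual-learning CNN for decoder-side quality enhancement.
    Illustrative only; not the published QE-CNN architecture."""
    def __init__(self, channels=64, layers=5):
        super().__init__()
        body = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(layers - 2):
            body += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        body += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*body)

    def forward(self, x):
        # predict the compression residual and add it back to the decoded frame
        return x + self.body(x)
```

Training such a model on pairs of (decoded, pristine) frames with an MSE loss is the usual recipe; separate models for I and P frames, as the paper describes, account for their different distortion characteristics.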
Lingzhi Zhao, Ying Cui, Zhi Liu (2021)
This paper investigates adaptive streaming of one or multiple tiled 360 videos from a multi-antenna base station (BS) to one or multiple single-antenna users, respectively, in a multi-carrier wireless system. We aim to maximize the video quality while keeping rebuffering time small via encoding rate adaptation at each group of pictures (GOP) and transmission adaptation at each (transmission) slot. To capture the impact of field-of-view (FoV) prediction, we consider three cases of FoV viewing probability distributions, i.e., perfect, imperfect, and unknown FoV viewing probability distributions, and use the average total utility, worst average total utility, and worst total utility as the respective performance metrics. In the single-user scenario, we optimize the encoding rates of the tiles, encoding rates of the FoVs, and transmission beamforming vectors for all subcarriers to maximize the total utility in each case. In the multi-user scenario, we adopt rate splitting with successive decoding and optimize the encoding rates of the tiles, encoding rates of the FoVs, rates of the common and private messages, and transmission beamforming vectors for all subcarriers to maximize the total utility in each case. Then, we separate the challenging optimization problem into multiple tractable problems in each scenario. In the single-user scenario, we obtain a globally optimal solution of each problem using transformation techniques and the Karush-Kuhn-Tucker (KKT) conditions. In the multi-user scenario, we obtain a KKT point of each problem using the concave-convex procedure (CCCP). Finally, numerical results demonstrate that the proposed solutions achieve notable gains over existing schemes in all three cases. To the best of our knowledge, this is the first work revealing the impact of FoV prediction on the performance of adaptive streaming of tiled 360 videos.
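The multi-user solution hinges on the concave-convex procedure (CCCP), which handles a difference-of-convex objective by repeatedly linearizing the concave part and solving the remaining convex subproblem. Here is a generic sketch with a toy difference-of-convex example; the paper's actual beamforming and rate-splitting subproblems are far more involved.

```python
import numpy as np

def cccp(grad_h, solve_convex, x0, iters=50, tol=1e-8):
    """Minimize g(x) - h(x) with g, h convex: at each step, linearize h
    at the current iterate and solve min_x g(x) - grad_h(x_k)^T x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = np.asarray(solve_convex(grad_h(x)))
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Toy example: minimize x**4 - x**2; stationary points at x = +/- 1/sqrt(2)
solve = lambda g: np.cbrt(g / 4.0)               # argmin_x x**4 - g*x (4x**3 = g)
x_star = cccp(lambda x: 2.0 * x, solve, x0=0.5)  # converges to ~0.7071
```

Each iterate solves a convex surrogate that upper-bounds the original objective, so the objective value is non-increasing and the procedure converges to a KKT point, which is exactly the guarantee the paper invokes.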
Wei Quan, Yuxuan Pan, Bin Xiang (2020)
With the merit of containing full panoramic content in one camera, Virtual Reality (VR) and 360-degree videos have attracted more and more attention in the field of industrial cloud manufacturing and training. The Industrial Internet of Things (IoT), where many VR terminals need to be online at the same time, can hardly guarantee VR's bandwidth requirements. However, by making use of users' quality-of-experience (QoE) awareness factors, including the relative moving speed and the depth difference between the viewpoint and other content, bandwidth consumption can be reduced. In this paper, we propose OFB-VR (Optical Flow Based VR), an interactive method of VR streaming that can make use of VR users' QoE awareness to ease the bandwidth pressure. The Just-Noticeable Difference through Optical Flow Estimation (JND-OFE) is explored to quantify users' awareness of quality distortion in 360-degree videos. Accordingly, a novel QoE metric for 360-degree videos based on PSNR and JND-OFE (PSNR-OF) is proposed. With the help of PSNR-OF, OFB-VR adopts a versatile-size tiling scheme to lessen the tiling overhead. A Reinforcement Learning (RL) method is implemented to make use of historical data to perform Adaptive BitRate (ABR) selection. For evaluation, we take two prior VR streaming schemes, Pano and Plato, as baselines. Extensive evaluations show that our system can increase the mean PSNR-OF score by 9.5-15.8% while maintaining the same rebuffer ratio compared with Pano and Plato on a fluctuating LTE bandwidth dataset. The evaluation results show that OFB-VR is a promising prototype for real interactive industrial VR. A prototype of OFB-VR can be found at https://github.com/buptexplorers/OFB-VR.
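To make the JND-OFE idea concrete, here is a sketch of a motion-discounted PSNR: per-pixel distortion is down-weighted where the optical-flow magnitude is large, since viewers are less sensitive to distortion on fast-moving content. The published PSNR-OF metric is defined differently; frames are assumed to be 8-bit grayscale arrays, and `alpha` is a hypothetical sensitivity parameter.

```python
import cv2
import numpy as np

def flow_weighted_psnr(ref_prev, ref, dist, alpha=0.1):
    """Illustrative JND-weighted PSNR between a reference frame `ref`
    and a distorted frame `dist`, using dense optical flow from the
    previous reference frame to estimate per-pixel motion."""
    flow = cv2.calcOpticalFlowFarneback(ref_prev, ref, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    motion = np.linalg.norm(flow, axis=2)      # per-pixel flow magnitude
    weight = 1.0 / (1.0 + alpha * motion)      # down-weight fast-moving pixels
    err = (ref.astype(float) - dist.astype(float)) ** 2
    wmse = np.sum(weight * err) / np.sum(weight)
    return 10.0 * np.log10(255.0 ** 2 / max(wmse, 1e-12))
```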
Quality assessment of in-the-wild videos is a challenging problem because of the absence of reference videos and the presence of shooting distortions. Knowledge of the human visual system can help establish methods for objective quality assessment of in-the-wild videos. In this work, we show that two prominent effects of the human visual system, namely content-dependency and temporal-memory effects, can be used for this purpose. We propose an objective no-reference video quality assessment method that integrates both effects into a deep neural network. For content-dependency, we extract features from a pre-trained image classification neural network for its inherent content-aware property. For temporal-memory effects, long-term dependencies, especially the temporal hysteresis, are integrated into the network with a gated recurrent unit and a subjectively-inspired temporal pooling layer. To validate the performance of our method, experiments are conducted on three publicly available in-the-wild video quality assessment databases: KoNViD-1k, CVD2014, and LIVE-Qualcomm. Experimental results demonstrate that our proposed method outperforms five state-of-the-art methods by a large margin, specifically, 12.39%, 15.71%, 15.45%, and 18.09% overall performance improvements over the second-best method VBLIINDS, in terms of SROCC, KROCC, PLCC, and RMSE, respectively. Moreover, an ablation study verifies the crucial role of both the content-aware features and the modeling of temporal-memory effects. The PyTorch implementation of our method is released at https://github.com/lidq92/VSFA.
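A compact sketch of the described pipeline follows: pre-extracted content-aware CNN features pass through a GRU, and per-frame scores are pooled with a hysteresis-style term that remembers recent quality drops. The feature dimension, window length `tau`, and mixing weight `gamma` are assumptions, not the released model's values; see the linked repository for the actual implementation.

```python
import torch
import torch.nn as nn

class VideoQualityHead(nn.Module):
    """Illustrative no-reference VQA head: GRU over per-frame CNN
    features, then hysteresis-style temporal pooling."""
    def __init__(self, feat_dim=4096, hidden=32):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, feats, tau=12, gamma=0.5):
        h, _ = self.gru(feats)          # feats: (batch, frames, feat_dim)
        q = self.fc(h).squeeze(-1)      # per-frame quality: (batch, frames)
        # temporal hysteresis: pull each frame's score toward the worst
        # score in a short trailing window, mimicking viewers' memory
        # of quality drops
        mem = torch.stack([q[:, max(0, t - tau):t + 1].min(dim=1).values
                           for t in range(q.size(1))], dim=1)
        return (gamma * mem + (1 - gamma) * q).mean(dim=1)  # video-level score
```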
