
DeepQoE: A unified Framework for Learning to Predict Video QoE

Published by: Huaizheng Zhang
Publication date: 2018
Research field: Informatics Engineering
Paper language: English





Motivated by the prowess of deep learning (DL) based techniques in prediction, generalization, and representation learning, we develop a novel framework called DeepQoE to predict video quality of experience (QoE). The end-to-end framework first uses a combination of DL techniques (e.g., word embeddings) to extract generalized features. Next, these features are combined and fed into a neural network for representation learning. Such representations serve as inputs for classification or regression tasks. Evaluating the performance of DeepQoE on two datasets, we show that for the small dataset, the accuracy of all shallow learning algorithms is improved by using the representation derived from DeepQoE. For the large dataset, our DeepQoE framework achieves a significant performance improvement over the best baseline method (90.94% vs. 82.84%). Moreover, DeepQoE, also released as an open-source tool, provides video QoE research with much-needed flexibility in fitting different datasets, extracting generalized features, and learning representations.
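As a rough illustration of the pipeline the abstract describes (word-embedding feature extraction, feature fusion in a neural network, and a classification or regression head on the learned representation), the following PyTorch sketch may help. All module names, layer sizes, and the concatenation-based fusion are illustrative assumptions and do not reproduce the released DeepQoE code.

```python
# Minimal sketch of a DeepQoE-style pipeline (assumed architecture, not the released code).
import torch
import torch.nn as nn

class DeepQoESketch(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=16, numeric_dim=4,
                 hidden_dim=64, repr_dim=32, num_classes=5):
        super().__init__()
        # Word embeddings generalize categorical/textual inputs (e.g., content genre).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Fusion network learns a shared representation from concatenated features.
        self.fusion = nn.Sequential(
            nn.Linear(embed_dim + numeric_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, repr_dim),
            nn.ReLU(),
        )
        # The representation feeds a task head: classification here; swap for a
        # single-output linear layer to do regression on a QoE score.
        self.head = nn.Linear(repr_dim, num_classes)

    def forward(self, word_ids, numeric_feats):
        text_feat = self.embed(word_ids).mean(dim=1)       # average-pool token embeddings
        fused = torch.cat([text_feat, numeric_feats], dim=1)
        representation = self.fusion(fused)                 # reusable QoE representation
        return self.head(representation), representation

# Toy forward pass: a batch of 2 samples with 3 tokens and 4 numeric features each.
model = DeepQoESketch()
logits, repr_ = model(torch.randint(0, 1000, (2, 3)), torch.randn(2, 4))
print(logits.shape, repr_.shape)  # torch.Size([2, 5]) torch.Size([2, 32])
```

The returned representation could also be handed to shallow learners (e.g., an SVM), which is how the abstract reports accuracy gains on the small dataset.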


Read also

Immersive media streaming, especially virtual reality (VR)/360-degree video streaming, which is very bandwidth demanding, has become more and more popular due to the rapid growth of multimedia and networking deployments. To better exploit the available resources and achieve a better quality of experience (QoE) as perceived by users, this paper develops an application-layer scheme to jointly exploit the available bandwidth from the LTE and Wi-Fi networks in 360-degree video streaming. The proposed scheme and the corresponding solution algorithms utilize the saliency of the video, the prediction of users' views, and the users' status information to obtain an optimal association of users with different Wi-Fi access points (APs) that maximizes the system's utility. In addition, a novel buffer strategy is proposed to mitigate the influence of the short-time prediction problem when transmitting 360-degree videos over time-varying networks. The promising performance and low complexity of the proposed scheme and algorithms are validated in simulations with various 360-degree videos.
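To make the user-to-AP association idea concrete, here is a minimal greedy sketch in Python that assigns users to Wi-Fi APs by marginal utility under a per-AP capacity. The utility model, the capacity constraint, and the greedy heuristic are assumptions for illustration only; the paper formulates and solves its own joint optimization.

```python
# Illustrative greedy user-to-AP association maximizing a simple utility.
from itertools import product

def greedy_association(users, aps, utility):
    """Assign each user to the AP giving the largest marginal utility,
    respecting a per-AP capacity (number of users it can serve)."""
    load = {ap: 0 for ap in aps}
    assignment = {}
    # Consider (user, AP) pairs in order of decreasing utility.
    for u, ap in sorted(product(users, aps),
                        key=lambda pair: utility(*pair), reverse=True):
        if u not in assignment and load[ap] < aps[ap]:
            assignment[u] = ap
            load[ap] += 1
    return assignment

# Toy example: utility grows with predicted-view saliency and AP bandwidth.
users = {"u1": 0.9, "u2": 0.4, "u3": 0.7}   # predicted-view saliency per user
aps = {"ap1": 2, "ap2": 1}                  # capacity (max users per AP)
bandwidth = {"ap1": 40.0, "ap2": 80.0}      # Mbps
util = lambda u, ap: users[u] * bandwidth[ap]
print(greedy_association(users, aps, util))
```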
Video watermarking embeds a message into a cover video in an imperceptible manner, such that the message can be retrieved even if the video undergoes certain modifications or distortions. Traditional watermarking methods are often manually designed for particular types of distortions and thus cannot simultaneously handle a broad spectrum of distortions. To this end, we propose a robust deep learning-based solution for video watermarking that is end-to-end trainable. Our model consists of a novel multiscale design in which the watermarks are distributed across multiple spatial-temporal scales. It gains robustness against various distortions through a differentiable distortion layer, whereas non-differentiable distortions, such as popular video compression standards, are modeled by a differentiable proxy. Extensive evaluations on a wide variety of distortions show that our method outperforms traditional video watermarking methods as well as deep image watermarking models by a large margin. We further demonstrate the practicality of our method on a realistic video-editing application.
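The following toy PyTorch sketch illustrates the general idea of training a watermark encoder and decoder end to end through a differentiable distortion layer. The tiny architecture and the Gaussian-noise distortion are illustrative assumptions; the paper's model is multiscale and spatio-temporal and uses a learned proxy for non-differentiable codecs.

```python
# Toy end-to-end watermarking sketch: encoder embeds bits, a differentiable
# distortion layer perturbs the frame, and a decoder recovers the bits.
import torch
import torch.nn as nn

class TinyWatermarker(nn.Module):
    def __init__(self, msg_bits=8, frame_pixels=32 * 32):
        super().__init__()
        self.encoder = nn.Linear(frame_pixels + msg_bits, frame_pixels)
        self.decoder = nn.Linear(frame_pixels, msg_bits)

    def forward(self, frame, message, noise_std=0.1):
        x = torch.cat([frame, message], dim=1)
        watermarked = frame + 0.01 * torch.tanh(self.encoder(x))  # small, imperceptible residual
        distorted = watermarked + noise_std * torch.randn_like(watermarked)  # differentiable distortion
        return self.decoder(distorted)

model = TinyWatermarker()
frame = torch.rand(4, 32 * 32)                    # batch of flattened frames
message = torch.randint(0, 2, (4, 8)).float()     # 8-bit payload per frame
logits = model(frame, message)
loss = nn.BCEWithLogitsLoss()(logits, message)    # train encoder and decoder jointly
loss.backward()                                   # gradients flow through the distortion
```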
In this paper, we formulate the collaborative multi-user wireless video transmission problem as a multi-user Markov decision process (MUMDP) by explicitly considering the users' heterogeneous video traffic characteristics, time-varying network conditions, and the resulting dynamic coupling between the wireless users. These environment dynamics are often ignored in existing multi-user video transmission solutions. To comply with the decentralized nature of wireless networks, we propose to decompose the MUMDP into local MDPs using Lagrangian relaxation. Unlike conventional multi-user video transmission solutions stemming from the network utility maximization framework, the proposed decomposition enables each wireless user to individually solve its own dynamic cross-layer optimization (i.e., the local MDP) and the network coordinator to update the Lagrangian multipliers (i.e., resource prices) based on not only the current but also the future resource needs of all users, such that the long-term video quality of all users is maximized. However, solving the MUMDP requires statistical knowledge of the experienced environment dynamics, which is often unavailable before transmission time. To overcome this obstacle, we then propose a novel online learning algorithm, which allows the wireless users to update their policies in multiple states during one time slot. This differs from conventional learning solutions, which often update one state per time slot. The proposed learning algorithm can significantly improve learning performance, thereby dramatically improving the video quality experienced by the wireless users over time. Our simulation results demonstrate the efficiency of the proposed MUMDP framework compared to conventional multi-user video transmission solutions.
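A minimal sketch of the decomposition idea, under heavy simplification: each user performs local value updates on a toy MDP with a resource price folded into its reward, updating several states per time slot, while a coordinator adjusts the price by dual ascent toward a resource budget. The toy MDP, learning rates, and update schedule are assumptions for illustration, not the paper's algorithm.

```python
# Sketch: local MDP updates with a Lagrangian resource price, plus a price update.
import random

STATES, ACTIONS = range(3), range(2)                # toy local MDP per user
def reward(s, a): return float(s) - 0.5 * a         # QoE gain of action a in state s
def resource(a): return 1.0 + a                     # bandwidth the action consumes
def next_state(s, a): return random.choice(STATES)  # toy transition

def local_update(Q, lam, alpha=0.1, gamma=0.9):
    # Update several states in one time slot (virtual experience),
    # instead of only the state actually visited.
    for s in STATES:
        for a in ACTIONS:
            target = reward(s, a) - lam * resource(a) + gamma * max(Q[next_state(s, a)])
            Q[s][a] += alpha * (target - Q[s][a])

def price_update(lam, used, budget, step=0.05):
    return max(0.0, lam + step * (used - budget))    # dual (subgradient) ascent

Q, lam = [[0.0, 0.0] for _ in STATES], 0.0
for t in range(100):
    local_update(Q, lam)
    greedy_a = max(ACTIONS, key=lambda a: Q[1][a])   # user's current policy in state 1
    lam = price_update(lam, used=resource(greedy_a), budget=1.2)
print(Q, lam)
```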
Soft-cast, a cross-layer design for wireless video transmission, was proposed to overcome the drawbacks of digital video transmission, such as the threshold effect. This paper presents a transmission framework achieving the same effect. Specifically, at the encoder, we carry out power allocation on the transformed coefficients and encode the coefficients based on a new formulation of the power-distortion relationship. At the decoder, the LLSE estimation process is also improved. Combined with the inverse nonlinear transform, the DCT coefficients can be recovered from the scaling factors, the LLSE estimator coefficients, and the metadata. Experimental results show that our proposed framework outperforms Soft-cast by 1.08 dB in PSNR, and the MSSIM gain reaches 2.35%, when transmitting under the same bandwidth and total power.
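For context, here is a minimal NumPy sketch of the classic Soft-cast baseline that the abstract improves upon: power-scaled DCT coefficient chunks at the sender and an LLSE estimate at the receiver. The paper's nonlinear transform and revised power-distortion formulation are not reproduced here; all constants are illustrative.

```python
# Soft-cast-style analog transmission of DCT coefficient chunks (baseline sketch).
import numpy as np

rng = np.random.default_rng(0)
lam = np.array([9.0, 4.0, 1.0, 0.25])   # per-chunk variances of DCT coefficients
P_total = 4.0                           # transmit power budget
sigma2 = 0.1                            # channel noise variance

# Classic Soft-cast power allocation: g_i proportional to lam_i^(-1/4),
# scaled so that sum(g_i^2 * lam_i) equals the power budget.
g = lam ** -0.25
g *= np.sqrt(P_total / np.sum(g**2 * lam))

x = rng.normal(0.0, np.sqrt(lam))                                  # one coefficient per chunk
y = g * x + rng.normal(0.0, np.sqrt(sigma2), size=x.shape)         # analog transmission

# LLSE estimator at the decoder, using the scaling factors and chunk variances.
x_hat = (g * lam / (g**2 * lam + sigma2)) * y
print(np.round(x, 3), np.round(x_hat, 3))
```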
Wei Quan, Yuxuan Pan, Bin Xiang (2020)
With the merit of containing full panoramic content from one camera, virtual reality (VR) and 360-degree videos have attracted more and more attention in the field of industrial cloud manufacturing and training. The Industrial Internet of Things (IoT), where many VR terminals need to be online at the same time, can hardly guarantee VR's bandwidth requirements. However, by making use of users' quality of experience (QoE) awareness factors, including the relative moving speed and the depth difference between the viewpoint and other content, bandwidth consumption can be reduced. In this paper, we propose OFB-VR (Optical Flow Based VR), an interactive method of VR streaming that makes use of VR users' QoE awareness to ease the bandwidth pressure. The Just-Noticeable Difference through Optical Flow Estimation (JND-OFE) is explored to quantify users' awareness of quality distortion in 360-degree videos. Accordingly, a novel 360-degree video QoE metric based on PSNR and JND-OFE (PSNR-OF) is proposed. With the help of PSNR-OF, OFB-VR adopts a versatile-size tiling scheme to lessen the tiling overhead. A Reinforcement Learning (RL) method is implemented to make use of historical data to perform Adaptive BitRate (ABR) selection. For evaluation, we take two prior VR streaming schemes, Pano and Plato, as baselines. Extensive evaluations show that our system increases the mean PSNR-OF score by 9.5-15.8% while maintaining the same rebuffer ratio compared with Pano and Plato on a fluctuating LTE bandwidth dataset. The results show that OFB-VR is a promising prototype for real interactive industrial VR. A prototype of OFB-VR can be found at https://github.com/buptexplorers/OFB-VR.
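As a loose illustration of a JND-weighted quality metric, the sketch below down-weights pixel errors in fast-moving regions (where distortion is assumed to be less noticeable) before computing a PSNR-style score. The weighting function and parameters are purely illustrative assumptions and are not the PSNR-OF definition from the paper.

```python
# Illustrative JND-weighted PSNR using optical-flow magnitude as a tolerance cue.
import numpy as np

def jnd_weighted_psnr(ref, dist, flow_mag, max_val=255.0, alpha=0.5):
    """ref, dist: HxW frames; flow_mag: HxW optical-flow magnitude per pixel."""
    weight = 1.0 / (1.0 + alpha * flow_mag)        # fast-moving regions tolerate more error
    mse = np.sum(weight * (ref - dist) ** 2) / np.sum(weight)
    return 10.0 * np.log10(max_val ** 2 / max(mse, 1e-12))

rng = np.random.default_rng(1)
ref = rng.uniform(0, 255, (64, 64))                # reference frame
dist = ref + rng.normal(0, 5, (64, 64))            # distorted frame
flow = rng.uniform(0, 10, (64, 64))                # pretend optical-flow magnitudes
print(round(jnd_weighted_psnr(ref, dist, flow), 2))
```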
