
Learned Fast HEVC Intra Coding

Posted by: Jun Shi
Publication date: 2019
Research field: Electronic engineering
Paper language: English





In High Efficiency Video Coding (HEVC), excellent rate-distortion (RD) performance is achieved in part by having a flexible quadtree coding unit (CU) partition and a large number of intra-prediction modes. This excellent RD performance comes at the expense of much higher computational complexity. In this paper, we propose a learned fast HEVC intra coding (LFHI) framework that takes the comprehensive factors of fast intra coding into account to reach an improved, configurable tradeoff between coding performance and computational complexity. First, we design a low-complexity shallow asymmetric-kernel CNN (AK-CNN) to efficiently extract the local directional texture features of each block for both fast CU partition and fast intra-mode decision. Second, we introduce the concept of the minimum number of RDO candidates (MNRC) into fast mode decision, which uses AK-CNN to predict the minimum number of best candidates for RDO calculation to further reduce the computation of intra-mode selection. Third, an evolution-optimized threshold decision (EOTD) scheme is designed to achieve configurable complexity-efficiency tradeoffs. Finally, we propose an interpolation-based prediction scheme that allows our framework to be generalized to all quantization parameters (QPs) without training the network on each QP. The experimental results demonstrate that the LFHI framework has a high degree of parallelism and attains a much better complexity-efficiency tradeoff, achieving up to 75.2% intra-mode encoding complexity reduction with negligible rate-distortion degradation, outperforming existing fast intra-coding schemes.
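A minimal PyTorch sketch of the asymmetric-kernel idea behind AK-CNN is given below: 1x5 and 5x1 convolutions emphasize horizontal and vertical texture before a shallow classifier head. The layer widths, kernel lengths, and the binary split/no-split output are illustrative assumptions, not the exact architecture reported in the paper.

```python
# Sketch: asymmetric-kernel CNN for a block-level split decision (hypothetical sizes).
import torch
import torch.nn as nn

class AKCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Small symmetric stem over the luma block.
        self.stem = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        # Asymmetric branches: 1x5 and 5x1 kernels favor horizontal / vertical texture.
        self.branch_h = nn.Conv2d(16, 16, kernel_size=(1, 5), padding=(0, 2))
        self.branch_v = nn.Conv2d(16, 16, kernel_size=(5, 1), padding=(2, 0))
        self.pool = nn.AdaptiveAvgPool2d(4)
        # Shallow head: e.g. split / no-split logits (could also rank RDO candidates).
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        x = torch.relu(self.stem(x))
        h = torch.relu(self.branch_h(x))
        v = torch.relu(self.branch_v(x))
        return self.head(self.pool(torch.cat([h, v], dim=1)))

# Usage: one 32x32 luma block normalized to [0, 1].
logits = AKCNN()(torch.rand(1, 1, 32, 32))
```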




Read also

Ming Lu, Ming Cheng, Yiling Xu (2019)
Networked video applications, e.g., video conferencing, often suffer from poor visual quality due to unexpected network fluctuation and limited bandwidth. In this paper, we have developed a Quality Enhancement Network (QENet) to reduce video compression artifacts, leveraging spatial priors generated by multi-scale convolutions and temporal priors from warped temporal predictions applied in a recurrent fashion. We have integrated this QENet as a standalone post-processing subsystem for the High Efficiency Video Coding (HEVC) compliant decoder. Experimental results show that our QENet achieves state-of-the-art performance against the default in-loop filters in HEVC and other deep learning-based methods, with noticeable objective gains in Peak Signal-to-Noise Ratio (PSNR) and subjective gains visually.
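A minimal sketch of the spatial/temporal-prior idea described above, assuming a precomputed dense motion field supplies the warped temporal prediction; the multi-scale branch widths are illustrative and are not the QENet architecture itself.

```python
# Sketch: recurrent quality enhancement with a warped temporal prior (hypothetical widths).
import torch
import torch.nn as nn
import torch.nn.functional as F

def warp(frame, flow):
    # Backward-warp frame (N,1,H,W) with a dense pixel-displacement field flow (N,2,H,W).
    n, _, h, w = frame.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + flow
    grid = torch.stack((2 * coords[:, 0] / (w - 1) - 1,   # normalize x to [-1, 1]
                        2 * coords[:, 1] / (h - 1) - 1),  # normalize y to [-1, 1]
                       dim=-1)
    return F.grid_sample(frame, grid, align_corners=True)

class QEBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # Multi-scale spatial branches (3x3 and 5x5 receptive fields).
        self.b3 = nn.Conv2d(2, ch, 3, padding=1)
        self.b5 = nn.Conv2d(2, ch, 5, padding=2)
        self.fuse = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(2 * ch, ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, decoded, prev_enhanced, flow):
        temporal = warp(prev_enhanced, flow)          # temporal prior from the last output
        x = torch.cat([decoded, temporal], dim=1)
        feat = torch.cat([self.b3(x), self.b5(x)], dim=1)
        return decoded + self.fuse(feat)              # residual enhancement

# Usage: enhance one decoded luma frame; zero flow means no motion.
dec, prev = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
enhanced = QEBlock()(dec, prev, torch.zeros(1, 2, 64, 64))
```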
While learned video codecs have demonstrated great promise, they have yet to achieve sufficient efficiency for practical deployment. In this work, we propose several novel ideas for learned video compression which allow for improved performance for the low-latency mode (I- and P-frames only) along with a considerable increase in computational efficiency. In this setting, for natural videos our approach compares favorably across the entire R-D curve under metrics PSNR, MS-SSIM and VMAF against all mainstream video standards (H.264, H.265, AV1) and all ML codecs. At the same time, our approach runs at least 5x faster and has fewer parameters than all ML codecs which report these figures. Our contributions include a flexible-rate framework allowing a single model to cover a large and dense range of bitrates, at a negligible increase in computation and parameter count; an efficient backbone optimized for ML-based codecs; and a novel in-loop flow prediction scheme which leverages prior information towards more efficient compression. We benchmark our method, which we call ELF-VC (Efficient, Learned and Flexible Video Coding) on popular video test sets UVG and MCL-JCV under metrics PSNR, MS-SSIM and VMAF. For example, on UVG under PSNR, it reduces the BD-rate by 44% against H.264, 26% against H.265, 15% against AV1, and 35% against the current best ML codec. At the same time, on an NVIDIA Titan V GPU our approach encodes/decodes VGA at 49/91 FPS, HD 720 at 19/35 FPS, and HD 1080 at 10/18 FPS.
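As one illustration of a flexible-rate bottleneck, the sketch below scales the latent with a learnable per-level gain before quantization so that a single model can serve several bitrates. This is a generic mechanism shown for reference only and is not claimed to be the scheme used in ELF-VC.

```python
# Sketch: per-level gain vectors make one model cover many bitrates (generic mechanism).
import torch
import torch.nn as nn

class FlexibleRateBottleneck(nn.Module):
    def __init__(self, latent_channels=192, num_levels=8):
        super().__init__()
        # One learnable gain / inverse-gain vector per rate level.
        self.gain = nn.Parameter(torch.ones(num_levels, latent_channels))
        self.inv_gain = nn.Parameter(torch.ones(num_levels, latent_channels))

    def forward(self, y, level):
        g = self.gain[level].view(1, -1, 1, 1)
        ig = self.inv_gain[level].view(1, -1, 1, 1)
        y_scaled = y * g
        # Straight-through rounding stands in for quantization + entropy coding.
        y_hat = y_scaled + (torch.round(y_scaled) - y_scaled).detach()
        return y_hat * ig                              # rescaled latent for the decoder

# Usage: a latent from a hypothetical analysis transform, coded at rate level 3.
y_hat = FlexibleRateBottleneck()(torch.randn(1, 192, 16, 16), level=3)
```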
Today, according to the Cisco Annual Internet Report (2018-2023), the fastest-growing category of Internet traffic is machine-to-machine communication. In particular, machine-to-machine communication of images and videos represents a new challenge and opens up new perspectives in the context of data compression. One possible solution approach consists of adapting current human-targeted image and video coding standards to the use case of machine consumption. Another approach consists of developing completely new compression paradigms and architectures for machine-to-machine communications. In this paper, we focus on image compression and present an inference-time content-adaptive finetuning scheme that optimizes the latent representation of an end-to-end learned image codec, aimed at improving the compression efficiency for machine-consumption. The conducted experiments show that our online finetuning brings an average bitrate saving (BD-rate) of -3.66% with respect to our pretrained image codec. In particular, at low bitrate points, our proposed method results in a significant bitrate saving of -9.85%. Overall, our pretrained-and-then-finetuned system achieves -30.54% BD-rate over the state-of-the-art image/video codec Versatile Video Coding (VVC).
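A minimal sketch of inference-time latent finetuning follows, assuming a pretrained codec that exposes encode/decode and a differentiable rate estimate, plus a task network standing in for the machine consumer. All of these interfaces (codec.encode, codec.decode, codec.rate_estimate, task_net) are hypothetical placeholders, not the authors' API.

```python
# Sketch: refine the latent of a pretrained codec at inference time for a machine task.
import torch
import torch.nn.functional as F

def finetune_latent(codec, task_net, image, steps=100, lr=1e-2, lmbda=0.01):
    # codec.encode / codec.decode / codec.rate_estimate and task_net are placeholders.
    z = codec.encode(image).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    with torch.no_grad():
        target = task_net(image)                      # task output on the pristine input
    for _ in range(steps):
        recon = codec.decode(z)
        # Rate term plus a "distortion" measured in the task's output space.
        loss = codec.rate_estimate(z) + lmbda * F.mse_loss(task_net(recon), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return z.detach()                                 # then quantize and entropy-code
```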
We propose an intra frame predictive strategy for compression of 3D point cloud attributes. Our approach is integrated with the region adaptive graph Fourier transform (RAGFT), a multi-resolution transform formed by a composition of localized block transforms, which produces a set of low pass (approximation) and high pass (detail) coefficients at multiple resolutions. Since the transform operations are spatially localized, RAGFT coefficients at a given resolution may still be correlated. To exploit this phenomenon, we propose an intra-prediction strategy, in which decoded approximation coefficients are used to predict uncoded detail coefficients. The prediction residuals are then quantized and entropy coded. For the 8i dataset, we obtain gains of up to 0.5 dB compared to intra-predicted point cloud compression based on the region adaptive Haar transform (RAHT).
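The predict-then-code-the-residual idea can be illustrated with a toy NumPy sketch: a hypothetical linear predictor maps decoded approximation coefficients to estimates of the detail coefficients, and only the quantized residual would be entropy coded. The predictor matrix and uniform quantizer are placeholders, not the RAGFT-specific design.

```python
# Sketch: predict detail coefficients from decoded approximation ones, code the residual.
import numpy as np

def encode_details(approx_dec, detail, P, step=4.0):
    pred = P @ approx_dec                         # hypothetical linear predictor
    residual = detail - pred
    return np.round(residual / step).astype(int)  # quantized residual for the entropy coder

def decode_details(approx_dec, q, P, step=4.0):
    return P @ approx_dec + q * step              # decoder-side reconstruction

# Toy usage with random coefficients and a random predictor matrix.
rng = np.random.default_rng(0)
approx, detail = rng.normal(size=8), rng.normal(size=16)
P = rng.normal(scale=0.1, size=(16, 8))
reconstructed = decode_details(approx, encode_details(approx, detail, P), P)
```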
This paper addresses neural network based post-processing for the state-of-the-art video coding standard, High Efficiency Video Coding (HEVC). We first propose a partition-aware Convolutional Neural Network (CNN) that utilizes the partition information produced by the encoder to assist in the post-processing. In contrast to existing CNN-based approaches, which only take the decoded frame as input, the proposed approach considers the coding unit (CU) size information and combines it with the distorted decoded frame such that the artifacts introduced by HEVC are efficiently reduced. We further introduce an adaptive-switching neural network (ASN) that consists of multiple independent CNNs to adaptively handle the variations in content and distortion within compressed-video frames, providing further reduction in visual artifacts. Additionally, an iterative training procedure is proposed to train these independent CNNs attentively on different local patch-wise classes. Experiments on benchmark sequences demonstrate the effectiveness of our partition-aware and adaptive-switching neural networks. The source code can be found at http://min.sjtu.edu.cn/lwydemo/HEVCpostprocessing.html.
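A minimal sketch of the partition-aware input design, assuming the CU partition information is rasterized into a mask of the same spatial size as the decoded frame and fed as an extra channel; the network depth and widths are illustrative, not the paper's exact model.

```python
# Sketch: CU-partition mask as an extra input channel for artifact reduction (hypothetical sizes).
import torch
import torch.nn as nn

class PartitionAwareCNN(nn.Module):
    def __init__(self, channels=32, num_layers=6):
        super().__init__()
        layers = [nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_layers - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, decoded, cu_mask):
        # The network predicts the artifact to subtract from the decoded frame.
        x = torch.cat([decoded, cu_mask], dim=1)
        return decoded - self.body(x)

# Usage: decoded luma frame and a rasterized CU-size mask, both normalized to [0, 1].
frame, mask = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
restored = PartitionAwareCNN()(frame, mask)
```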