Towards Retina-Quality VR Video Streaming: 15ms Could Save You 80% of Your Bandwidth

 Added by Luke Hsiao
Publication date: 2021
Research language: English





Virtual reality systems today cannot yet stream immersive, retina-quality virtual reality video over a network. One of the greatest challenges to this goal is the sheer data rates required to transmit retina-quality video frames at high resolutions and frame rates. Recent work has leveraged the decay of visual acuity in human perception in novel gaze-contingent video compression techniques. In this paper, we show that reducing the motion-to-photon latency of a system itself is a key method for improving the compression ratio of gaze-contingent compression. Our key finding is that a client and streaming server system with sub-15ms latency can achieve 5x better compression than traditional techniques while also using simpler software algorithms than previous work.
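
The intuition behind this finding can be sketched with a toy calculation: the region that must be streamed at full quality is roughly the fovea plus however far the gaze could travel during the motion-to-photon window, so lowering latency shrinks the expensive region quadratically in area. Below is a minimal Python sketch of that relationship; the saccade velocity, foveal radius, field of view, and peripheral bitrate fraction are assumed placeholder values, not figures from the paper.

```python
import math

# Illustrative (assumed) constants -- not values from the paper.
SACCADE_DEG_PER_S = 300.0          # assumed peak eye velocity during a saccade
FOVEA_RADIUS_DEG = 5.0             # assumed radius rendered at full quality
DISPLAY_FOV_DEG = 110.0            # assumed horizontal field of view
PERIPHERY_BITRATE_FRACTION = 0.1   # assumed bitrate of the low-acuity region vs. full quality

def high_quality_radius_deg(latency_s: float) -> float:
    """Radius that must stay at full quality: the foveal region plus the
    maximum angular distance the gaze could move within the latency window."""
    return FOVEA_RADIUS_DEG + SACCADE_DEG_PER_S * latency_s

def relative_bandwidth(latency_s: float) -> float:
    """Rough bandwidth relative to sending the whole frame at full quality,
    treating the high-quality region as a disc on a flat field of view."""
    r = min(high_quality_radius_deg(latency_s), DISPLAY_FOV_DEG / 2)
    full_area = math.pi * (DISPLAY_FOV_DEG / 2) ** 2
    hq_area = math.pi * r ** 2
    return (hq_area + (full_area - hq_area) * PERIPHERY_BITRATE_FRACTION) / full_area

for latency_ms in (15, 50, 100):
    print(f"{latency_ms:>3} ms latency -> ~{relative_bandwidth(latency_ms / 1000):.0%} "
          "of full-quality bandwidth")
```

Even with these rough numbers, the qualitative trend matches the paper's claim: a sub-15ms loop keeps the full-quality disc small enough that most of the frame can be transmitted at a heavily reduced rate.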

Related research

In this paper, we study the optimal wireless streaming of a multi-quality tiled 360 virtual reality (VR) video from a multi-antenna server to multiple single-antenna users in a multiple-input multiple-output (MIMO)-orthogonal frequency division multiple access (OFDMA) system. In the scenario without user transcoding, we jointly optimize beamforming and subcarrier, transmission power, and rate allocation to minimize the total transmission power. This is a challenging mixed discrete-continuous optimization problem. We obtain a globally optimal solution for small multicast groups, an asymptotically optimal solution for a large antenna array, and a suboptimal solution for the general case. In the scenario with user transcoding, we jointly optimize the quality level selection, beamforming, and subcarrier, transmission power, and rate allocation to minimize the weighted sum of the average total transmission power and the transcoding power. This is a two-timescale mixed discrete-continuous optimization problem, which is even more challenging than the problem for the scenario without user transcoding. We obtain a globally optimal solution for small multicast groups, an asymptotically optimal solution for a large antenna array, and a low-complexity suboptimal solution for the general case. Finally, numerical results demonstrate the significant gains of the proposed solutions over existing solutions.
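
For readers unfamiliar with this class of problem, a schematic version of the power-minimization formulation, in assumed notation rather than the paper's exact model, looks like the following; the binary subcarrier assignments together with the continuous beamformers are what make it a mixed discrete-continuous program.

```latex
% Schematic formulation in assumed notation (not the paper's exact model):
%   x_{n,g} \in \{0,1\} : subcarrier n assigned to multicast group g
%   \mathbf{w}_{n,g}    : beamformer for group g on subcarrier n
%   \mathbf{h}_{n,k}    : channel of user k;  R_g : rate required by group g
\begin{align}
  \min_{\{\mathbf{w}_{n,g}\},\,\{x_{n,g}\}} \quad
    & \sum_{n}\sum_{g} x_{n,g}\, \lVert \mathbf{w}_{n,g} \rVert^{2} \\
  \text{s.t.} \quad
    & \sum_{n} x_{n,g}
      \log_{2}\!\left(1 + \frac{\lvert \mathbf{h}_{n,k}^{H}\mathbf{w}_{n,g}\rvert^{2}}{\sigma^{2}}\right)
      \ge R_{g}, \qquad \forall k \in \mathcal{K}_{g},\ \forall g, \\
    & \sum_{g} x_{n,g} \le 1, \qquad x_{n,g} \in \{0,1\}, \qquad \forall n.
\end{align}
```
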
Inferring the quality of streaming video applications is important for Internet service providers, but the fact that most video streams are encrypted makes it difficult to do so. We develop models that infer quality metrics (i.e., startup delay and resolution) for encrypted streaming video services. Our paper builds on previous work, but extends it in several ways. First, the model works in deployment settings where the video sessions and segments must be identified from a mix of traffic and the time precision of the collected traffic statistics is more coarse (e.g., due to aggregation). Second, we develop a single composite model that works for a range of different services (i.e., Netflix, YouTube, Amazon, and Twitch), as opposed to just a single service. Third, unlike many previous models, the model performs predictions at finer granularity (e.g., the precise startup delay instead of just detecting short versus long delays), allowing better conclusions to be drawn about the ongoing streaming quality. Fourth, we demonstrate that the model is practical through a 16-month deployment in 66 homes and provide new insights about the relationships between Internet speed and the quality of the corresponding video streams for a variety of services; we find that higher speeds provide only minimal improvements to startup delay and resolution.
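
As a rough illustration of what such an inference model can look like, here is a minimal sketch assuming per-session features aggregated from coarse traffic counters and off-the-shelf scikit-learn estimators; the feature names, model types, and service encoding are assumptions made for illustration, not the models used in the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Assumed per-session features derived from coarse traffic counters
# (e.g., 10-second bins of downstream bytes/packets); placeholder names only.
FEATURES = ["down_bytes_mean", "down_bytes_std", "down_pkts_mean",
            "up_pkts_mean", "flow_count", "service_id"]

def train_composite_models(X, startup_delay_s, resolution_label):
    """Train one regressor for startup delay (seconds) and one classifier for
    resolution; a single pair of models covers all services, since the service
    identity is just another feature."""
    delay_model = RandomForestRegressor(n_estimators=100).fit(X, startup_delay_s)
    res_model = RandomForestClassifier(n_estimators=100).fit(X, resolution_label)
    return delay_model, res_model

# Toy usage with random data, just to show the shapes and targets involved.
rng = np.random.default_rng(0)
X = rng.random((200, len(FEATURES)))
delay = rng.uniform(0.5, 10.0, size=200)      # fine-grained target, not a coarse class
resolution = rng.choice(["480p", "720p", "1080p"], size=200)
delay_model, res_model = train_composite_models(X, delay, resolution)
print(delay_model.predict(X[:3]), res_model.predict(X[:3]))
```
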
We present a method to edit target portrait footage by taking a sequence of audio as input to synthesize a photo-realistic video. This method is unique because it is highly dynamic: it does not assume a person-specific rendering network, yet it is capable of translating arbitrary source audio into arbitrary video output. Instead of learning a highly heterogeneous and nonlinear mapping from audio to video directly, we first factorize each target video frame into orthogonal parameter spaces, i.e., expression, geometry, and pose, via monocular 3D face reconstruction. Next, a recurrent network is introduced to translate source audio into expression parameters that are primarily related to the audio content. The audio-translated expression parameters are then used to synthesize a photo-realistic human subject in each video frame, with the movement of the mouth regions precisely mapped to the source audio. The geometry and pose parameters of the target human portrait are retained, therefore preserving the context of the original video footage. Finally, we introduce a novel video rendering network and a dynamic programming method to construct a temporally coherent and photo-realistic video. Extensive experiments demonstrate the superiority of our method over existing approaches. Our method is end-to-end learnable and robust to voice variations in the source audio.
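
To make the audio-to-expression stage concrete, the following is a minimal PyTorch sketch of a recurrent mapping from per-frame audio features to expression coefficients; the feature and parameter dimensions are assumed placeholders, and the network is deliberately simplified relative to whatever architecture the authors actually use.

```python
import torch
import torch.nn as nn

class AudioToExpression(nn.Module):
    """Toy recurrent mapping from per-frame audio features to 3DMM-style
    expression coefficients; all sizes are illustrative assumptions."""
    def __init__(self, audio_dim: int = 29, hidden: int = 128, expr_dim: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(audio_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, expr_dim)

    def forward(self, audio_feats: torch.Tensor) -> torch.Tensor:
        # audio_feats: (batch, frames, audio_dim) -> (batch, frames, expr_dim)
        h, _ = self.rnn(audio_feats)
        return self.head(h)

model = AudioToExpression()
dummy_audio = torch.randn(1, 100, 29)   # 100 video frames' worth of audio features
expr_params = model(dummy_audio)        # per-frame expression parameters
print(expr_params.shape)                # torch.Size([1, 100, 64])
```
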
Despite the growing popularity of video streaming over the Internet, problems such as re-buffering and high startup latency continue to plague users. In this paper, we present an end-to-end characterization of Yahoo's video streaming service, analyzing over 500 million video chunks downloaded over a two-week period. We gain unique visibility into the causes of performance degradation by instrumenting both the CDN server and the client player at the chunk level, while also collecting frequent snapshots of TCP variables from the server network stack. We uncover a range of performance issues, including an asynchronous disk-read timer and cache misses at the server, high latency and latency variability in the network, and buffering delays and dropped frames at the client. Looking across chunks in the same session, or destined to the same IP prefix, we see how some performance problems are relatively persistent, depending on the video's popularity, the distance between the client and server, and the client's operating system, browser, and Flash runtime.
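
Instrumenting the server side in this way relies on reading kernel TCP state alongside application-level chunk logs. On Linux this can be done with the TCP_INFO socket option; the sketch below (Python, Linux-only) extracts just the RTT, RTT variance, and congestion window, and the struct offsets and polling interval are assumptions of this sketch rather than details taken from the paper.

```python
import socket
import struct
import time

def tcp_snapshot(sock: socket.socket) -> dict:
    """Read a snapshot of kernel TCP state for a connected socket (Linux only).
    The offsets follow the long-stable start of struct tcp_info, but treat the
    field positions as an assumption of this sketch."""
    info = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_INFO, 192)
    rtt_us, rttvar_us = struct.unpack_from("II", info, 68)
    snd_cwnd, = struct.unpack_from("I", info, 80)
    return {"rtt_ms": rtt_us / 1000, "rttvar_ms": rttvar_us / 1000,
            "cwnd_segments": snd_cwnd}

# Example: poll a connection once per second while a chunk is being served.
# (The polling interval and target host are illustrative choices.)
if __name__ == "__main__":
    s = socket.create_connection(("example.com", 80))
    for _ in range(3):
        print(tcp_snapshot(s))
        time.sleep(1.0)
    s.close()
```
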
Many of the video streaming applications in today's Internet involve the distribution of content from a CDN source to a large population of interested clients. However, widespread support of IP multicast is unavailable due to technical and economic reasons, leaving the floor to application-layer multicast, which introduces excessive delays for the clients and increased traffic load for the network. This paper introduces an SDN-based framework that allows the network controller to not only deploy IP multicast between a source and subscribers, but also control, via a simple northbound interface, the distributed set of sources where multiple-description coded (MDC) video content is available. We observe that for medium to heavy network loads, relative to the state of the art, the SDN-based streaming multicast video framework increases the PSNR of the received video significantly, from a level that is practically unwatchable to one that has good quality.
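
The controller's choice of which distributed sources should serve which MDC descriptions can be pictured with a toy selection routine; the greedy rule, the per-description rate, and the source names below are purely illustrative assumptions, not the framework's actual algorithm or northbound API.

```python
# Toy controller-side selection: given per-source spare path capacity, decide
# which MDC descriptions each source should stream toward a multicast group.
# The greedy rule and the 1 Mbit/s per-description rate are assumptions.
def assign_descriptions(sources: dict[str, float], num_descriptions: int) -> dict[str, list[int]]:
    """sources maps source id -> estimated spare capacity on its path (Mbit/s)."""
    remaining = dict(sources)
    assignment = {s: [] for s in sources}
    for d in range(num_descriptions):
        best = max(remaining, key=remaining.get)   # source with the most spare capacity
        assignment[best].append(d)
        remaining[best] -= 1.0                     # assumed per-description rate
    return assignment

print(assign_descriptions({"cdn-a": 3.0, "cdn-b": 1.5}, num_descriptions=4))
```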
