Performance Characterization of a Commercial Video Streaming Service

Posted by: Mojgan Ghasemi
Publication date: 2016
Research field: Informatics Engineering
Paper language: English





Despite the growing popularity of video streaming over the Internet, problems such as re-buffering and high startup latency continue to plague users. In this paper, we present an end-to-end characterization of Yahoo's video streaming service, analyzing over 500 million video chunks downloaded over a two-week period. We gain unique visibility into the causes of performance degradation by instrumenting both the CDN server and the client player at the chunk level, while also collecting frequent snapshots of TCP variables from the server network stack. We uncover a range of performance issues, including an asynchronous disk-read timer and cache misses at the server, high latency and latency variability in the network, and buffering delays and dropped frames at the client. Looking across chunks in the same session, or destined to the same IP prefix, we see how some performance problems are relatively persistent, depending on the video's popularity, the distance between the client and server, and the client's operating system, browser, and Flash runtime.
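The chunk-level instrumentation described above pairs player and CDN-server logs with snapshots of kernel TCP state. The sketch below shows the general mechanism on Linux, reading struct tcp_info through the standard TCP_INFO socket option; the unpacked field offsets and the per-chunk logging wrapper are illustrative assumptions, not the paper's actual instrumentation code.

```python
import socket
import struct

# Snapshot kernel TCP state around each chunk write via the Linux TCP_INFO
# socket option. Field offsets follow struct tcp_info in
# /usr/include/linux/tcp.h on a modern kernel; older kernels may return a
# shorter buffer. Illustrative sketch only.
TCP_INFO = getattr(socket, "TCP_INFO", 11)

def snapshot_tcp_info(sock):
    """Return selected TCP variables for one established connection."""
    raw = sock.getsockopt(socket.IPPROTO_TCP, TCP_INFO, 104)
    f = struct.unpack("=8B24I", raw[:104])  # 8 x u8 header, then u32 counters
    return {
        "retransmits": f[2],   # consecutive retransmits
        "rto_us": f[8],        # retransmission timeout
        "lost": f[14],         # packets presumed lost
        "rtt_us": f[23],       # smoothed round-trip time
        "rttvar_us": f[24],    # RTT variance
        "snd_cwnd": f[26],     # congestion window, in packets
    }

def send_chunk(sock, chunk, log):
    """Send one video chunk and log TCP state before and after."""
    before = snapshot_tcp_info(sock)
    sock.sendall(chunk)
    after = snapshot_tcp_info(sock)
    log.append({"bytes": len(chunk), "before": before, "after": after})
```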




Read also

We conduct, to our knowledge, a first measurement study of commercial 5G performance on smartphones by closely examining 5G networks of three carriers (two mmWave carriers, one mid-band carrier) in three U.S. cities. We conduct extensive field tests on 5G performance in diverse urban environments. We systematically analyze the handoff mechanisms in 5G and their impact on network performance. We explore the feasibility of using location and possibly other environmental information to predict the network performance. We also study the app performance (web browsing and HTTP download) over 5G. Our study consumes more than 15 TB of cellular data. Conducted when 5G just made its debut, it provides a baseline for studying how 5G performance evolves, and identifies key research directions on improving 5G users' experience in a cross-layer manner. We have released the data collected from our study (referred to as 5Gophers) at https://fivegophers.umn.edu/www20.
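One direction the study explores is predicting network performance from location. As a minimal sketch of that idea, the snippet below predicts downlink throughput at a new point as the average of the k nearest prior measurements; the planar distance approximation and the choice of k are illustrative assumptions, not the paper's method.

```python
import math

def dist_m(a, b):
    """Approximate planar distance in meters between two (lat, lon) points."""
    dlat = (a[0] - b[0]) * 111_000
    dlon = (a[1] - b[1]) * 111_000 * math.cos(math.radians(a[0]))
    return math.hypot(dlat, dlon)

def predict_throughput(samples, where, k=3):
    """samples: list of ((lat, lon), mbps) pairs from prior drive tests."""
    nearest = sorted(samples, key=lambda s: dist_m(s[0], where))[:k]
    return sum(mbps for _, mbps in nearest) / len(nearest)
```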
Many of the video streaming applications in today's Internet involve the distribution of content from a CDN source to a large population of interested clients. However, widespread support of IP multicast is unavailable due to technical and economical reasons, leaving the floor to application-layer multicast, which introduces excessive delays for the clients and increased traffic load for the network. This paper is concerned with the introduction of an SDN-based framework that allows the network controller to not only deploy IP multicast between a source and subscribers, but also control, via a simple northbound interface, the distributed set of sources where multiple-description coded (MDC) video content is available. We observe that for medium to heavy network loads, relative to the state-of-the-art, the SDN-based streaming multicast video framework increases the PSNR of the received video significantly, from a level that is practically unwatchable to one that has good quality.
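To make the northbound interface idea concrete, here is a minimal sketch of the kind of call an application could issue to the controller to set up multicast delivery of the MDC descriptions from their respective source replicas. The endpoint URL, payload schema, and response format are hypothetical placeholders, not the paper's actual API.

```python
import json
import urllib.request

# Hypothetical northbound endpoint of the SDN controller.
CONTROLLER = "http://controller.example:8080/streaming/v1/multicast"

def request_mdc_multicast(group_ip, descriptions):
    """descriptions: {description_id: source_ip} for the MDC sub-streams.
    Asks the controller to build IP multicast trees from each source."""
    payload = {"group": group_ip, "sources": descriptions}
    req = urllib.request.Request(
        CONTROLLER,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)  # e.g. per-description tree identifiers

# Example (hypothetical): two MDC descriptions served from two replicas.
# request_mdc_multicast("239.1.2.3", {"d0": "10.0.0.5", "d1": "10.0.1.9"})
```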
Intelligent and autonomous troubleshooting is a crucial enabler for the current 5G and future 6G networks. In this work, we develop a flexible architecture for detecting anomalies in adaptive video streaming comprising three main components: i) a pattern recognizer that learns a typical pattern for video quality from the client-side application traces of a specific reference video, ii) a predictor for mapping Radio Frequency (RF) performance indicators collected on the network side using user-based traces to a video quality measure, iii) an anomaly detector for comparing the predicted video quality pattern with the typical pattern to identify anomalies. We use real network traces (i.e., on-device measurements) collected in different geographical locations and at various times of day to train our machine learning models. We perform extensive numerical analysis to demonstrate key parameters impacting correct video quality prediction and anomaly detection. In particular, we show that the video playback time is the most crucial parameter determining the video quality, since buffering continues during playback, resulting in better video quality further into the playback. However, we also reveal that RF performance indicators characterizing the quality of the cellular connectivity are required to correctly predict QoE in anomalous cases. We then show that the mean maximum F1-score of our method is 77%, verifying the efficacy of our models. Our architecture is flexible and autonomous, so one can apply it to -- and operate with -- other user applications as long as the relevant user-based traces are available.
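The three components above map naturally onto a small pipeline. The skeleton below is a hedged rendering of that structure: the reference pattern is an average over reference-video traces, the predictor is a stand-in for the paper's trained model over RF indicators, and the detector flags sessions whose predicted pattern deviates from the typical one. The tolerance value is an illustrative assumption.

```python
import numpy as np

def typical_pattern(reference_traces):
    """Pattern recognizer: average quality-vs-playback-time over several
    reference-video runs (each trace is a same-length quality array)."""
    return np.mean(np.stack(reference_traces), axis=0)

def predict_quality(rf_trace, model):
    """Predictor: map network-side RF indicators (e.g. RSRP, SINR samples)
    to a video quality measure. 'model' stands in for the trained model."""
    return model.predict(rf_trace)

def is_anomalous(predicted, typical, tol=0.15):
    """Anomaly detector: flag a session whose predicted pattern deviates
    from the typical one by more than tol on average (normalized quality)."""
    return float(np.mean(np.abs(predicted - typical))) > tol
```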
Nowadays, Dynamic Adaptive Streaming over HTTP (DASH) is the most prevalent solution for multimedia streaming on the Internet and is responsible for the majority of global traffic. DASH uses adaptive bit rate (ABR) algorithms, which select the video quality considering performance metrics such as throughput and playout buffer level. Pensieve is a system that allows training ABR algorithms using reinforcement learning within a simulated network environment, and it outperforms existing approaches in terms of achieved performance. In this paper, we demonstrate that the performance of the trained ABR algorithms depends on the implementation of the simulated environment used to train the neural network. We also show that the congestion control algorithm in use impacts the ABR algorithm's performance due to cross-layer effects.
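For contrast with learned policies such as Pensieve, a conventional ABR rule of the kind described above can be stated in a few lines: pick the highest bitrate the recent throughput can sustain, but fall back to the lowest level when the playout buffer runs low. The bitrate ladder, safety factor, and buffer threshold below are illustrative assumptions, not taken from Pensieve.

```python
# Illustrative bitrate ladder in kbps.
BITRATES_KBPS = [300, 750, 1200, 2850, 4300]

def select_bitrate(throughput_kbps, buffer_s, safety=0.9, low_buffer_s=4.0):
    """Throughput- and buffer-based ABR rule for the next chunk."""
    if buffer_s < low_buffer_s:          # imminent rebuffering: play it safe
        return BITRATES_KBPS[0]
    sustainable = throughput_kbps * safety
    feasible = [r for r in BITRATES_KBPS if r <= sustainable]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

# Example: 2.5 Mbps measured throughput, 12 s of buffer -> 1200 kbps.
```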
Due to the high bandwidth requirements and stringent delay constraints of multi-user wireless video transmission applications, ensuring that all video senders have sufficient transmission opportunities to use before their delay deadlines expire is a longstanding research problem. We propose a novel solution that addresses this problem without assuming detailed packet-level knowledge, which is unavailable at resource allocation time. Instead, we translate the transmission delay deadlines of each sender's video packets into a monotonically decreasing weight distribution within the considered time horizon. Higher weights are assigned to the slots that have higher probability for deadline-abiding delivery. Given the sets of weights of the senders' video streams, we propose the low-complexity Delay-Aware Resource Allocation (DARA) approach to compute the optimal slot allocation policy that maximizes the deadline-abiding delivery of all senders. A unique characteristic of the DARA approach is that it yields a non-stationary slot allocation policy that depends on the allocation of previous slots. We prove that the DARA approach is optimal for weight distributions that are exponentially decreasing in time. We further implement our framework for real-time video streaming in wireless personal area networks that are gaining significant traction within the new Internet-of-Things (IoT) paradigm. For multiple surveillance videos encoded with H.264/AVC and streamed via the 6tisch framework that simulates the IoT-oriented IEEE 802.15.4e TSCH medium access control, our solution is shown to be the only one that ensures all video bitstreams are delivered with acceptable quality in a deadline-abiding manner.
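A hedged sketch of the weighting idea: each sender's deadline becomes an exponentially decreasing weight over the slots of the horizon (zero after the deadline), and slots are assigned greedily to the sender that still needs them and values them most, breaking ties toward the earlier deadline. This greedy pass is a simplified stand-in for the DARA policy, with made-up parameters and demands.

```python
import math

def slot_weights(deadline, horizon, lam=0.5):
    """Weights for one sender: exp(-lam * t) up to its deadline, 0 after."""
    return [math.exp(-lam * t) if t < deadline else 0.0 for t in range(horizon)]

def allocate(senders, horizon, lam=0.5):
    """senders: {name: (deadline_slot, slots_needed)} -> per-slot schedule."""
    need = {s: n for s, (_, n) in senders.items()}
    w = {s: slot_weights(d, horizon, lam) for s, (d, _) in senders.items()}
    schedule = []
    for t in range(horizon):
        live = [s for s in senders if need[s] > 0 and w[s][t] > 0]
        if not live:
            schedule.append(None)
            continue
        # Highest weight wins; earlier deadline breaks ties.
        pick = max(live, key=lambda s: (w[s][t], -senders[s][0]))
        need[pick] -= 1
        schedule.append(pick)
    return schedule

# Two senders: A needs 2 slots before slot 3, B needs 3 before slot 6.
# allocate({"A": (3, 2), "B": (6, 3)}, horizon=6)
# -> ["A", "A", "B", "B", "B", None]: both meet their deadlines.
```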