
Statistical Learning Based Congestion Control for Real-time Video Communication

Published by: Tongyu Dai
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





With the increasing demand for interactive video applications, adapting the video bit rate to avoid network congestion has become critical, since congestion causes self-inflicted delay and packet loss that deteriorate the quality of real-time video services. Existing congestion control schemes struggle to simultaneously achieve low latency, high throughput, good adaptability and fair bandwidth allocation, mainly because of their hardwired control strategies and egocentric convergence objectives. To address these issues, we propose an end-to-end statistical-learning-based congestion control, named Iris. By exploring the underlying principles of self-inflicted delay, we show that congestion delay is determined by the sending rate, the receiving rate and the network status, which inspires us to control the video bit rate with a statistical-learning congestion control model. The key idea of Iris is to force all flows to converge to the same queue load and to adjust the bit rate through the model. Since every flow keeps a small, fixed number of packets queuing in the network, fair bandwidth allocation and low latency are achieved together. In addition, the adjustment step size of the sending rate is updated by online learning to better adapt to dynamically changing networks. We carried out extensive experiments to evaluate the performance of Iris, with implementations at the transport layer (UDP) and the application layer (QUIC), respectively. The testing environments include emulated networks, the real-world Internet and commercial LTE networks. Compared against TCP flavors and state-of-the-art protocols, Iris achieves high bandwidth utilization, low latency and good fairness concurrently. In particular, over QUIC, Iris increases the video bitrate by up to 25% and PSNR by up to 1 dB.
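To make the mechanism described above concrete, the sketch below shows one possible reading of it in code: a controller that estimates its own queue load from the receiving rate and the queueing delay (RTT minus minimum RTT), steers that load toward a small fixed target, and tunes its rate-adjustment step size online. This is a minimal sketch under those assumptions, not the authors' Iris implementation; the names and constants (IrisLikeController, target_queue_pkts, the 1.2/0.5 step factors) are illustrative.

```python
class IrisLikeController:
    """Toy rate controller in the spirit of the description above:
    hold a small, fixed queue load and tune the adjustment step online."""

    def __init__(self, init_rate_bps=1_000_000, target_queue_pkts=10,
                 pkt_size_bits=12_000, init_step_bps=50_000):
        self.rate = init_rate_bps          # current video sending rate
        self.target = target_queue_pkts    # queue load every flow converges to
        self.pkt_bits = pkt_size_bits
        self.step = init_step_bps          # step size, adapted online
        self.min_rtt = None                # propagation-delay estimate
        self.last_sign = 0

    def update(self, receiving_rate_bps, rtt_s):
        # The minimum observed RTT approximates the propagation delay;
        # anything above it is treated as self-inflicted queueing delay.
        self.min_rtt = rtt_s if self.min_rtt is None else min(self.min_rtt, rtt_s)
        queueing_delay = max(rtt_s - self.min_rtt, 0.0)
        # Packets this flow keeps queued ~ receiving rate * queueing delay.
        queued_pkts = receiving_rate_bps * queueing_delay / self.pkt_bits

        sign = 1 if queued_pkts < self.target else -1
        # Online step-size tuning: grow while updates agree, shrink on reversal.
        self.step *= 1.2 if sign == self.last_sign else 0.5
        self.last_sign = sign

        self.rate = max(100_000, self.rate + sign * self.step)
        return self.rate
```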




Read also

Joyce Fang, Martin Ellis, Bin Li (2019)
Bandwidth estimation and congestion control for real-time communications (i.e., audio and video conferencing) remains a difficult problem, despite many years of research. Achieving high quality of experience (QoE) for end users requires continual updates due to changing network architectures and technologies. In this paper, we apply reinforcement learning for the first time to the problem of real-time communications (RTC), where we seek to optimize user-perceived quality. We present initial proof-of-concept results, where we train an agent to control the sending rate in an RTC system, evaluating it with both network simulation and real Internet video calls. We discuss the challenges we observed, particularly in designing realistic reward functions that reflect QoE and in bridging the gap between the training environment and real-world networks.
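The reward-design difficulty mentioned above can be pictured with a toy QoE-style reward that trades throughput against delay and loss. The signal set and weights below are assumptions for illustration, not the reward function used in the paper.

```python
def qoe_reward(throughput_mbps, delay_ms, loss_rate,
               w_rate=1.0, w_delay=0.01, w_loss=10.0):
    """Toy QoE proxy: reward grows with throughput, shrinks with delay and loss."""
    return w_rate * throughput_mbps - w_delay * delay_ms - w_loss * loss_rate

# Example: a call receiving 2.5 Mbps at 80 ms delay with 1% loss.
print(qoe_reward(2.5, 80.0, 0.01))   # 2.5 - 0.8 - 0.1 = 1.6
```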
Due to the presence of buffers in the inner network nodes, each congestion event leads to buffer queueing and thus to an increasing end-to-end delay. For delay-sensitive applications, a large delay may not be acceptable, and a solution that properly manages congestion events while maintaining a low end-to-end delay is required. Delay-based congestion control algorithms are a viable solution, as they aim to limit the experienced end-to-end delay. Unfortunately, they do not perform well when sharing the bandwidth with congestion control algorithms that are not regulated by delay constraints (e.g., loss-based algorithms). Our goal is to fill this gap by proposing a novel congestion control algorithm for delay-constrained communication over best-effort packet-switched networks. The proposed algorithm is able to maintain a bounded queueing delay when competing with other delay-based flows, and to avoid starvation when competing with loss-based flows. We adopt the well-known price-based distributed mechanism as congestion control, but: 1) we introduce a novel non-linear mapping between the experienced delay and the price function, and 2) we combine both delay and loss information into a single price term based on packet inter-arrival measurements. We then provide a stability analysis of the novel algorithm and show its performance through simulations carried out in the NS3 framework. Simulation results demonstrate that the proposed algorithm is able to achieve good intra-protocol fairness, efficiently control the end-to-end delay, and protect the flow from starvation when other flows cause the queuing delay to grow excessively.
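As a rough illustration of the price-based mechanism described above, the sketch below maps queueing delay to a price through a non-linear function that grows sharply near a delay budget, folds the loss rate into the same price term, and applies a gradient-style rate update with a logarithmic utility. The functional forms and constants are assumptions for illustration, not the paper's design.

```python
import math

def price(queueing_delay_s, loss_rate, delay_budget_s=0.1, k_loss=5.0):
    # Non-linear delay-to-price mapping: the price blows up as the
    # queueing delay approaches the delay budget.
    d = min(queueing_delay_s, 0.999 * delay_budget_s)
    delay_price = math.tan(0.5 * math.pi * d / delay_budget_s)
    # Fold loss information into the same price term.
    return delay_price + k_loss * loss_rate

def update_rate(rate_bps, queueing_delay_s, loss_rate,
                utility_weight=1e6, gamma=0.1):
    # Gradient-style update with log utility: U'(rate) = utility_weight / rate.
    grad = utility_weight / rate_bps - price(queueing_delay_s, loss_rate)
    return max(100_000, rate_bps * (1 + gamma * grad))

# Example: a 2 Mbps flow seeing 30 ms of queueing delay and no loss.
print(update_rate(2_000_000, 0.03, 0.0))
```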
Omnidirectional applications are immersive and highly interactive, which can improve the efficiency of remote collaborative work among factory workers. The transmission of omnidirectional video (OV) is the most important step in implementing virtual remote collaboration. Compared with ordinary video transmission, OV transmission requires more bandwidth, which remains a heavy burden even under 5G networks. Tile-based schemes can reduce bandwidth consumption; however, they neither accurately obtain the field-of-view (FOV) area nor easily support real-time OV streaming. In this paper, we propose an edge-assisted viewport adaptive scheme (EVAS-OV) to reduce bandwidth consumption during real-time OV transmission. First, EVAS-OV uses a Gated Recurrent Unit (GRU) model to predict users' viewports. Then, users are divided into multicast clusters, further reducing the consumption of computing resources. EVAS-OV reprojects OV frames to accurately obtain users' FOV areas at the pixel level and adopts a redundancy strategy to reduce the impact of viewport prediction errors. All computing tasks are offloaded to edge servers to reduce transmission delay and improve bandwidth utilization. Experimental results show that EVAS-OV saves more than 60% of bandwidth compared with a non-viewport-adaptive scheme, and still saves 30% of bandwidth compared to a two-layer viewport-adaptive scheme.
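The viewport-prediction step can be pictured with a small GRU written in PyTorch, as sketched below; the layer sizes and the yaw/pitch input parameterisation are assumptions for illustration, not the EVAS-OV model.

```python
import torch
import torch.nn as nn

class ViewportGRU(nn.Module):
    """Toy GRU predictor: map a history of head orientations to the next one."""

    def __init__(self, in_dim=2, hidden=64):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, in_dim)    # next (yaw, pitch)

    def forward(self, history):                  # history: (batch, seq, 2)
        out, _ = self.gru(history)
        return self.head(out[:, -1])             # prediction from last state

# Example: predict the next viewport centre from 30 past orientation samples.
model = ViewportGRU()
past = torch.randn(1, 30, 2)                     # normalised yaw/pitch history
next_viewport = model(past)
```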
In this article, we study the problem of air-to-ground ultra-reliable and low-latency communication (URLLC) for a moving ground user. This is done by controlling multiple unmanned aerial vehicles (UAVs) in real time while avoiding inter-UAV collisions. To this end, we propose a novel multi-agent deep reinforcement learning (MADRL) framework, coined graph attention exchange network (GAXNet). In GAXNet, each UAV locally constructs an attention graph measuring the level of attention it pays to its neighboring UAVs, while exchanging the attention weights with other UAVs so as to reduce the attention mismatch between them. Simulation results corroborate that GAXNet achieves up to 4.5x higher rewards during training. At execution, without incurring inter-UAV collisions, GAXNet achieves 6.5x lower latency at the target error rate of 0.0000001, compared to a state-of-the-art baseline framework.
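One way to picture the attention-exchange idea is the toy below: each UAV computes softmax attention weights over the agents it observes, and mismatch is reduced by averaging each pair of reciprocal weights. This is a heavily simplified numpy sketch under those assumptions, not the GAXNet architecture.

```python
import numpy as np

def local_attention(own_feat, all_feats):
    # Dot-product attention scores of one UAV over all agents, softmax-normalised.
    scores = all_feats @ own_feat
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def exchange(att):
    # att[i, j] = attention UAV i pays to UAV j; averaging reciprocal
    # entries reduces the attention mismatch between pairs of UAVs.
    return 0.5 * (att + att.T)

feats = np.random.rand(3, 4)            # 3 UAVs with 4-dimensional features
att = np.stack([local_attention(feats[i], feats) for i in range(3)])
print(exchange(att))
```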
Laura Toni, Thomas Maugey (2012)
In multiview applications, multiple cameras acquire the same scene from different viewpoints and generally produce correlated video streams, which results in large amounts of highly redundant data. In order to save resources, it is critical to handle this correlation properly during encoding and transmission of the multiview data. In this work, we propose a correlation-aware packet scheduling algorithm for multi-camera networks, where information from all cameras is transmitted over a bottleneck channel to clients that reconstruct the multiview images. The scheduling algorithm relies on a new rate-distortion model that captures the importance of each view in the scene reconstruction. We formulate an optimization problem for the packet scheduling policies, which adapt to variations in the scene content. We then design a low-complexity scheduling algorithm based on a trellis search that selects the subset of candidate packets to be transmitted for effective multiview reconstruction at the clients. Extensive simulation results confirm the gain of our scheduling algorithm when inter-source correlation information is used in the scheduler, compared to scheduling policies with no information about the correlation or non-adaptive scheduling policies. We finally show that increasing the optimization horizon of the packet scheduling algorithm improves the transmission performance, especially in scenarios where the level of correlation varies rapidly with time.
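A much-simplified picture of correlation-aware scheduling is given below: under a rate budget, packets are picked greedily by the distortion reduction they still offer once correlation with already-scheduled views is discounted. The distortion model and the greedy selection are assumptions for illustration; the paper itself uses a rate-distortion model with a trellis search.

```python
def schedule(packets, budget_bits):
    """packets: list of dicts {'view', 'bits', 'gain', 'corr': {view: factor}}."""
    chosen, used = [], 0
    remaining = list(packets)
    while remaining:
        def effective_gain(p):
            # Discount a packet's gain by its correlation with scheduled views.
            discount = 1.0
            for q in chosen:
                discount *= 1.0 - p["corr"].get(q["view"], 0.0)
            return p["gain"] * discount

        best = max(remaining, key=effective_gain)
        if used + best["bits"] > budget_bits or effective_gain(best) <= 0:
            break
        chosen.append(best)
        used += best["bits"]
        remaining.remove(best)
    return chosen

# Example: two highly correlated views and one independent view, 20 kb budget.
pkts = [
    {"view": "cam1", "bits": 8_000, "gain": 5.0, "corr": {"cam2": 0.8}},
    {"view": "cam2", "bits": 8_000, "gain": 5.0, "corr": {"cam1": 0.8}},
    {"view": "cam3", "bits": 8_000, "gain": 3.0, "corr": {}},
]
print([p["view"] for p in schedule(pkts, 20_000)])   # ['cam1', 'cam3']
```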
