
Non-stationary Resource Allocation Policies for Delay-constrained Video Streaming: Application to Video over Internet-of-Things-enabled Networks

Added by Jie Xu
Publication date: 2014
Language: English





Due to the high bandwidth requirements and stringent delay constraints of multi-user wireless video transmission applications, ensuring that all video senders have sufficient transmission opportunities before their delay deadlines expire is a longstanding research problem. We propose a novel solution that addresses this problem without assuming detailed packet-level knowledge, which is unavailable at resource allocation time. Instead, we translate the transmission delay deadlines of each sender's video packets into a monotonically decreasing weight distribution within the considered time horizon: higher weights are assigned to the slots that have a higher probability of deadline-abiding delivery. Given the weight distributions of the senders' video streams, we propose the low-complexity Delay-Aware Resource Allocation (DARA) approach to compute the optimal slot allocation policy that maximizes the deadline-abiding delivery of all senders. A unique characteristic of the DARA approach is that it yields a non-stationary slot allocation policy that depends on the allocation of previous slots. We prove that the DARA approach is optimal for weight distributions that decrease exponentially in time. We further implement our framework for real-time video streaming in wireless personal area networks, which are gaining significant traction within the new Internet-of-Things (IoT) paradigm. For multiple surveillance videos encoded with H.264/AVC and streamed via the 6TiSCH framework, which simulates the IoT-oriented IEEE 802.15.4e TSCH medium access control, our solution is shown to be the only one that ensures all video bitstreams are delivered with acceptable quality in a deadline-abiding manner.
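
To make the weight-based formulation concrete, the sketch below (in Python, using SciPy's generic assignment solver) allocates slots under stated assumptions: each sender i owes a fixed number of packets and values slot t at w0_i * exp(-lam_i * t), the exponentially decreasing case for which DARA is proven optimal. All names and parameters are illustrative; the paper's DARA policy reaches the optimum with a dedicated low-complexity rule rather than a general-purpose solver.

import math
import numpy as np
from scipy.optimize import linear_sum_assignment

def allocate_slots(weights0, decay, demands, horizon):
    """Sketch of weight-maximizing slot allocation: expand each sender
    into one column per owed packet, score every (slot, packet) pair with
    the exponentially decaying weight w0 * exp(-lam * t), and solve the
    resulting assignment problem exactly."""
    owners = [i for i, d in enumerate(demands) for _ in range(d)]
    score = np.array([[weights0[i] * math.exp(-decay[i] * t) for i in owners]
                      for t in range(horizon)])
    slots, packets = linear_sum_assignment(score, maximize=True)
    return {int(t): owners[p] for t, p in zip(slots, packets)}  # slot -> sender

# Two senders owing 3 packets each; sender 0 decays faster (tighter deadline)
print(allocate_slots(weights0=[1.0, 1.0], decay=[0.5, 0.1],
                     demands=[3, 3], horizon=10))

Running the example grants the earliest slots to the faster-decaying sender, and each sender's later grants depend on what was already allocated, mirroring the non-stationary character of the policy described above.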




Read More

Many of the video streaming applications in today's Internet involve the distribution of content from a CDN source to a large population of interested clients. However, widespread support of IP multicast is unavailable due to technical and economic reasons, leaving the floor to application-layer multicast, which introduces excessive delays for the clients and increased traffic load for the network. This paper is concerned with the introduction of an SDN-based framework that allows the network controller not only to deploy IP multicast between a source and subscribers, but also to control, via a simple northbound interface, the distributed set of sources where multiple-description coded (MDC) video content is available. We observe that for medium to heavy network loads, relative to the state of the art, the SDN-based streaming multicast video framework increases the PSNR of the received video significantly, from a level that is practically unwatchable to one of good quality.
Nowadays, Dynamic Adaptive Streaming over HTTP (DASH) is the most prevalent solution for multimedia streaming on the Internet and is responsible for the majority of global traffic. DASH uses adaptive bit rate (ABR) algorithms, which select the video quality based on performance metrics such as throughput and playout buffer level. Pensieve is a system that trains ABR algorithms using reinforcement learning within a simulated network environment and outperforms existing approaches in terms of achieved performance. In this paper, we demonstrate that the performance of the trained ABR algorithms depends on the implementation of the simulated environment used to train the neural network. We also show that the congestion control algorithm in use impacts the ABR algorithm's performance due to cross-layer effects.
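
As a reference point for the inputs an ABR algorithm consumes, the toy rule-based selector below (plain Python; not Pensieve's learned policy) picks the highest bitrate rung sustainable at a discounted throughput estimate and falls back to the lowest rung when the playout buffer runs low. The safety factor, buffer threshold, and bitrate ladder are invented for illustration.

def select_bitrate(throughput_kbps, buffer_s, ladder_kbps,
                   safety=0.8, low_buffer_s=5.0):
    """Toy ABR rule: choose the highest rung that fits within a
    discounted throughput estimate; play it safe when the buffer is low."""
    if buffer_s < low_buffer_s:
        return ladder_kbps[0]           # protect against a rebuffering pause
    budget = safety * throughput_kbps   # leave headroom for estimation error
    feasible = [r for r in ladder_kbps if r <= budget]
    return max(feasible) if feasible else ladder_kbps[0]

# 3 Mbit/s estimate, 12 s of buffer -> picks the 1850 kbps rung
print(select_bitrate(3000, 12.0, [300, 750, 1200, 1850, 2850, 4300]))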
User dissatisfaction due to buffering pauses during streaming is a significant cost to the system, which we model as a non-decreasing function of the frequency of buffering pauses. Minimizing total user dissatisfaction in a multi-channel cellular network leads to a non-convex problem. Utilizing a combinatorial structure in this problem, we first propose a polynomial-time joint admission control and channel allocation algorithm which is provably (almost) optimal. This scheme assumes that the base station (BS) knows the frame statistics of the streams. In a more practical setting, where these statistics are not available a priori at the BS, a learning-based scheme with provable guarantees is developed. This learning-based scheme is related to regret minimization in multi-armed bandits with non-i.i.d. and delayed rewards (costs). All these algorithms require no or only minimal feedback from the user equipment to the base station regarding the state of the media player buffer at the application layer, and hence are of practical interest.
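
To give the regret-minimization connection some shape, here is a plain UCB1 sketch in Python. The paper's setting involves non-i.i.d. and delayed costs, which standard UCB1 does not handle, so this only illustrates the explore/exploit bookkeeping; the reward function and arm count are invented.

import math
import random

def ucb1(pull, n_arms, rounds):
    """Standard UCB1: play each arm once, then repeatedly pick the arm
    with the highest mean-plus-confidence-bonus and update its estimate."""
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, rounds + 1):
        if t <= n_arms:
            arm = t - 1  # initialization: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))
        reward = pull(arm)
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
    return means, counts

# Arm 1 has a higher expected reward, so it should accumulate most pulls
means, counts = ucb1(lambda a: random.random() + (0.2 if a == 1 else 0.0), 3, 500)
print(counts)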
Immersive media streaming, especially virtual reality (VR)/360-degree video streaming, which is very bandwidth demanding, has become more and more popular due to the rapid growth of multimedia and networking deployments. To better exploit the available resources and achieve a better quality of experience (QoE) for users, this paper develops an application-layer scheme that jointly exploits the available bandwidth of the LTE and Wi-Fi networks in 360-degree video streaming. The proposed scheme and the corresponding solution algorithms utilize the saliency of the video, predictions of the users' views, and the users' status information to obtain an optimal association of users with Wi-Fi access points (APs) that maximizes the system's utility. In addition, a novel buffer strategy is proposed to mitigate the influence of the short-term prediction problem when transmitting 360-degree videos over time-varying networks. The promising performance and low complexity of the proposed scheme and algorithms are validated in simulations with various 360-degree videos.
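
A minimal sketch of the association step, assuming a precomputed utility for each (user, AP) pair and a per-AP capacity that couples the users' choices; the exhaustive search is only viable for tiny instances, whereas the paper derives dedicated algorithms with saliency- and viewport-aware utilities.

from itertools import product

def best_association(utils, capacity):
    """Exhaustive user-to-AP association: utils[u][a] is the utility of
    serving user u from AP a; capacity[a] caps how many users AP a can
    serve, so per-user greedy choices are not sufficient."""
    n_users, n_aps = len(utils), len(utils[0])
    best_assign, best_val = None, float("-inf")
    for assign in product(range(n_aps), repeat=n_users):
        if any(assign.count(a) > capacity[a] for a in range(n_aps)):
            continue  # violates an AP's capacity
        val = sum(utils[u][a] for u, a in enumerate(assign))
        if val > best_val:
            best_assign, best_val = assign, val
    return best_assign, best_val

# Three users, two APs, AP 0 can serve one user -> (0, 1, 1) with utility 2.5
print(best_association([[1.0, 0.4], [0.3, 0.9], [0.7, 0.6]], capacity=[1, 2]))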
With the rapid growth of Internet of Things (IoT) devices, next-generation mobile networks demand more operating frequency bands. By leveraging underutilized radio spectrum, cognitive radio (CR) technology is considered a promising solution to the spectrum scarcity problem of IoT applications. In parallel with the development of CR techniques, Wireless Energy Harvesting (WEH) is considered one of the emerging technologies for eliminating the need to recharge or replace the batteries of IoT and CR devices. To this end, we propose to utilize WEH for CR networks in which the CR devices are not only capable of sensing the available radio frequencies in a collaborative manner but also of harvesting the wireless energy transferred by an Access Point (AP). More importantly, we design an optimization framework that captures a fundamental tradeoff between the energy efficiency (EE) and spectral efficiency (SE) of the network. In particular, we formulate a Mixed Integer Nonlinear Programming (MINLP) problem that maximizes EE while taking into consideration users' buffer occupancy, data rate fairness, energy causality constraints, and interference constraints. We further prove that the proposed optimization problem is NP-hard. Thus, we propose a low-complexity heuristic algorithm, called INSTANT, to solve the resource allocation and energy harvesting optimization problem. The proposed algorithm is shown to achieve a near-optimal solution with high accuracy while having polynomial complexity. The efficiency of our proposal is validated through well-designed simulations.
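
To make the EE/SE tension concrete, the toy Python computation below treats energy efficiency as Shannon rate divided by total consumed power; sweeping the transmit power shows the rate (SE) growing monotonically while EE peaks at an interior power level. The constants are invented, and the paper's MINLP adds buffer occupancy, fairness, energy-causality, and interference constraints on top of this quantity.

import math

def energy_efficiency(bandwidth_hz, channel_gain, noise_w, tx_power_w, circuit_power_w):
    """Toy EE in bits per joule: Shannon rate over transmit-plus-circuit power."""
    rate = bandwidth_hz * math.log2(1 + channel_gain * tx_power_w / noise_w)
    return rate / (tx_power_w + circuit_power_w)

# EE rises, peaks, then falls as transmit power grows (rate only grows log-fast)
for p in (0.001, 0.01, 0.1, 1.0):
    print(p, round(energy_efficiency(1e6, 1e-3, 1e-9, p, 0.1)))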
