
A Collaborative Framework for In-network Video Caching in Mobile Networks

Added by Jun He
Publication date: 2014
Language: English





Due to the explosive growth of online video content in mobile wireless networks, in-network caching is becoming increasingly important to improve the end-user experience and reduce the Internet access cost for mobile network operators. However, caching is a difficult problem due to the very large number of online videos and video requests, the limited capacity of caching nodes, and the limited bandwidth of in-network links. Existing solutions that rely on static configurations and average request arrival rates are insufficient to handle dynamic request patterns effectively. In this paper, we propose a dynamic collaborative video caching framework to be deployed in mobile networks. We decompose the caching problem into a content placement subproblem and a source-selection subproblem. We then develop SRS (System capacity Reservation Strategy) to solve the content placement subproblem, and LinkShare, an adaptive traffic-aware algorithm, to solve the source-selection subproblem. Our framework supports congestion avoidance and allows merging multiple requests for the same video into one request. We carry out extensive simulations to validate the proposed schemes. Simulation results show that our SRS algorithm achieves performance within 1-3% of the optimal values and that LinkShare significantly outperforms existing solutions.
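The abstract does not spell out how SRS or LinkShare work internally, so the following is only a minimal sketch of the general idea of traffic-aware source selection with request merging, under assumed names (Cache, residual_bw, select_source, request): each request for a cached video is routed to the holder whose path to the requester has the most residual bandwidth, and concurrent requests for the same video piggyback on the in-flight fetch.

```python
# A minimal, illustrative sketch of traffic-aware source selection with
# request merging. All names (Cache, residual_bw, select_source, request)
# are assumptions for illustration; the paper's SRS/LinkShare algorithms
# are not specified in the abstract above.

from collections import defaultdict

class Cache:
    def __init__(self, node_id, videos):
        self.node_id = node_id
        self.videos = set(videos)              # video IDs currently stored

def residual_bw(links, path):
    """Bottleneck residual bandwidth along a path given as a list of links."""
    return min(links[edge] for edge in path)

def select_source(video, caches, paths, links):
    """Pick the caching node holding `video` whose path has most spare capacity."""
    candidates = [c for c in caches if video in c.videos]
    if not candidates:
        return None                            # fall back to the origin server
    return max(candidates, key=lambda c: residual_bw(links, paths[c.node_id]))

pending = defaultdict(list)                    # video -> users waiting on one fetch

def request(video, user, caches, paths, links):
    """Merge concurrent requests for the same video into a single fetch."""
    pending[video].append(user)
    if len(pending[video]) > 1:
        return "merged"                        # piggyback on the in-flight fetch
    src = select_source(video, caches, paths, links)
    return src.node_id if src else "origin"

# Example: two caches hold v1, but the path from A is nearly saturated.
caches = [Cache("A", {"v1"}), Cache("B", {"v1", "v2"})]
links = {("A", "u"): 2.0, ("B", "u"): 10.0}    # residual Mbps per link
paths = {"A": [("A", "u")], "B": [("B", "u")]}
print(request("v1", "alice", caches, paths, links))   # -> B (more spare bandwidth)
print(request("v1", "bob", caches, paths, links))     # -> merged
```

The congestion-avoidance aspect is captured only coarsely here, by preferring the least-loaded path; the paper's actual algorithms presumably track link loads and cache reservations more carefully.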

Related research

Recently, Mobile-Edge Computing (MEC) has arisen as an emerging paradigm that extends cloud-computing capabilities to the edge of the Radio Access Network (RAN) by deploying MEC servers right at the Base Stations (BSs). In this paper, we envision a collaborative joint caching and processing strategy for on-demand video streaming in MEC networks. Our design aims at enhancing the widely used Adaptive BitRate (ABR) streaming technology, where multiple bitrate…
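Since this abstract is cut off, the sketch below simply assumes that "joint caching and processing" means an MEC server can transcode a cached higher-bitrate version of a video down to the requested ABR bitrate, either locally or at a neighbouring server; the function and variable names are illustrative, not taken from the paper.

```python
# A minimal sketch of joint caching and processing for ABR video, assuming
# "processing" means transcoding a cached higher-bitrate version down to the
# requested bitrate at the MEC server (the truncated abstract above does not
# spell this out). Function and variable names are illustrative.

def serve(video, requested, local_cache, neighbour_caches):
    """Return how a (video, bitrate-in-Mbps) request would be satisfied."""
    def best_version(cache):
        # Smallest cached bitrate that is >= the requested one (transcodable).
        versions = sorted(b for v, b in cache if v == video and b >= requested)
        return versions[0] if versions else None

    local = best_version(local_cache)
    if local == requested:
        return "hit: serve from local cache"
    if local is not None:
        return f"transcode locally from {local} Mbps to {requested} Mbps"
    for name, cache in neighbour_caches.items():
        remote = best_version(cache)
        if remote is not None:
            return f"fetch {remote} Mbps from MEC server {name}, transcode if needed"
    return "miss: fetch from the origin server"

local = {("v1", 2.0)}
neighbours = {"BS2": {("v2", 4.0)}}
print(serve("v1", 1.0, local, neighbours))   # transcode locally from 2.0 Mbps
print(serve("v2", 1.0, local, neighbours))   # fetch from MEC server BS2
print(serve("v3", 1.0, local, neighbours))   # miss
```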
This paper comprehensively studies a content-centric mobile network based on a preference learning framework, where each mobile user is equipped with a finite-size cache. We consider a practical scenario where each user requests a content file according to its own preferences, motivated by the heterogeneity of file preferences among different users. Under our model, we consider a single-hop device-to-device (D2D) content delivery protocol and characterize the average hit ratio for two file preference cases: personalized file preferences and common file preferences. Assuming that model parameters such as user activity levels, user file preferences, and file popularity are unknown and thus need to be inferred, we present a collaborative filtering (CF)-based approach to learn these parameters. Then, we reformulate the hit ratio maximization problems as submodular function maximization and propose two computationally efficient algorithms, including a greedy approach, to solve the cache allocation problems. We analyze the computational complexity of each algorithm, as well as the approximation guarantee that our greedy algorithm achieves relative to the optimal solution. Using a real-world dataset, we demonstrate that the proposed framework employing personalized file preferences brings substantial gains over its counterpart for various system parameters.
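As a concrete illustration of the greedy cache-allocation idea, the sketch below repeatedly caches the (user, file) pair with the largest marginal gain in a toy hit-ratio objective until every cache is full; the objective, the names (hit_ratio, greedy_allocate), and the assumption that all users are within single-hop D2D range of one another are simplifications, not the paper's exact model.

```python
# A minimal sketch of greedy cache allocation for a hit-ratio-style objective.
# The objective below is a simplified stand-in, not the paper's exact model.

import itertools

def hit_ratio(allocation, preferences):
    """Toy surrogate: average preference mass a user can fetch via D2D,
    assuming every user is within single-hop range of every other user."""
    total = 0.0
    cached = set().union(*allocation.values())       # files reachable via D2D
    for user, prefs in preferences.items():
        total += sum(p for f, p in prefs.items() if f in cached)
    return total / len(preferences)

def greedy_allocate(users, files, cache_size, preferences):
    """Repeatedly add the (user, file) placement with the largest marginal gain."""
    allocation = {u: set() for u in users}
    while True:
        best = None
        base = hit_ratio(allocation, preferences)
        for u, f in itertools.product(users, files):
            if f in allocation[u] or len(allocation[u]) >= cache_size:
                continue
            allocation[u].add(f)
            gain = hit_ratio(allocation, preferences) - base
            allocation[u].remove(f)
            if best is None or gain > best[2]:
                best = (u, f, gain)
        if best is None or best[2] <= 0:
            break
        allocation[best[0]].add(best[1])
    return allocation

prefs = {"u1": {"f1": 0.7, "f2": 0.3}, "u2": {"f1": 0.2, "f2": 0.8}}
print(greedy_allocate(["u1", "u2"], ["f1", "f2"], 1, prefs))
# -> u1 and u2 cache different files, maximizing the files reachable via D2D
```

For a monotone submodular objective, this kind of greedy selection is the standard route to a constant-factor approximation guarantee of the sort the authors analyze.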
We consider the problem of video caching across a set of 5G small-cell base stations (SBSs) connected to each other over a high-capacity, short-delay back-haul link, and linked to a remote server over a long-delay connection. Even though the problem of minimizing the overall video delivery delay is NP-hard, the Collaborative Caching Algorithm (CCA) that we present can efficiently compute a solution close to the optimal, where the degree of sub-optimality depends on the worst-case video-to-cache size ratio. The algorithm is naturally amenable to a distributed implementation that requires zero explicit coordination between the SBSs, and runs in $O(N + K \log K)$ time, where $N$ is the number of SBSs (caches) and $K$ the maximum number of videos. We extend CCA to an online setting where the video popularities are not known a priori but are estimated over time through a limited amount of periodic information sharing between SBSs. We demonstrate that our algorithm closely approaches the optimal integral caching solution as the cache size increases. Moreover, via simulations carried out on real video access traces, we show that our algorithm effectively uses the SBS caches to reduce the video delivery delay and conserve the remote server's bandwidth, and that it outperforms two other reference caching methods adapted to our system setting.
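The abstract does not give CCA's internals, so the following is only a sketch consistent with the stated $O(N + K \log K)$ complexity: each SBS sorts videos by its local popularity estimate and fills its cache greedily, skipping videos already held by a peer so the cluster avoids duplicates over the shared back-haul. The names (sbs_fill_cache, peer_cached) are assumptions, not the paper's.

```python
# A minimal sketch in the spirit of the distributed caching described above:
# each SBS sorts videos by its popularity estimate (O(K log K)) and fills its
# cache greedily, skipping videos already cached by a peer SBS to avoid
# duplication over the shared back-haul. Names are illustrative; the paper's
# CCA may differ in detail.

def sbs_fill_cache(popularity, sizes, cache_capacity, peer_cached):
    """popularity: video -> request-rate estimate; sizes: video -> size."""
    cache, used = [], 0
    for video in sorted(popularity, key=popularity.get, reverse=True):
        if video in peer_cached:
            continue                    # a neighbour can serve it over the back-haul
        if used + sizes[video] <= cache_capacity:
            cache.append(video)
            used += sizes[video]
    return cache

popularity = {"v1": 0.5, "v2": 0.3, "v3": 0.2}
sizes = {"v1": 4, "v2": 3, "v3": 2}
print(sbs_fill_cache(popularity, sizes, cache_capacity=5, peer_cached={"v1"}))
# -> ['v2', 'v3']: v1 is already held by a peer, so the local cache stores the rest
```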
Yana Qin, Danye Wu, Zhiwei Xu (2020)
To enhance the quality and speed of data processing and protect the privacy and security of the data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble learning-based services can naturally leverage the distributed computation and storage resources at edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, using the limited resources at edge devices to support high-performance ensemble learning solutions. To achieve this goal, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record cached data among different nodes. In addition, we design a collaboration scheme that facilitates edge nodes in caching valuable data for local ensemble learning, by scheduling local caching according to a summarization of data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces the learning latency and the transmission overhead.
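The paper's data representation structure and scheduling rule are not detailed in this abstract; the sketch below illustrates one plausible reading under assumed names (summarize, schedule_caching): each edge node shares a compact label-count summary of its cache, and then prefers to keep incoming items whose labels are rare across the cluster, so local ensemble members see more diverse data.

```python
# A minimal sketch of summary-driven collaborative caching for edge ensemble
# learning. The summary format, scoring rule, and all names are illustrative
# assumptions; the paper's actual scheme is not specified in the abstract.

from collections import Counter

def summarize(cache):
    """Compact per-node summary: how many cached items carry each label."""
    return Counter(label for _, label in cache)

def schedule_caching(candidates, local_cache, peer_summaries, capacity):
    """Greedily keep candidate items whose labels are rarest across all nodes."""
    global_counts = sum(peer_summaries, summarize(local_cache))
    ranked = sorted(candidates, key=lambda item: global_counts[item[1]])
    return ranked[:capacity]

local = [("x1", "cat"), ("x2", "cat")]
peers = [Counter({"cat": 3}), Counter({"dog": 1})]       # summaries from peers
incoming = [("x3", "cat"), ("x4", "dog"), ("x5", "bird")]
print(schedule_caching(incoming, local, peers, capacity=2))
# -> keeps the 'bird' and 'dog' samples, which are scarce across the cluster
```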
Mobile networks are experiencing a tremendous increase in data volume and user density. An efficient technique to alleviate this issue is to bring the data closer to the users by exploiting the caches of edge network nodes, such as fixed or mobile access points and even user devices. Meanwhile, the fusion of machine learning and wireless networks offers a viable way for network optimization, as opposed to traditional optimization approaches, which incur high complexity or fail to provide optimal solutions. Among the various machine learning categories, reinforcement learning operates in an online and autonomous manner without relying on large sets of historical data for training. In this survey, reinforcement learning-aided mobile edge caching is presented, aiming at highlighting the achieved network gains over conventional caching approaches. Taking into account the heterogeneity of sixth-generation (6G) networks in various wireless settings, such as fixed, vehicular, and flying networks, learning-aided edge caching is presented, departing from traditional architectures. Furthermore, a categorization according to the desirable performance metric, such as spectral, energy, and caching efficiency, average delay, and backhaul and fronthaul offloading, is provided. Finally, several open issues are discussed, targeting to stimulate further interest in this important research field.
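To make the point about online learning without historical training data concrete, here is a tiny Q-learning sketch of a cache-admission decision that updates its value estimates purely from observed rewards; the state encoding, reward model, and all names are illustrative assumptions, not drawn from the survey.

```python
# A tiny illustrative sketch of RL-aided caching: a Q-learning agent decides
# cache/skip per request and learns online from observed rewards only.
# All names and the reward model are assumptions for illustration.

import random

Q = {}                                       # (state, action) -> value estimate
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def choose(state, actions=("cache", "skip")):
    if random.random() < EPSILON:
        return random.choice(actions)        # explore
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def update(state, action, reward, next_state, actions=("cache", "skip")):
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# One interaction step: state = requested video id, reward = 1 on a cache hit.
state = "v7"
action = choose(state)
reward = 1.0 if action == "cache" else 0.0   # toy reward signal
update(state, action, reward, next_state="v3")
print(action, Q)
```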