For ultra-dense networks with wireless backhaul, the caching strategy at small base stations (SBSs), which usually have limited storage, is critical for serving massive high-data-rate requests. Since the content popularity profile varies with time in an unknown way, we exploit reinforcement learning (RL) to design a cooperative caching strategy with maximum-distance separable (MDS) coding. We model MDS-coded cooperative caching as a Markov decision process to capture the popularity dynamics and maximize the long-term expected cumulative traffic load served directly by the SBSs without accessing the macro base station. For the formulated problem, we first find the optimal solution for a small-scale system by embedding the cooperative MDS coding into Q-learning. To cope with the large-scale case, we approximate the state-action value function heuristically. The approximated function includes only a small number of learnable parameters and enables us to propose a fast and efficient action-selection approach, which dramatically reduces the complexity. Numerical results verify the optimality/near-optimality of the proposed RL-based algorithms, show their superiority over the baseline schemes, and exhibit good robustness to different environments.
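As an illustration of the learning component described above, the following minimal sketch shows tabular Q-learning with epsilon-greedy action selection. The state/action encoding, the reward model, and the hyperparameters are placeholders for illustration only, not the paper's actual MDP formulation.

```python
import numpy as np

# Minimal tabular Q-learning sketch (illustrative; state/action encoding,
# reward model, and hyperparameters are assumptions, not from the paper).
n_states, n_actions = 100, 10    # hypothetical discretised cache states / placement actions
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical environment: returns a served-traffic reward and the next state."""
    reward = np.random.rand()                   # stand-in for traffic served directly by SBSs
    next_state = np.random.randint(n_states)    # stand-in for the popularity transition
    return reward, next_state

state = 0
for t in range(10000):
    # Epsilon-greedy selection over (coded) placement actions.
    if np.random.rand() < eps:
        action = np.random.randint(n_actions)
    else:
        action = int(np.argmax(Q[state]))
    reward, next_state = step(state, action)
    # Q-learning update toward the long-term expected cumulative reward.
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```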
We consider distributed caching of content across several small base stations (SBSs) in a wireless network, where the content is encoded using a maximum distance separable code. Specifically, we apply soft time-to-live (STTL) cache management policies, where coded packets may be evicted from the caches at periodic times. We propose a reinforcement learning (RL) approach to find coded STTL policies minimizing the overall network load. We demonstrate that such caching policies achieve almost the same network load as policies obtained through optimization, where the latter assumes perfect knowledge of the distribution of times between file requests as well as the distribution of the number of SBSs within communication range of a user placing a request. We also suggest a multi-agent RL (MARL) framework for the scenario of non-uniformly distributed requests in space. For such a scenario, we show that MARL caching policies achieve a lower network load than optimized caching policies that assume uniform request placement. We also provide convincing evidence that synchronous updates offer a lower network load than asynchronous updates for spatially homogeneous renewal request processes, due to the memory of the renewal processes.
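To make the soft TTL idea concrete, here is a small sketch of a periodic eviction step that keeps a policy-defined fraction of a file's MDS-coded packets as a function of the time since its last request. The policy vector and the numbers are illustrative assumptions, not the learned RL/MARL policy.

```python
# Illustrative soft-TTL eviction step (hypothetical policy representation,
# not the paper's learned policy).
def sttl_evict(cached_packets, age_slots, keep_fraction):
    """Keep a policy-defined fraction of MDS-coded packets for a file whose
    last request occurred `age_slots` review periods ago."""
    frac = keep_fraction[min(age_slots, len(keep_fraction) - 1)]
    return int(round(cached_packets * frac))

keep_fraction = [1.0, 0.75, 0.5, 0.25, 0.0]   # hypothetical decaying schedule
print(sttl_evict(cached_packets=8, age_slots=2, keep_fraction=keep_fraction))  # -> 4
```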
In an ultra-dense network (UDN) where there are more base stations (BSs) than active users, many BSs may be instantaneously left idle. Thus, how to utilize these dormant BSs by means of cooperative transmission is an interesting question. In this paper, we investigate the performance of a UDN with two types of cooperation schemes: non-coherent joint transmission (JT) without channel state information (CSI) and coherent JT with full CSI knowledge. We consider a bounded dual-slope path loss model to describe UDN environments where a user has several BSs in the near-field and the rest in the far-field. Numerical results show that non-coherent JT cannot improve the user spectral efficiency (SE) due to the simultaneous increase in signal and interference powers. For coherent JT, the achievable SE gain depends on the range of the near-field, the relative densities of BSs and users, and the CSI accuracy. Finally, we assess the energy efficiency (EE) of cooperation in UDN. Despite incurring extra energy consumption, cooperation can still improve EE under certain conditions.
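The gap between the two JT modes can be seen in a toy SINR calculation: with non-coherent JT the received powers add, whereas with coherent JT and perfect CSI the received amplitudes add. The sketch below uses an assumed bounded dual-slope path loss model and made-up distances and parameters purely for illustration.

```python
import numpy as np

# Toy SE comparison of non-coherent vs coherent JT under a bounded dual-slope
# path loss model (all distances and parameters are illustrative assumptions).
def path_loss(d, r_near=50.0, alpha_near=2.0, alpha_far=4.0):
    """Bounded dual-slope path loss: near-field exponent up to r_near, far-field beyond."""
    d = max(d, 1.0)                                   # bound the loss at very short range
    if d <= r_near:
        return d ** (-alpha_near)
    return (r_near ** (alpha_far - alpha_near)) * d ** (-alpha_far)

coop_dists = [40.0, 60.0]             # BSs jointly serving the user
intf_dists = [120.0, 150.0, 200.0]    # interfering BSs
noise = 1e-9

gains = [path_loss(d) for d in coop_dists]
interference = sum(path_loss(d) for d in intf_dists)

# Non-coherent JT: received powers add.
sinr_nc = sum(gains) / (interference + noise)
# Coherent JT with perfect CSI: amplitudes add, so the signal term is (sum of sqrt gains)^2.
sinr_c = sum(np.sqrt(g) for g in gains) ** 2 / (interference + noise)

print("SE non-coherent:", np.log2(1 + sinr_nc))
print("SE coherent:   ", np.log2(1 + sinr_c))
```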
In this paper, cooperative caching is investigated in fog radio access networks (F-RAN). To maximize the offloaded traffic, a cooperative caching optimization problem is formulated. By analyzing the relationship between clustering and cooperation and utilizing the solutions of knapsack problems, the above challenging optimization problem is transformed into a clustering subproblem and a content placement subproblem. To further reduce complexity, we propose an effective graph-based approach to solve the two subproblems. In the graph-based clustering approach, a node graph and a weighted graph are constructed. By setting the weights of the vertices of the weighted graph to be the incremental offloaded traffic of their corresponding complete subgraphs, the objective cluster sets can be readily obtained by using an effective greedy algorithm to search for the max-weight independent subset. In the graph-based content placement approach, a redundancy graph is constructed by removing the edges in the complete subgraphs of the node graph corresponding to the obtained cluster sets. Furthermore, we enhance the caching decisions to ensure that each duplicate file is cached only once. Compared with traditional approximate solutions, our proposed graph-based approach has lower complexity. Simulation results show remarkable improvements in terms of offloaded traffic by using our proposed approach.
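Since the content placement subproblem is described as relying on knapsack solutions, the following toy sketch solves a single-cluster 0/1 knapsack by dynamic programming. The file sizes, per-file offloaded-traffic values, and cache capacity are made-up illustrative numbers, not the paper's actual formulation.

```python
# Toy 0/1 knapsack content placement for one cluster (sizes, values, and
# capacity are illustrative assumptions).
def place_contents(values, sizes, capacity):
    """Return the file indices maximizing offloaded traffic under the cache budget."""
    n = len(values)
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(capacity + 1):
            dp[i][c] = dp[i - 1][c]
            if sizes[i - 1] <= c:
                dp[i][c] = max(dp[i][c], dp[i - 1][c - sizes[i - 1]] + values[i - 1])
    # Backtrack to recover the chosen files.
    chosen, c = [], capacity
    for i in range(n, 0, -1):
        if dp[i][c] != dp[i - 1][c]:
            chosen.append(i - 1)
            c -= sizes[i - 1]
    return sorted(chosen)

# Files with offloaded-traffic values, storage sizes, and a cache of capacity 6.
print(place_contents(values=[10, 7, 5, 3], sizes=[4, 3, 2, 1], capacity=6))  # -> [0, 2]
```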
In a traditional $(H, r)$ combination network, each user is connected to a unique set of $r$ relays. However, few research efforts have considered the $(H, r, u)$ multiaccess combination network, where each set of $u$ users is connected to a unique set of $r$ relays. A naive strategy to obtain a coded caching scheme for the $(H, r, u)$ multiaccess combination network is to apply a coded caching scheme for a traditional $(H, r)$ combination network $u$ times. Obviously, the transmission load for each relay under this trivial scheme is exactly $u$ times that of the original scheme, which implies that as the number of users multiplies, the transmission load for each relay also multiplies. Therefore, it is very meaningful to design a coded caching scheme for the $(H, r, u)$ multiaccess combination network with a lower transmission load per relay. In this paper, by directly applying the well-known coding method (proposed by Zewail and Yener) for the $(H, r)$ combination network, a coded caching scheme (ZY scheme) for the $(H, r, u)$ multiaccess combination network is obtained. However, the subpacketization of this scheme grows exponentially with the number of users, which leads to a high implementation complexity. In order to reduce the subpacketization, a direct construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed by means of combinatorial design theory, where the parameter $u$ must be a combinatorial number. For an arbitrary parameter $u$, a hybrid construction of a coded caching scheme for the $(H, r, u)$ multiaccess combination network is proposed based on our direct construction. Theoretical and numerical analysis shows that our last two schemes have a smaller transmission load per relay than the trivial scheme and a much lower subpacketization than the ZY scheme.
In this paper, the cooperative caching problem in fog radio access networks (F-RAN) is investigated. To maximize the incremental offloaded traffic, we formulate the clustering optimization problem with the consideration of cooperative caching and local content popularity, which falls into the scope of combinatorial programming. We then propose an effective graph-based approach to solve this challenging problem. Firstly, a node graph is constructed with its vertex set representing the considered fog access points (F-APs) and its edge set reflecting the potential cooperation among the F-APs. Then, by exploiting the adjacency table of each vertex of the node graph, we propose to obtain the complete subgraphs by indirectly searching for the maximal complete subgraphs, which reduces the search complexity. Furthermore, by using the complete subgraphs so obtained, a weighted graph is constructed. By setting the weights of the vertices of the weighted graph to be the incremental offloaded traffic of their corresponding complete subgraphs, the original clustering optimization problem can be transformed into an equivalent 0-1 integer programming problem. The max-weight independent subset of the vertex set of the weighted graph, which is equivalent to the objective cluster sets, can then be readily obtained by solving the above optimization problem through the greedy algorithm that we propose. Our proposed graph-based approach has a much lower complexity than the brute-force approach, whose complexity is exponential. Simulation results show remarkable improvements in terms of offloading gain by using our proposed approach.
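To illustrate the clustering step, the sketch below runs a simple greedy search for a max-weight independent subset on a tiny made-up weighted graph, where vertices stand for candidate F-AP clusters, weights for their incremental offloaded traffic, and edges mark clusters that cannot be selected together. This is only an assumed toy instance of the greedy idea, not the paper's exact algorithm.

```python
# Illustrative greedy search for a max-weight independent subset of the weighted
# graph (vertices = candidate F-AP clusters, weights = incremental offloaded traffic).
# The graph data below is a made-up toy instance, not from the paper.
def greedy_mwis(weights, adjacency):
    """Repeatedly pick the heaviest remaining vertex and discard its neighbours."""
    remaining = set(range(len(weights)))
    selected = []
    while remaining:
        v = max(remaining, key=lambda u: weights[u])
        selected.append(v)
        remaining -= {v} | adjacency[v]   # conflicting clusters cannot both be selected
    return selected

weights = [5.0, 3.0, 4.0, 2.0]                       # incremental offloaded traffic per cluster
adjacency = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}   # edges mark mutually exclusive clusters
print(greedy_mwis(weights, adjacency))               # -> [0, 2]: clusters 0 and 2 are compatible
```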