This letter studies a basic wireless caching network where a source server is connected to a cache-enabled base station (BS) that serves multiple requesting users. A critical problem is how to improve the cache hit rate under dynamic content popularity. To solve this problem, the primary contribution of this work is a novel dynamic content update strategy developed with the aid of deep reinforcement learning. Since the BS is unaware of content popularities, the proposed strategy dynamically updates the BS cache according to the time-varying requests and the currently cached contents. Towards this end, we model the cache update problem as a Markov decision process and put forth an efficient algorithm that builds upon the long short-term memory network and external memory to enhance the decision-making ability of the BS. Simulation results show that the proposed algorithm achieves not only a higher average reward than the deep Q-network (DQN), but also a higher cache hit rate than existing replacement policies such as least recently used (LRU), first-in first-out (FIFO), and DQN-based algorithms.
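The cache-update MDP described above can be sketched minimally as follows. This is an illustrative toy environment, not the paper's implementation: the class name, the state encoding (current request plus cached set), and the hit-based reward are all assumptions made for clarity.

```python
import random

class CacheUpdateEnv:
    """Toy MDP for BS cache updates (hypothetical sketch).

    State:  (current request, tuple of cached content IDs).
    Action: index of the cache slot to overwrite on a miss;
            an action equal to cache_size means "do not update".
    Reward: 1.0 on a cache hit, 0.0 otherwise.
    """

    def __init__(self, n_contents=10, cache_size=3, seed=0):
        self.n_contents = n_contents
        self.cache_size = cache_size
        self.rng = random.Random(seed)
        self.cache = list(range(cache_size))        # initial placement
        self.request = self.rng.randrange(n_contents)

    def step(self, action):
        hit = self.request in self.cache
        reward = 1.0 if hit else 0.0
        # On a miss, the chosen slot is replaced by the requested content.
        if not hit and action < self.cache_size:
            self.cache[action] = self.request
        # Draw the next request (an RL agent would learn its statistics).
        self.request = self.rng.randrange(self.n_contents)
        return (self.request, tuple(self.cache)), reward
```

A learning agent (DQN, or the LSTM-augmented policy of the letter) would map the state to an update action so as to maximize the long-run hit rate.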
We consider distributed caching of content across several small base stations (SBSs) in a wireless network, where the content is encoded using a maximum distance separable (MDS) code. Specifically, we apply soft time-to-live (STTL) cache management policies, where coded packets may be evicted from the caches at periodic times. We propose a reinforcement learning (RL) approach to find coded STTL policies minimizing the overall network load. We demonstrate that such caching policies achieve almost the same network load as policies obtained through optimization, where the latter assumes perfect knowledge of the distribution of times between file requests as well as the distribution of the number of SBSs within communication range of a user placing a request. We also suggest a multi-agent RL (MARL) framework for the scenario of requests that are non-uniformly distributed in space. For such a scenario, we show that MARL caching policies achieve a lower network load than optimized caching policies that assume uniform request placement. We also provide convincing evidence that synchronous updates offer a lower network load than asynchronous updates for spatially homogeneous renewal request processes, owing to the memory of the renewal processes.
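To make the coded-caching quantities above concrete, here is a minimal sketch. The function names and the linear MDS-recovery model are illustrative assumptions, not the paper's implementation: with an MDS code, a user in range of `n_sbs` SBSs that each cache a fraction `mu` of distinct coded packets can recover `min(1, n_sbs * mu)` of the file locally, fetching the rest over the backhaul.

```python
def backhaul_load(mu, n_sbs):
    """Fraction of a file fetched over the backhaul when each of n_sbs
    SBSs in range caches a fraction mu of distinct MDS-coded packets."""
    return max(0.0, 1.0 - n_sbs * mu)

def sttl_evict(mu, keep):
    """Soft-TTL periodic update: keep only a fraction `keep` of the
    currently cached coded packets (soft eviction)."""
    return mu * keep
```

An RL agent choosing `keep` at each periodic update time trades cache occupancy against the expected backhaul load until the next request.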
Fog Radio Access Network (F-RAN) architectures can leverage both cloud processing and edge caching for content delivery to the users. To this end, F-RAN utilizes caches at the edge nodes (ENs) and fronthaul links connecting a cloud processor to the ENs. Assuming time-invariant content popularity, existing information-theoretic analyses of content delivery in F-RANs rely on offline caching with separate content placement and delivery phases. In contrast, this work focuses on the scenario in which the set of popular content is time-varying, hence necessitating the online replenishment of the ENs' caches along with the delivery of the requested files. The analysis is centered on the characterization of the long-term Normalized Delivery Time (NDT), which captures the temporal dependence of the coding latencies accrued across multiple time slots in the high signal-to-noise ratio regime. Online edge caching and delivery schemes are investigated for both serial and pipelined transmission modes across fronthaul and edge segments. Analytical results demonstrate that, in the presence of a time-varying content popularity, the rate of the fronthaul links sets a fundamental limit on the long-term NDT of the F-RAN system. Analytical results are further verified by numerical simulation, yielding important design insights.
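As context for the metric analyzed above, one common formalization (hedged; the notation may differ from the paper's) defines the per-slot NDT as the high-SNR delivery latency normalized by the time needed to deliver a file at the interference-free point-to-point rate, and the long-term NDT as its time average:

```latex
\delta_t \;=\; \lim_{P \to \infty} \frac{\mathbb{E}[T_t]}{L/\log P},
\qquad
\bar{\delta} \;=\; \limsup_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} \delta_t,
```

where $T_t$ is the delivery latency in slot $t$, $L$ is the file size in bits, and $P$ is the signal-to-noise ratio. The long-term average $\bar{\delta}$ is what couples the caching decisions across slots under time-varying popularity.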
With the rapid development of wireless communication techniques, they are now widely used across many fields for convenient and efficient data transmission. Unlike the commonly used assumption of a time-invariant wireless channel, we focus on the time-varying wireless downlink channel in order to better reflect practical conditions. Our objective is to maximize the sum rate in the time-varying channel under constraints on the cut-off signal-to-interference-plus-noise ratio (SINR), transmit power, and beamforming. To adapt to the rapidly changing channel, we abandon the frequently used convex optimization approach and instead apply deep reinforcement learning algorithms in this paper. Considering conventional measures such as power control, interference coordination, and beamforming, continuous adjustment of these measures must be taken into account, while the sparse-reward problem caused by aborted episodes is an important bottleneck that should not be ignored. Therefore, after analyzing the relevant algorithms, we propose two algorithms in this work: the Deep Deterministic Policy Gradient (DDPG) algorithm and a hierarchical DDPG. Regarding these two algorithms, to overcome the limitation of discrete outputs, DDPG combines the actor-critic architecture with deep Q-learning (DQN), so that it can output continuous actions without sacrificing the advantages brought by DQN while also improving performance. Furthermore, to address the challenge of sparse rewards, we adopt a meta-policy from hierarchical reinforcement learning, splitting the single DDPG agent into a meta-controller and a controller to form the hierarchical DDPG. Our simulation results demonstrate that the proposed DDPG and hierarchical DDPG perform well in terms of coverage, convergence, and sum-rate performance.
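Two of the DDPG ingredients mentioned above, a deterministic policy with a continuous bounded action and the Polyak (soft) target-network update, can be sketched as follows. This is a minimal illustration under assumed names: the linear feature map stands in for the actor network, and the action is a transmit power squashed into a hypothetical interval [0, p_max].

```python
import numpy as np

def actor(state, w, p_max=1.0):
    """Deterministic policy for continuous power control: a linear
    feature map squashed by tanh into the feasible interval [0, p_max]."""
    return p_max * 0.5 * (np.tanh(state @ w) + 1.0)

def soft_update(w_target, w_source, tau=0.005):
    """Polyak averaging used for DDPG target networks:
    w_target <- (1 - tau) * w_target + tau * w_source."""
    return (1.0 - tau) * w_target + tau * w_source
```

In the hierarchical variant, a meta-controller would pick a subgoal at a slower timescale and the controller would run a policy like `actor` toward that subgoal, densifying the reward signal.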
Edge caching can effectively reduce the backhaul burden at the core network and increase the quality-of-service at wireless edge nodes. However, the beneficial role of edge caching cannot be fully realized when the offloading link is in deep fade. Fortunately, the impairments induced by the wireless propagation environment can be mitigated by a reconfigurable intelligent surface (RIS). In this paper, a new RIS-aided edge caching system is proposed, where a network cost minimization problem is formulated to optimize the content placement at the cache units, the active beamforming at the base station, and the passive phase shifting at the RIS. After decoupling the content placement subproblem from the hybrid beamforming design, we propose an alternating optimization algorithm to tackle the active beamforming and passive phase shifting. For active beamforming, we transform the problem into a semidefinite program (SDP) and prove that the optimal solution of the SDP is always rank-1. For passive phase shifting, we introduce a block coordinate descent method to alternately optimize the auxiliary variables and the RIS phase shifts. Further, a conjugate gradient algorithm based on manifold optimization is proposed to deal with the non-convex unit-modulus constraints. Numerical results show that our RIS-aided edge caching design can effectively decrease the network cost in terms of backhaul capacity and power consumption.
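The unit-modulus constraint handled above by manifold optimization can be illustrated with a simplified sketch: plain Riemannian gradient descent (rather than the conjugate gradient variant) on the complex circle manifold, applied to a generic least-squares surrogate objective. The matrix `A`, vector `b`, step size, and iteration count are all illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
b = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def objective(theta):
    """Surrogate cost ||A theta - b||^2 (stand-in for the network cost)."""
    return np.linalg.norm(A @ theta - b) ** 2

def riemannian_grad(theta, egrad):
    """Project the Euclidean gradient onto the tangent space of the
    complex circle manifold {theta : |theta_i| = 1 for all i}."""
    return egrad - np.real(egrad * np.conj(theta)) * theta

def retract(theta):
    """Map a tangent-space step back onto the manifold by
    entrywise normalization to unit modulus."""
    return theta / np.abs(theta)

def manifold_descent(theta0, step=1e-3, iters=500):
    theta = theta0
    for _ in range(iters):
        egrad = 2.0 * A.conj().T @ (A @ theta - b)  # Euclidean gradient
        theta = retract(theta - step * riemannian_grad(theta, egrad))
    return theta
```

A conjugate gradient version would additionally combine the current Riemannian gradient with a vector-transported previous search direction; the projection and retraction steps are the same.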
In this paper, we study the resource allocation problem for a cooperative device-to-device (D2D)-enabled wireless caching network, where each user randomly caches popular contents to its memory and shares the contents with nearby users through D2D links. To enhance the throughput of spectrum sharing D2D links, which may be severely limited by the interference among D2D links, we enable the cooperation among some of the D2D links to eliminate the interference among them. We formulate a joint link scheduling and power allocation problem to maximize the overall throughput of cooperative D2D links (CDLs) and non-cooperative D2D links (NDLs), which is NP-hard. To solve the problem, we decompose it into two subproblems that maximize the sum rates of the CDLs and the NDLs, respectively. For CDL optimization, we propose a semi-orthogonal-based algorithm for joint user scheduling and power allocation. For NDL optimization, we propose a novel low-complexity algorithm to perform link scheduling and develop a Difference of Convex functions (D.C.) programming method to solve the non-convex power allocation problem. Simulation results show that the cooperative transmission can significantly increase both the number of served users and the overall system throughput.
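The D.C. programming idea used above for the non-convex power allocation can be sketched on a toy two-link problem. Each link's rate is a difference of two concave functions of the power vector, and the convex-concave procedure iteratively linearizes the subtracted term; all channel gains and parameters below are hypothetical, and the inner convex subproblem is solved by brute-force grid search rather than a real convex solver.

```python
import numpy as np

# Toy two-link setup: G[i, j] is the channel gain from transmitter j
# to receiver i (illustrative values).
G = np.array([[1.0, 0.2],
              [0.3, 0.8]])
SIGMA, P_MAX = 0.1, 1.0
J = G - np.diag(np.diag(G))          # interference-only gains

def f(p):
    """Concave part: sum_i log2(noise + signal_i + interference_i)."""
    return np.sum(np.log2(SIGMA + G @ p))

def g(p):
    """Concave part that is subtracted: sum_i log2(noise + interference_i).
    Sum rate = f(p) - g(p) = sum_i log2(1 + SINR_i)."""
    return np.sum(np.log2(SIGMA + J @ p))

def grad_g(p):
    return (J.T @ (1.0 / (SIGMA + J @ p))) / np.log(2.0)

def dc_iterate(p0, iters=15, grid=101):
    """Convex-concave procedure: since g is concave, its linearization
    upper-bounds it, so f(p) - grad_g(pk) @ p lower-bounds the sum rate;
    maximizing this bound over the power box yields monotone ascent."""
    axis = np.linspace(0.0, P_MAX, grid)
    P1, P2 = np.meshgrid(axis, axis)
    pts = np.stack([P1.ravel(), P2.ravel()], axis=1)
    f_vals = np.sum(np.log2(SIGMA + pts @ G.T), axis=1)
    p = p0
    for _ in range(iters):
        p = pts[np.argmax(f_vals - pts @ grad_g(p))]
    return p
```

Each iteration's surrogate is a lower bound that is tight at the current iterate, so the sum rate is non-decreasing across iterations, which is the property that makes D.C. programming attractive for the NDL power allocation subproblem.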