
Distributed Learning Algorithms for Opportunistic Spectrum Access in Infrastructure-less Networks

 Added by Rohit Kumar
Publication date: 2018
Research language: English


Opportunistic spectrum access (OSA) for infrastructure-less (or cognitive ad-hoc) networks has received significant attention thanks to emerging paradigms such as the Internet of Things (IoT) and smart grids. Research in this area has evolved from the ρ^rand algorithm, which requires prior knowledge of the number of active secondary users (SUs), to the musical chairs (MC) algorithm, where the number of SUs is unknown and estimated independently at each SU. These works ignore the number of collisions in the network, which wastes power and shortens the effective life of battery-operated SUs. In this paper, we develop algorithms for OSA that learn faster and incur fewer collisions, i.e., are more energy efficient. We consider two types of infrastructure-less decentralized networks: 1) a static network, where the number of SUs is fixed but unknown, and 2) a dynamic network, where SUs can independently enter or leave the network. We set up the problem as a multi-player multi-armed bandit and develop two distributed algorithms. The analysis shows that when all SUs independently implement the proposed algorithms, the loss in throughput compared to the optimal throughput, i.e., the regret, is constant with high probability, and that the proposed algorithms significantly outperform existing ones both in terms of regret and number of collisions. Fewer collisions make them well suited for battery-operated SU terminals. We validate our claims through exhaustive simulated experiments as well as realistic USRP-based experiments in a real radio environment.
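To make the multi-player bandit formulation concrete, the sketch below simulates a musical-chairs-style baseline in the spirit of the MC algorithm the paper builds on: each SU explores channels at random to estimate their availability, then settles on ("locks") one of the estimated best channels once it transmits on it without collision. The channel statistics, exploration length, and the assumption that the number of SUs is known are illustrative simplifications; the paper's algorithms estimate the number of SUs from collision feedback and reduce collisions further.

```python
# Minimal musical-chairs-style multi-player bandit sketch (illustrative only).
# N is assumed known here for brevity; the paper estimates it from collisions.
import numpy as np

rng = np.random.default_rng(0)
K, N, T0, T = 6, 3, 3000, 10000          # channels, SUs, exploration rounds, horizon
mu = rng.uniform(0.3, 0.9, size=K)       # unknown channel availability probabilities

est = np.zeros((N, K))                   # per-SU empirical channel means
cnt = np.zeros((N, K))
locked = [-1] * N                        # channel each SU has settled on (-1 = not yet)

for t in range(T):
    choices = []
    for u in range(N):
        if locked[u] >= 0:
            choices.append(locked[u])                 # keep the claimed channel
        elif t < T0:
            choices.append(int(rng.integers(K)))      # uniform random exploration
        else:
            top = np.argsort(est[u])[-N:]             # estimated N best channels
            choices.append(int(rng.choice(top)))      # pick a random "chair"
    for u, c in enumerate(choices):
        if choices.count(c) > 1:
            continue                                  # collision: transmission lost
        idle = rng.random() < mu[c]                   # sense/transmit on the channel
        cnt[u, c] += 1
        est[u, c] += (idle - est[u, c]) / cnt[u, c]   # incremental mean update
        if t >= T0 and locked[u] < 0:
            locked[u] = c                             # claim this channel ("chair")

print("settled on:", locked, "| best channels:", list(np.argsort(mu)[-N:]))
```

After the exploration phase the SUs orthogonalize onto distinct good channels, which is the behavior the paper improves upon by settling faster and with fewer collisions.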



Related research

111 - Chao Gan, Ruida Zhou, Jing Yang 2018
In this paper, we investigate cost-aware joint learning and optimization for multi-channel opportunistic spectrum access in a cognitive radio system. We consider a discrete-time model where the time axis is partitioned into frames. Each frame consists of a sensing phase followed by a transmission phase. During the sensing phase, the user is able to sense a subset of channels sequentially before it decides to use one of them in the following transmission phase. We assume the channel states alternate between busy and idle according to independent Bernoulli random processes from frame to frame. To capture the inherent uncertainty in channel sensing, we assume the reward of each transmission when the channel is idle is a random variable. We also associate random costs with sensing and transmission actions. Our objective is to understand how the costs and reward of the actions affect the optimal behavior of the user in both offline and online settings, and to design the corresponding opportunistic spectrum access strategies to maximize the expected cumulative net reward (i.e., reward minus cost). We start with an offline setting where the statistics of the channel status, costs, and reward are known beforehand. We show that the optimal policy exhibits a recursive double-threshold structure, and the user needs to compare the channel statistics with those thresholds sequentially in order to decide its actions. With such insights, we then study the online setting, where the statistical information of the channels, costs, and reward is unknown a priori. We judiciously balance exploration and exploitation, and show that the cumulative regret scales as O(log T). We also establish a matching lower bound, which implies that our online algorithm is order-optimal. Simulation results corroborate our theoretical analysis.
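As a rough illustration of the cost-aware online setting, the sketch below uses a plain UCB index to decide the sensing order within each frame and transmits on the first idle channel found, charging sensing and transmission costs against the reward. The idle probabilities, cost values, and the stop-at-first-idle rule are assumptions for illustration; this is not the recursive double-threshold policy described in the abstract.

```python
# Simplified cost-aware sequential sensing with a UCB-style learner (illustrative).
import math, random

K, T = 5, 5000
theta = [0.2, 0.4, 0.5, 0.7, 0.8]      # true idle probabilities (unknown to the learner)
c_sense, c_tx, reward = 0.05, 0.1, 1.0

n = [0] * K                            # number of times each channel was sensed
idle_hat = [0.0] * K                   # empirical idle probability per channel
net = 0.0                              # cumulative net reward (reward minus costs)

for t in range(1, T + 1):
    # UCB index per channel; unsensed channels get priority
    ucb = [idle_hat[k] + math.sqrt(2 * math.log(t) / n[k]) if n[k] else float("inf")
           for k in range(K)]
    order = sorted(range(K), key=lambda k: -ucb[k])
    for k in order:                    # sense sequentially in UCB order
        net -= c_sense
        idle = random.random() < theta[k]
        n[k] += 1
        idle_hat[k] += (idle - idle_hat[k]) / n[k]
        if idle:                       # transmit on the first idle channel found
            net += reward - c_tx
            break

print(f"average net reward per frame: {net / T:.3f}")
```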
Owing to the ever-increasing demand for wireless spectrum, Cognitive Radio (CR) was introduced as a technique to attain high spectral efficiency. As the number of secondary users (SUs) connecting to the cognitive radio network rises, there is an imminent need for centralized algorithms that provide high throughput and energy efficiency for the SUs while ensuring minimum interference to the licensed users. In this work, we propose a multi-stage algorithm that: 1) effectively assigns the available channels to the SUs, 2) employs a non-parametric learning framework to estimate the primary traffic distribution to minimize sensing, and 3) uses an adaptive framework to ensure that collisions with the primary user stay below the specified threshold. We provide comprehensive empirical validation of the method against other approaches.
Spectrum sharing among users is a fundamental problem in the management of any wireless network. In this paper, we discuss the problem of distributed spectrum collaboration without central management under general unknown channels. Since the cost of communication, coordination, and control rapidly increases with the number of devices and the expanding bandwidth used, there is an obvious need to develop distributed techniques for spectrum collaboration in which no explicit signaling is used. In this paper, we combine game-theoretic insights with deep Q-learning to provide a novel asymptotically optimal solution to the spectrum collaboration problem. We propose a deterministic distributed deep reinforcement learning (D3RL) mechanism using a deep Q-network (DQN). It chooses channels using the Q-values and the channel loads, limiting the options available to the user to a few channels with the highest Q-values and, among those, selecting the least loaded channel. Using insights from both game theory and combinatorial optimization, we show that this technique is asymptotically optimal for large overloaded networks. The selected channel and the outcome of the successful transmission are fed back into the training of the deep Q-network to incorporate them into the learning of the Q-values. We also analyze the performance of D3RL to understand its behavior in different settings.
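The channel-selection rule described above (restrict attention to the channels with the highest Q-values, then pick the least loaded among them) can be sketched as follows. The Q-values, loads, and candidate-set size below are placeholders; in the actual D3RL mechanism the Q-values come from a trained deep Q-network and the loads from transmission feedback.

```python
# Sketch of a D3RL-style channel choice (illustrative inputs).
import numpy as np

def d3rl_choose(q_values: np.ndarray, loads: np.ndarray, m: int = 3) -> int:
    """Return the index of the least-loaded channel among the m channels
    with the highest Q-values."""
    top_m = np.argsort(q_values)[-m:]           # candidate set by Q-value
    return int(top_m[np.argmin(loads[top_m])])  # least-loaded candidate

q = np.array([0.9, 0.2, 0.75, 0.6, 0.85])       # example Q-values (assumed)
load = np.array([5, 1, 2, 0, 4])                # example observed channel loads (assumed)
print("chosen channel:", d3rl_choose(q, load))
```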
137 - Wei Cui, Wei Yu 2020
This paper proposes a novel scalable reinforcement learning approach for simultaneous routing and spectrum access in wireless ad-hoc networks. In most previous works on reinforcement learning for network optimization, the network topology is assumed to be fixed, and a different agent is trained for each transmission node; this limits scalability and generalizability. Further, routing and spectrum access are typically treated as separate tasks. Moreover, the optimization objective is usually a cumulative metric along the route, e.g., the number of hops or the delay. In this paper, we account for the physical-layer signal-to-interference-plus-noise ratio (SINR) in a wireless network and further show that a bottleneck objective, such as the minimum SINR along the route, can also be optimized effectively using reinforcement learning. Specifically, we propose a scalable approach in which a single agent is associated with each flow and makes routing and spectrum access decisions as it moves along the frontier nodes. The agent is trained according to the physical-layer characteristics of the environment using a novel rewarding scheme based on Monte Carlo estimation of the future bottleneck SINR. It learns to avoid interference by intelligently making joint routing and spectrum allocation decisions based on the geographical location information of the neighbouring nodes.
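A small sketch of the bottleneck objective mentioned above: the quality of a route is taken to be the minimum SINR over its links. The signal, interference, and noise values are illustrative assumptions; in the paper the future bottleneck SINR is estimated via Monte Carlo rollouts during training.

```python
# Bottleneck-SINR objective over a route (illustrative values).
import numpy as np

def bottleneck_sinr(signal: np.ndarray, interference: np.ndarray, noise: float) -> float:
    """Minimum SINR (linear scale) over the links of a route."""
    sinr = signal / (interference + noise)
    return float(sinr.min())

sig = np.array([1.0, 0.8, 1.2])        # received signal power per hop (assumed)
intf = np.array([0.1, 0.3, 0.05])      # aggregate interference per hop (assumed)
print("bottleneck SINR:", bottleneck_sinr(sig, intf, noise=0.01))
```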
145 - Xiang Tan, Li Zhou, Haijun Wang 2021
With the development of 5G and the Internet of Things, large numbers of wireless devices need to share the limited spectrum resources. Dynamic spectrum access (DSA) is a promising paradigm to remedy the inefficient spectrum utilization brought about by the historical command-and-control approach to spectrum allocation. In this paper, we investigate the distributed multi-user DSA problem in a typical multi-channel cognitive radio network. The problem is formulated as a decentralized partially observable Markov decision process (Dec-POMDP), and we propose a centralized off-line training and distributed on-line execution framework based on cooperative multi-agent reinforcement learning (MARL). We employ a deep recurrent Q-network (DRQN) to address the partial observability of the state for each cognitive user. The ultimate goal is to learn a cooperative strategy that maximizes the sum throughput of the cognitive radio network in a distributed fashion, without coordination information exchange between cognitive users. Finally, we validate the proposed algorithm in various settings through extensive experiments. From the simulation results, we observe that the proposed algorithm converges quickly and achieves near-optimal performance.
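A minimal sketch of a DRQN-style network for one cognitive user is shown below: a GRU summarizes the history of partial channel observations and a linear head outputs Q-values over channel actions. The layer sizes and the observation encoding are assumptions, not the architecture used in the paper.

```python
# Minimal DRQN-style network sketch for a single cognitive user (PyTorch).
import torch
import torch.nn as nn

class DRQN(nn.Module):
    def __init__(self, obs_dim: int, n_channels: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden, batch_first=True)   # summarizes observation history
        self.head = nn.Linear(hidden, n_channels)               # Q-values over channel actions

    def forward(self, obs_seq, h0=None):
        # obs_seq: (batch, time, obs_dim) partial observations of channel states
        out, h = self.gru(obs_seq, h0)
        return self.head(out), h          # per-step Q-values and recurrent hidden state

q_net = DRQN(obs_dim=4, n_channels=8)
obs = torch.zeros(1, 10, 4)               # dummy observation sequence
q_values, hidden = q_net(obs)
print(q_values.shape)                     # torch.Size([1, 10, 8])
```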