
Coverage Hole Detection for mmWave Networks: An Unsupervised Learning Approach

Published by: Chethan Kumar Anjinappa
Publication date: 2021
Research field: Information engineering
Paper language: English





The utilization of millimeter-wave (mmWave) bands in 5G networks poses new challenges to network planning. Vulnerability to blockages at mmWave bands can cause coverage holes (CHs) in the radio environment, leading to radio link failure when a user enters these CHs. Detecting CHs is critically important so that remedies can be introduced to improve coverage. In this letter, we propose a novel approach to identify CHs in an unsupervised fashion using a state-of-the-art manifold learning technique: uniform manifold approximation and projection (UMAP). The key idea is to preserve the local-connectedness structure inherent in the collected unlabelled channel samples, so that CHs in the service area become detectable. Our results on the DeepMIMO dataset scenario demonstrate that the proposed method can learn the structure within the data samples and reveal visible holes in the low-dimensional embedding while preserving the CH boundaries. Once the CH boundary is determined in the low-dimensional embedding, channel-based localization techniques can be applied to these samples to obtain the geographical boundaries of the CHs.
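To make the embedding step concrete, here is a minimal sketch of how unlabelled channel samples could be mapped to a 2-D embedding with the umap-learn library. The synthetic samples, dimensions, and UMAP hyperparameters below are illustrative assumptions, not the paper's setup; real inputs would be channel vectors drawn from a DeepMIMO scenario.

```python
# Minimal sketch of the UMAP-based embedding step described above.
# Assumptions: channel samples are flattened channel vectors; the random
# data below stands in for real DeepMIMO samples, and the UMAP
# hyperparameters are illustrative, not the paper's values.
import numpy as np
import umap  # pip install umap-learn

rng = np.random.default_rng(0)

# Placeholder for unlabelled channel samples collected over the service
# area: n_samples flattened channel vectors (e.g. real/imag parts stacked).
n_samples, n_features = 2000, 128
X = rng.standard_normal((n_samples, n_features))

# UMAP preserves local connectedness, so samples separated by a coverage
# hole should stay separated in the 2-D embedding, leaving a visible gap.
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2)
embedding = reducer.fit_transform(X)  # shape: (n_samples, 2)

# Inspect the embedding (e.g. with matplotlib) for low-density gaps:
# these gaps are the candidate CH boundaries to be mapped back to
# geographical coordinates via channel-based localization.
```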




Read also

Federated Learning (FL) is a distributed learning framework that can deal with the distributed issue in machine learning and still guarantee high learning performance. However, it is impractical that all users will sacrifice their resources to join the FL algorithm. This motivates us to study the incentive mechanism design for FL. In this paper, we consider a FL system that involves one base station (BS) and multiple mobile users. The mobile users use their own data to train the local machine learning model, and then send the trained models to the BS, which generates the initial model, collects local models and constructs the global model. Then, we formulate the incentive mechanism between the BS and mobile users as an auction game where the BS is an auctioneer and the mobile users are the sellers. In the proposed game, each mobile user submits its bids according to the minimal energy cost that it experiences in participating in FL. To decide winners in the auction and maximize social welfare, we propose the primal-dual greedy auction mechanism. The proposed mechanism can guarantee three economic properties, namely, truthfulness, individual rationality and efficiency. Finally, numerical results are shown to demonstrate the effectiveness of our proposed mechanism.
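As a rough illustration of how greedy winner selection in a reverse auction of this kind can be coded, the sketch below ranks users by bid per unit of contributed data and selects winners under a budget. The ranking rule, the budget constraint, and the numbers are assumptions for illustration; the paper's primal-dual payment rule, which is what secures truthfulness, is not reproduced here.

```python
# Illustrative greedy winner selection in a reverse auction, loosely in
# the spirit of the primal-dual greedy mechanism described above. Bids,
# data sizes, and the cost-effectiveness ranking are assumed for the
# example, not taken from the paper.
def greedy_auction(bids, data_sizes, budget):
    """Select winners in increasing order of bid per unit of data
    until the BS budget is exhausted; return winner indices."""
    order = sorted(range(len(bids)), key=lambda i: bids[i] / data_sizes[i])
    winners, spent = [], 0.0
    for i in order:
        if spent + bids[i] <= budget:
            winners.append(i)
            spent += bids[i]
    return winners

# Example: four mobile users bidding their minimal energy costs.
print(greedy_auction(bids=[3.0, 1.5, 4.0, 2.0],
                     data_sizes=[10, 5, 8, 10],
                     budget=6.0))  # -> [3, 0] (cheapest per unit first)
```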
Unmanned aerial vehicles (UAVs), as aerial base stations, are a promising solution for providing wireless communications, thanks to their high flexibility and autonomy. Moreover, emerging services, such as extended reality, require high-capacity communications. To achieve this, millimeter wave (mmWave), and recently, terahertz bands have been considered for UAV communications. However, communication at these high frequencies requires a line-of-sight (LoS) to the terminals, which may be located in 3D space and may have extremely limited direct-line-of-view (LoV) due to blocking objects, like buildings and trees. In this paper, we investigate the problem of determining 3D placement and orientation of UAVs such that users have guaranteed LoS coverage by at least one UAV and the signal-to-noise ratio (SNR) between the UAV-user pairs is maximized. We formulate the problem as an integer linear programming (ILP) problem and prove its NP-hardness. Next, we propose a low-complexity geometry-based greedy algorithm to solve the problem efficiently. Our simulation results show that the proposed algorithm (almost) always guarantees LoS coverage to all users in all considered simulation settings.
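A minimal set-cover style greedy sketch conveys the flavour of such a placement algorithm: repeatedly pick the candidate UAV position whose line-of-sight set covers the most still-uncovered users. The candidate positions and LoS sets are assumed inputs here; the paper's geometry-based construction and SNR maximization are not reproduced.

```python
# Set-cover style greedy sketch for the UAV placement idea above.
# los_sets and n_users are assumed inputs computed elsewhere (e.g. by
# ray-casting against a 3D building map).
def greedy_uav_placement(los_sets, n_users):
    """los_sets[p] = set of user indices with LoS from candidate position p."""
    uncovered = set(range(n_users))
    chosen = []
    while uncovered:
        # Best candidate = the one covering the most uncovered users.
        best = max(range(len(los_sets)),
                   key=lambda p: len(los_sets[p] & uncovered))
        gain = los_sets[best] & uncovered
        if not gain:  # remaining users cannot be covered by any candidate
            break
        chosen.append(best)
        uncovered -= gain
    return chosen

# Example: 4 users, 3 candidate UAV positions.
print(greedy_uav_placement([{0, 1}, {1, 2, 3}, {0, 3}], n_users=4))  # -> [1, 0]
```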
We consider a source that wishes to communicate with a destination at a desired rate, over a mmWave network where links are subject to blockage and nodes to failure (e.g., in a hostile military environment). To achieve resilience to link and node failures, we here explore a state-of-the-art Soft Actor-Critic (SAC) deep reinforcement learning algorithm, which adapts the information flow through the network without using knowledge of the link capacities or network topology. Numerical evaluations show that our algorithm can achieve the desired rate even in dynamic environments and that it is robust against blockage.
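Since SAC is a standard off-the-shelf algorithm, a hedged sketch of the training loop can be written with stable-baselines3; the placeholder Pendulum-v1 environment merely stands in for the paper's network-flow environment, whose states and actions (per-link flow decisions) are not modelled here.

```python
# Hedged sketch: training a Soft Actor-Critic agent with an off-the-shelf
# implementation (stable-baselines3), as a stand-in for the paper's SAC
# flow-adaptation agent. "Pendulum-v1" is a placeholder environment.
import gymnasium as gym
from stable_baselines3 import SAC

env = gym.make("Pendulum-v1")        # placeholder continuous-control task
model = SAC("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=10_000)  # SAC updates the actor and two critics

# Deterministic policy evaluation after training.
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```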
Splitting network computations between the edge device and a server enables low edge-compute inference of neural networks but might expose sensitive information about the test query to the server. To address this problem, existing techniques train the model to minimize information leakage for a given set of sensitive attributes. In practice, however, the test queries might contain attributes that are not foreseen during training. We propose instead an unsupervised obfuscation method to discard the information irrelevant to the main task. We formulate the problem via an information theoretical framework and derive an analytical solution for a given distortion to the model output. In our method, the edge device runs the model up to a split layer determined based on its computational capacity. It then obfuscates the obtained feature vector based on the first layer of the server model by removing the components in the null space as well as the low-energy components of the remaining signal. Our experimental results show that our method outperforms existing techniques in removing the information of the irrelevant attributes and maintaining the accuracy on the target label. We also show that our method reduces the communication cost and incurs only a small computational overhead.
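The null-space projection step lends itself to a short linear-algebra sketch: given the server's first-layer weight matrix, components of the feature vector in its null space can be discarded without changing that layer's output, and low-energy directions can be dropped on top of that. The matrix sizes and the energy threshold below are illustrative assumptions.

```python
# Minimal numpy sketch of the obfuscation step described above: project
# the edge feature vector onto the row space of the server's first layer,
# then drop low-energy directions. W, z, and the threshold are
# illustrative placeholders, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 128))  # first server layer: 128-d features -> 32 units
z = rng.standard_normal(128)        # feature vector at the split layer

# SVD: the rows of Vt with nonzero singular values span the row space of W.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

# Components of z in the null space of W do not affect W @ z, so they are
# discarded by the projection; directions with small singular values are
# additionally dropped as low-energy.
keep = s > 0.1 * s.max()             # assumed energy threshold
z_obf = Vt[keep].T @ (Vt[keep] @ z)  # projection onto the retained subspace

# Distortion to the first-layer output caused by dropping the
# low-energy directions (zero if keep retains all nonzero singular values).
print(np.linalg.norm(W @ z - W @ z_obf))
```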
This paper presents DeepIA, a deep learning solution for faster and more accurate initial access (IA) in 5G millimeter wave (mmWave) networks when compared to conventional IA. By utilizing a subset of beams in the IA process, DeepIA removes the need for an exhaustive beam search, thereby reducing the beam sweep time in IA. A deep neural network (DNN) is trained to learn the complex mapping from the received signal strengths (RSSs) collected with a reduced number of beams to the optimal spatial beam of the receiver (among a larger set of beams). At test time, DeepIA measures RSSs only from a small number of beams and runs the DNN to predict the best beam for IA. We show that DeepIA reduces the IA time by sweeping fewer beams and significantly outperforms conventional IA in beam prediction accuracy in both line of sight (LoS) and non-line of sight (NLoS) mmWave channel conditions.
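A toy version of this mapping can be sketched as a small classifier from RSSs on a probed beam subset to the best-beam index. The synthetic data, beam counts, and network size below are assumptions standing in for the paper's measured mmWave RSSs and DNN architecture.

```python
# Toy sketch of the DeepIA idea: learn a mapping from RSSs measured on a
# small subset of beams to the index of the best beam in the full set.
# Random features and labels below are placeholders for measured data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_samples, n_probe_beams, n_total_beams = 5000, 8, 64

# Synthetic training data: RSS on 8 probed beams -> best of 64 beams.
X = rng.standard_normal((n_samples, n_probe_beams))
y = rng.integers(0, n_total_beams, n_samples)  # placeholder labels

clf = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=100)
clf.fit(X, y)

# At test time only the probed beams are swept; the classifier predicts
# the best beam, avoiding an exhaustive 64-beam search.
predicted_beam = clf.predict(X[:1])
```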

