
Graph Embedding based Wireless Link Scheduling with Few Training Samples

Posted by Mengyuan Lee
Publication date: 2019
Language: English





Link scheduling in device-to-device (D2D) networks is usually formulated as a non-convex combinatorial problem, which is generally NP-hard, making the optimal solution difficult to obtain. Traditional methods for this problem are mainly based on mathematical optimization techniques and require accurate channel state information (CSI), usually obtained through channel estimation and feedback. To overcome the high computational complexity of the traditional methods and eliminate the costly channel estimation stage, machine learning (ML) has recently been introduced to address wireless link scheduling problems. In this paper, we propose a novel graph embedding based method for link scheduling in D2D networks. We first construct a fully-connected directed graph for the D2D network, where each D2D pair is a node and the interference links among D2D pairs are the edges. We then compute a low-dimensional feature vector for each node in the graph. The graph embedding process is based on the distances of both communication and interference links, and therefore does not require accurate CSI. By utilizing a multi-layer classifier, a scheduling strategy can be learned in a supervised manner from the graph embedding of each node. We also propose an unsupervised training approach for the graph embedding based method to further reinforce its scalability and generalizability, and develop a K-nearest-neighbor graph representation to reduce the computational complexity. Extensive simulations demonstrate that the proposed method is near-optimal compared with existing state-of-the-art methods while requiring only hundreds of training samples. It is also competitive in terms of scalability and generalizability to more complicated scenarios.
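To make the embedding idea concrete, here is a minimal, hypothetical sketch of a distance-only node embedding with K-nearest-neighbor sparsification, assuming 2-D transmitter/receiver coordinates and an inverse-distance aggregation rule; the paper's exact features, update rule, and classifier are not reproduced here.

```python
# Illustrative sketch (not the authors' code): D2D pair i is a node;
# edge (j, i) carries the distance from transmitter j to receiver i.
import numpy as np

def pairwise_distances(tx, rx):
    """dist[j, i] = distance from transmitter j to receiver i."""
    return np.linalg.norm(tx[:, None, :] - rx[None, :, :], axis=-1)

def knn_embedding(tx, rx, k=5, rounds=2):
    """Per-node feature vectors computed from link distances only.

    tx, rx: (n, 2) arrays of transmitter / receiver coordinates.
    k:      number of nearest interfering neighbors kept per node.
    rounds: number of neighbor-aggregation iterations.
    """
    n = tx.shape[0]
    d = pairwise_distances(tx, rx)
    comm = np.diag(d)                      # communication-link distances
    interf = d.copy()
    np.fill_diagonal(interf, np.inf)       # exclude each pair's own link

    # K-nearest-neighbor sparsification: keep the k closest interferers.
    nbrs = np.argsort(interf, axis=0)[:k]  # nbrs[:, i] = k strongest into i

    # Initial feature: inverse communication distance (a proxy for SNR).
    h = (1.0 / comm).reshape(n, 1)
    for _ in range(rounds):
        # Aggregate neighbor features weighted by inverse interference
        # distance (a proxy for interference strength; no CSI needed).
        agg = np.zeros_like(h)
        for i in range(n):
            w = 1.0 / interf[nbrs[:, i], i]
            agg[i] = (w[:, None] * h[nbrs[:, i]]).sum(axis=0)
        h = np.concatenate([h, agg], axis=1)  # grow the embedding
    return h  # (n, feature_dim) inputs for a downstream classifier
```

Each row of the returned matrix would then feed a multi-layer classifier that outputs the on/off scheduling decision for the corresponding D2D pair; only link distances enter the computation, which is what removes the need for channel estimation.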




Read also

Efficient scheduling of transmissions is a key problem in wireless networks. The main challenge stems from the fact that optimal link scheduling involves solving a maximum weighted independent set (MWIS) problem, which is known to be NP-hard. For practical link scheduling schemes, centralized and distributed greedy heuristics are commonly used to approximate the solution to the MWIS problem. However, these greedy schemes mostly ignore important topological information of the wireless network. To overcome this limitation, we propose fast heuristics based on graph convolutional networks (GCNs) that can be implemented in centralized and distributed manners. Our centralized MWIS solver is based on tree search guided by a trainable GCN module and 1-step rollout. In our distributed MWIS solver, a trainable GCN module learns topology-aware node embeddings that are combined with the network weights before calling a distributed greedy solver. Test results on medium-sized wireless networks show that a GCN-based centralized MWIS solver can reach a near-optimal solution quickly. Moreover, we demonstrate that a shallow GCN-based distributed MWIS scheduler can reduce by nearly half the suboptimality gap of the distributed greedy solver with minimal increase in complexity. The proposed scheduling solutions also exhibit good generalizability across graph and weight distributions.
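As a rough illustration of the distributed variant described above (a sketch under assumed shapes, not the authors' architecture), a small GCN can produce per-node scores that reweight the original link utilities before a plain greedy MWIS pass; `w1` and `w2` stand in for trained parameters, and the training loop is omitted.

```python
# Hypothetical sketch: GCN-adjusted weights feeding a greedy MWIS solver.
import numpy as np

def gcn_layer(adj, h, w):
    """One graph-convolution layer with symmetric normalization."""
    a = adj + np.eye(adj.shape[0])              # add self-loops
    dinv = 1.0 / np.sqrt(a.sum(axis=1))
    a_norm = a * dinv[:, None] * dinv[None, :]
    return np.maximum(a_norm @ h @ w, 0.0)      # ReLU activation

def greedy_mwis(adj, weights):
    """Pick the heaviest remaining node, drop its neighbors, repeat."""
    remaining = set(range(len(weights)))
    chosen = []
    while remaining:
        i = max(remaining, key=lambda v: weights[v])
        chosen.append(i)
        remaining -= {i} | {j for j in remaining if adj[i, j] > 0}
    return chosen

def gcn_guided_schedule(adj, utilities, w1, w2):
    # w1: (1, hidden), w2: (hidden, 1), assumed trained offline.
    h = utilities.reshape(-1, 1)                # raw link utilities
    h = gcn_layer(adj, h, w1)                   # hidden embeddings
    scores = gcn_layer(adj, h, w2)[:, 0]        # per-node multiplier
    return greedy_mwis(adj, utilities * (1.0 + scores))
```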
Ruyi Zhang, Ziwei Yang, Zhi Yang (2021)
An accuracy predictor is trained to predict the validation accuracy of a network from its architecture encoding. It can effectively assist in designing networks and improving Neural Architecture Search (NAS) efficiency. However, a high-performance predictor depends on adequate training samples, which require unaffordable computation overhead. To alleviate this problem, we propose a novel framework to train an accuracy predictor with few training samples. The framework consists of data augmentation methods and an ensemble learning algorithm. The data augmentation methods calibrate weak labels and inject noise into the feature space. The ensemble learning algorithm, termed cascade bagging, trains two-level models by sampling both data and features. The advantages of the above methods are demonstrated in the Performance Prediction Track of the CVPR 2021 1st Lightweight NAS Challenge. Our code is public at: https://github.com/dlongry/Solutionto-CVPR2021-NAS-Track2.
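A rough sketch of the two ingredients follows, under loose assumptions: only the noise-injection half of the augmentation is shown (weak-label calibration is omitted), and decision trees are swapped in as base learners since the entry's actual models are not specified in the abstract.

```python
# Hypothetical sketch: feature-space noise augmentation plus a two-level
# "cascade bagging" ensemble that subsamples both rows and features.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def augment(x, y, copies=3, sigma=0.05, rng=np.random):
    """Replicate the few labeled samples with Gaussian feature noise."""
    xs = [x] + [x + sigma * rng.standard_normal(x.shape) for _ in range(copies)]
    return np.vstack(xs), np.tile(y, copies + 1)

def cascade_bagging(x, y, n_models=10, rng=np.random.default_rng(0)):
    """First-level regressors on random rows/features; a second-level
    model is then fit on their stacked predictions."""
    models = []
    for _ in range(n_models):
        rows = rng.choice(len(x), size=len(x), replace=True)
        cols = rng.choice(x.shape[1], size=max(1, x.shape[1] // 2),
                          replace=False)
        m = DecisionTreeRegressor(max_depth=6).fit(x[rows][:, cols], y[rows])
        models.append((m, cols))
    level1 = np.column_stack([m.predict(x[:, c]) for m, c in models])
    meta = DecisionTreeRegressor(max_depth=3).fit(level1, y)
    return models, meta
```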
Wireless communication systems operate in complex time-varying environments. Therefore, selecting the optimal configuration parameters in these systems is a challenging problem. For wireless links, rate selection is used to select the optimal data transmission rate that maximizes the link throughput subject to an application-defined latency constraint. We model rate selection as a stochastic multi-armed bandit (MAB) problem, where a finite set of transmission rates are modeled as independent bandit arms. For this setup, we propose Con-TS, a novel constrained version of the Thompson sampling algorithm, where the latency requirement is modeled by a high-probability linear constraint. We show that for Con-TS, the expected number of constraint violations over $T$ transmission intervals is upper bounded by $O(\sqrt{KT})$, where $K$ is the number of available rates. Further, the expected loss in cumulative throughput compared to the optimal rate selection scheme (i.e., the regret) is also upper bounded by $O(\sqrt{KT \log K})$. Through numerical simulations, we demonstrate that Con-TS significantly outperforms state-of-the-art bandit schemes for rate selection.
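For intuition, here is a toy Thompson sampling loop for rate selection with a hard-coded latency filter. It is a simplification: Con-TS handles the constraint through a high-probability linear formulation rather than assuming per-rate latencies are known, and the throughput proxy used here (rate index times sampled success probability) is an assumption.

```python
# Toy Bernoulli-bandit rate selection with Thompson sampling
# (illustrative only; not the Con-TS algorithm itself).
import numpy as np

def ts_rate_selection(success_prob, latency, max_latency, horizon=10000,
                      rng=np.random.default_rng(0)):
    k = len(success_prob)
    wins = np.ones(k)       # Beta posterior: successes + 1
    fails = np.ones(k)      # Beta posterior: failures + 1
    total = 0.0
    for _ in range(horizon):
        theta = rng.beta(wins, fails)          # sample success rates
        # Restrict to rates satisfying the (assumed known) latency limit.
        feasible = latency <= max_latency
        scores = np.where(feasible, np.arange(1, k + 1) * theta, -np.inf)
        a = int(np.argmax(scores))
        ok = rng.random() < success_prob[a]    # simulated transmission
        wins[a] += ok
        fails[a] += 1 - ok
        total += (a + 1) * ok                  # throughput proxy
    return total / horizon                     # average throughput
```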
Last year, the IEEE 802.11 Extremely High Throughput Study Group (EHT Study Group) was established to initiate discussions on new IEEE 802.11 features. Coordinated control methods for the access points (APs) in wireless local area networks (WLANs) are discussed in the EHT Study Group. The present study proposes a deep reinforcement learning-based channel allocation scheme using graph convolutional networks (GCNs). As the deep reinforcement learning method, we use the well-known double deep Q-network. In densely deployed WLANs, the number of available topologies of APs is extremely high, so we extract the features of the topological structures based on GCNs. We apply GCNs to a contention graph, where APs within each other's carrier sensing ranges are connected, to extract the features of carrier sensing relationships. Additionally, to improve the learning speed, especially in the early stage of learning, we employ a game theory-based method to collect training data independently of the neural network model. The simulation results indicate that the proposed method can appropriately control the channels compared with existing methods.
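The distinctive RL ingredient here is the double deep Q-network; a minimal sketch of its target computation is below. The GCN feature extractor over the contention graph and the game-theory-based data collection are omitted, and the array names are assumptions.

```python
# Minimal double-DQN target computation (illustrative sketch only):
# the online network selects the next action, the target network
# evaluates it, which reduces Q-value overestimation.
import numpy as np

def double_dqn_targets(q_online, q_target, rewards, dones, gamma=0.99):
    """q_online, q_target: (batch, n_actions) Q-values at next states.
    rewards, dones: (batch,) transition rewards and terminal flags."""
    best = q_online.argmax(axis=1)                      # action selection
    bootstrapped = q_target[np.arange(len(best)), best] # action evaluation
    return rewards + gamma * (1.0 - dones) * bootstrapped
```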
Automated wireless spectrum monitoring across frequency, time and space will be essential for many future applications. Manual and fine-grained spectrum analysis is becoming impossible because of the large number of measurement locations and complexity of the spectrum use landscape. Detecting unexpected behaviors in the wireless spectrum from the collected data is a crucial part of this automated monitoring, and the control of detected anomalies is a key functionality to enable interaction between the automated system and the end user. In this paper we look into the wireless spectrum anomaly detection problem for crowdsourced sensors. We first analyze in detail the nature of these anomalies and design effective algorithms to bring the higher dimensional input data to a common feature space across sensors. Anomalies can then be detected as outliers in this feature space. In addition, we investigate the importance of user feedback in the anomaly detection process to improve the performance of unsupervised anomaly detection. Furthermore, schemes for generalizing user feedback across sensors are also developed to close the anomaly detection loop.
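As a toy illustration of the loop described above (assumed features and thresholds, not the paper's algorithms): per-sensor spectrum sweeps are reduced to sensor-agnostic features, outliers are flagged by deviation from the sensor's own history, and user feedback nudges the decision threshold.

```python
# Hypothetical sketch of a feedback-driven spectrum anomaly detector.
import numpy as np

def to_feature_space(psd):
    """Reduce a PSD sweep to sensor-agnostic summary features."""
    return np.array([psd.mean(), psd.std(), psd.max() - np.median(psd)])

def detect(history, current, threshold=3.0):
    """Flag `current` if its features deviate from the sensor's history."""
    feats = np.array([to_feature_space(p) for p in history])
    mu, sigma = feats.mean(axis=0), feats.std(axis=0) + 1e-9
    score = np.abs((to_feature_space(current) - mu) / sigma).max()
    return score > threshold, score

def apply_feedback(threshold, was_false_alarm, step=0.1):
    """Loosen the threshold on false alarms, tighten on missed anomalies."""
    return threshold + step if was_false_alarm else threshold - step
```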