
Learning-Based Joint User-AP Association and Resource Allocation in Ultra Dense Network

Published by: Zhipeng Cheng
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





With the advantages of millimeter wave in wireless communication networks, the coverage radius and inter-site distance can be further reduced, and the ultra-dense network (UDN) has become the mainstream architecture of future networks. The main challenge faced by UDN is severe inter-site interference, which needs to be carefully addressed by joint user association and resource allocation methods. In this paper, we propose a multi-agent Q-learning based method to jointly optimize user association and resource allocation in UDN. A deep Q-network is applied to guarantee the convergence of the proposed method. Simulation results reveal the effectiveness of the proposed method, and its performance under different simulation parameters is evaluated.
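The abstract does not include code, but the kind of agent it describes can be sketched as follows: a minimal, illustrative deep Q-network in PyTorch with a discrete joint action space of (AP, resource block) pairs and a per-AP SINR state vector. All dimensions and hyperparameters here are assumptions, not values from the paper.

```python
# Minimal sketch of a deep Q-network agent for joint user-AP association
# and resource-block allocation. All sizes are illustrative assumptions.
import random
import torch
import torch.nn as nn

N_AP, N_RB = 4, 8         # assumed numbers of APs and resource blocks
N_ACTIONS = N_AP * N_RB   # joint action: (AP index, RB index)
STATE_DIM = N_AP          # e.g. per-AP SINR measurements (assumption)

class QNet(nn.Module):
    """Maps a user's local state to Q-values over joint (AP, RB) actions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS))
    def forward(self, s):
        return self.net(s)

def select_action(qnet, state, eps=0.1):
    """Epsilon-greedy action selection over the joint action space."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return int(qnet(state).argmax())

def decode(action):
    """Split a joint action index back into (AP index, RB index)."""
    return divmod(action, N_RB)

def td_update(qnet, target, opt, batch, gamma=0.99):
    """One temporal-difference step against a frozen target network."""
    s, a, r, s2 = batch
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        y = r + gamma * target(s2).max(1).values
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad(); loss.backward(); opt.step()
    return float(loss)
```

In a multi-agent setting of the kind the abstract names, each user would run its own copy of this agent and the reward would reflect its achieved rate under interference.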


Read also

Heterogeneous Ultra-Dense Network (HUDN) is one of the vital networking architectures due to its ability to enable higher connectivity density and ultra-high data rates. Rational user association and power-control scheduling in HUDN can reduce wireless interference. This paper proposes a novel idea for resolving the joint user association and power control problem: the optimal user association and base station transmit power can be represented as a function of the channel information. We then solve this problem by formulating an optimal representation function. We model the HUDN as a heterogeneous graph and train a Graph Neural Network (GNN) to approximate this representation function using semi-supervised learning, in which the loss function is composed of an unsupervised part that helps the GNN approach the optimal representation function and a supervised part that utilizes previous experience to reduce useless exploration. We separate the learning process into two parts: the generalization-representation learning (GRL) part and the specialization-representation learning (SRL) part, which train the GNN to learn representations for the generalized scenario and for a quasi-static user-distribution scenario, respectively. Simulation results demonstrate that the proposed GRL-based solution has higher computational efficiency than the traditional optimization algorithm, and that the SRL-based solution outperforms the GRL-based one.
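As a rough illustration of the semi-supervised loss structure this abstract describes, the sketch below combines an unsupervised network-utility term with a supervised term on labelled nodes. The one-layer message-passing network, the sum-rate surrogate, and all tensor shapes are assumptions, not the paper's actual heterogeneous-graph architecture.

```python
# Illustrative sketch of a semi-supervised GNN loss: unsupervised utility
# term plus supervised term on labelled (channel, power) pairs.
import torch
import torch.nn as nn

class TinyGNN(nn.Module):
    """One round of neighbor aggregation, then a per-node power head."""
    def __init__(self, feat_dim, hidden=32):
        super().__init__()
        self.msg = nn.Linear(feat_dim, hidden)
        self.head = nn.Linear(hidden, 1)
    def forward(self, X, A_norm):
        H = torch.relu(A_norm @ self.msg(X))  # aggregate channel features
        return torch.sigmoid(self.head(H))    # normalized transmit power

def sum_rate(p, G, noise=1e-3):
    """Differentiable sum-rate surrogate on an assumed gain matrix G."""
    sig = p.squeeze(-1) * G.diagonal()
    interf = G @ p.squeeze(-1) - sig
    return torch.log2(1 + sig / (interf + noise)).sum()

def semi_supervised_loss(model, X, A_norm, G, p_label, mask, alpha=0.5):
    """Unsupervised utility part + supervised part on labelled nodes."""
    p = model(X, A_norm)
    unsup = -sum_rate(p, G)                        # maximize network utility
    sup = ((p[mask] - p_label[mask]) ** 2).mean()  # reuse past experience
    return unsup + alpha * sup
```

The weight `alpha` trades off the two parts; in the paper's terms, the unsupervised term drives the GNN toward the optimal representation function while the supervised term curbs useless exploration.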
Software-defined networking (SDN) is the concept of decoupling the control and data planes to create a flexible and agile network, assisted by a central controller. However, the performance of SDN depends heavily on the limitations of the fronthaul, which are inadequately discussed in the existing literature. In this paper, a fronthaul-aware software-defined resource allocation mechanism is proposed for 5G wireless networks with in-band wireless fronthaul constraints. Considering the fronthaul capacity, the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium (CCE) and incentivizing base stations (BSs) to locally optimize their decisions so as to ensure mobile users' (MUs') quality-of-service (QoS) requirements. By marrying tools from Lyapunov stochastic optimization and game theory, we propose a two-timescale approach in which the controller gives recommendations, i.e., sub-carriers with low interference, on a long timescale, whereas BSs schedule their own MUs and allocate the available resources in every time slot. Numerical results show considerable throughput enhancements and delay reductions over a non-SDN network baseline.
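The two-timescale split described here can be shown schematically. In the sketch below, a long-timescale controller recommends low-interference sub-carriers and BSs make per-slot decisions; the greedy recommendation and scheduling rules are simplified placeholders for the paper's equilibrium-based optimization, and all sizes are assumptions.

```python
# Schematic two-timescale control loop: long-timescale recommendations,
# short-timescale (per-slot) BS scheduling. Rules are placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_SC, N_BS, T_LONG = 16, 3, 50   # sub-carriers, BSs, slots per long period

def controller_recommend(interference, k=8):
    """Long timescale: recommend the k least-interfered sub-carriers."""
    return np.argsort(interference)[:k]

def bs_schedule(recommended, demands):
    """Short timescale: serve the highest-demand user on a recommended
    sub-carrier (stand-in for the CCE-constrained local optimization)."""
    user = int(np.argmax(demands))
    sc = int(rng.choice(recommended))
    return user, sc

interference = rng.random(N_SC)
for period in range(3):                    # long timescale
    rec = controller_recommend(interference)
    for t in range(T_LONG):                # short timescale: every slot
        for bs in range(N_BS):
            demands = rng.random(5)        # assumed per-BS user demands
            user, sc = bs_schedule(rec, demands)
    interference = 0.9 * interference + 0.1 * rng.random(N_SC)  # re-measure
```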
Anqi Huang, Yingyu Li, Yong Xiao (2020)
Network slicing has been considered one of the key enablers for 5G to support diversified services and application scenarios. This paper studies distributed network slicing utilizing both the spectrum resource offered by the communication network and the computational resources of a coexisting fog computing network. We propose a novel distributed framework based on a new control-plane entity, the regional orchestrator (RO), which can be deployed between base stations (BSs) and fog nodes to coordinate and control their bandwidth and computational resources. We propose a distributed resource allocation algorithm based on the Alternating Direction Method of Multipliers with Partial Variable Splitting (DistADMM-PVS). We prove that the proposed algorithm minimizes the average latency of the entire network while guaranteeing satisfactory latency performance for every supported type of service. Simulation results show that the proposed algorithm converges much faster than existing algorithms. Joint network slicing with both bandwidth and computational resources offers around 15% overall latency reduction compared to network slicing with only a single resource.
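A minimal consensus-ADMM loop conveys the flavor of this kind of distributed coordination. The quadratic local latency costs and the single shared bandwidth variable below are illustrative assumptions; the paper's DistADMM-PVS splits only part of the variables and jointly handles bandwidth and computational resources.

```python
# Minimal consensus-ADMM sketch: nodes solve local subproblems, an
# orchestrator-like step enforces consensus. Costs are stand-ins.
import numpy as np

# Each node i has local cost f_i(x) = a_i * (x - c_i)^2, an assumed
# stand-in for its latency as a function of the shared allocation x.
a = np.array([1.0, 2.0, 0.5])
c = np.array([3.0, 1.0, 4.0])
rho = 1.0

x = np.zeros(3)      # local copies of the shared variable
z = 0.0              # consensus variable (held by the orchestrator)
u = np.zeros(3)      # scaled dual variables

for it in range(100):
    # Local step: each node minimizes f_i(x_i) + (rho/2)(x_i - z + u_i)^2,
    # which has the closed form below for quadratic f_i.
    x = (2 * a * c + rho * (z - u)) / (2 * a + rho)
    # Consensus step at the orchestrator, then dual update at each node.
    z = np.mean(x + u)
    u = u + x - z

print("consensus allocation:", z)  # a weighted compromise of the c_i
```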
Software-defined networking (SDN) provides an agile and programmable way to optimize radio access networks via a control-data plane separation. Nevertheless, reaping the benefits of wireless SDN hinges on making optimal use of the limited wireless fronthaul capacity. In this work, the problem of fronthaul-aware resource allocation and user scheduling is studied. To this end, a two-timescale fronthaul-aware SDN control mechanism is proposed in which the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium on the long timescale. Subsequently, leveraging the controller's recommendations, each base station schedules its users using Lyapunov stochastic optimization on the short timescale, i.e., at each time slot. Simulation results show that significant network throughput enhancements and up to 40% latency reduction are achieved with the aid of the SDN controller. Moreover, the gains are more pronounced for denser network deployments.
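The per-slot Lyapunov scheduling step mentioned here follows the familiar drift-plus-penalty pattern, sketched below under synthetic rates and arrivals. The trade-off parameter V and the max-weight rule are textbook simplifications, not the paper's exact scheduler.

```python
# Drift-plus-penalty sketch: each BS keeps a queue per user and serves
# the user with the largest queue-weighted rate. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
N_USERS, V = 5, 10.0
Q = np.zeros(N_USERS)                     # per-user queue backlogs

for t in range(1000):
    rates = rng.random(N_USERS)           # achievable rates this slot
    # Pick the user maximizing (Q_i + V) * r_i, balancing backlog
    # reduction (drift) against throughput (penalty).
    served = int(np.argmax((Q + V) * rates))
    Q += rng.poisson(0.15, N_USERS)       # synthetic traffic arrivals
    Q[served] = max(0.0, Q[served] - rates[served])

print("mean backlog:", Q.mean())
```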
In the last few years there has been significant growth in the area of wireless communication. IEEE 802.16/WiMAX is a network designed to provide high-speed wide-area broadband wireless access; WiMAX is an emerging wireless technology for creating multi-hop mesh networks. Future-generation networks will be characterized by variable and high data rates, quality of service (QoS), and seamless mobility both within a network and between networks of different technologies and service providers. A technology developed by IEEE to meet these requirements is the 802.16 standard, also called WiMAX (Worldwide Interoperability for Microwave Access). This architecture aims to bring long-range connectivity, high data rates, strong security, low power consumption, excellent quality of service, and low deployment costs to a wireless access technology at the metropolitan level. In this paper we analyze the performance of location-based resource allocation for WiMAX and WLAN-WiMAX clients, and in a second phase we study rate-adaptive algorithms. The base station (BS) first performs ranging for all subscribers, then establishes the link, and finally allocates resources by assigning subcarriers according to uplink (UL) demand, i.e., video, voice, and data applications. We propose a linear approach, Active-Set optimization, and a Genetic Algorithm for resource allocation in downlink Mobile WiMAX networks. The purpose of the proposed algorithms is to optimize total throughput. Simulation results show that the Genetic Algorithm and the Active-Set algorithm perform better than previous methods in terms of higher capacities, but the GA has higher complexity than Active-Set.
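As a toy illustration of the genetic-algorithm approach this paper compares, the sketch below evolves a subcarrier-to-user assignment whose fitness is total throughput. Population size, rate matrix, and mutation rate are illustrative, not the paper's settings.

```python
# Toy GA for downlink subcarrier allocation: a chromosome assigns each
# subcarrier to one user; fitness is the resulting total throughput.
import numpy as np

rng = np.random.default_rng(2)
N_SC, N_USERS, POP, GENS = 16, 4, 30, 200
rate = rng.random((N_USERS, N_SC))        # assumed per-user, per-SC rates

def fitness(chrom):
    """Total throughput of an assignment (subcarrier -> user)."""
    return rate[chrom, np.arange(N_SC)].sum()

pop = rng.integers(0, N_USERS, (POP, N_SC))
for g in range(GENS):
    scores = np.array([fitness(c) for c in pop])
    parents = pop[np.argsort(scores)[-POP // 2:]]   # keep the fitter half
    kids = []
    for _ in range(POP - len(parents)):
        p1, p2 = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, N_SC)                 # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        mut = rng.random(N_SC) < 0.05               # per-gene mutation
        child[mut] = rng.integers(0, N_USERS, mut.sum())
        kids.append(child)
    pop = np.vstack([parents] + kids)

best = pop[np.argmax([fitness(c) for c in pop])]
print("best throughput:", fitness(best))
```

The chromosome encoding enforces that each subcarrier serves exactly one user, which is why a simple one-point crossover stays feasible; the higher GA complexity the paper notes comes from evaluating the whole population every generation.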