
Simulation Results of User Behavior-Aware Scheduling Based on Time-Frequency Resource Conversion

Posted by Hangguan Shan
Publication date: 2016
Research field: Information Engineering
Language: English





Integrating time-frequency resource conversion (TFRC), a new network resource allocation strategy, with call admission control can not only increase cell capacity but also reduce network congestion effectively. However, the optimal setting of TFRC-oriented call admission control suffers from the curse of dimensionality, due to Markov chain-based optimization in a high-dimensional space. To address the scalability issue of TFRC, in [1] we extend the study of TFRC into the area of scheduling. Specifically, we study downlink scheduling based on TFRC for an LTE-type cellular network, with the aim of maximizing service delivery. The service scheduling of interest is formulated as a joint request, channel, and slot allocation problem, which is NP-hard. An offline deflation and sequential-fixing-based algorithm (named DSFRB) with only polynomial-time complexity is proposed to solve the problem. For practical online implementation, two TFRC-enabled low-complexity algorithms, a modified Smith ratio algorithm (named MSR) and a modified exponential capacity algorithm (named MEC), are proposed as well. In this report, we present detailed numerical results for the proposed offline and online algorithms, which not only show the effectiveness of the proposed algorithms but also corroborate the advantages of the proposed TFRC-based scheduling techniques in terms of quality-of-service (QoS) provisioning for each user and revenue improvement for the service operator.
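The report does not reproduce the MSR priority metric here, but the flavor of a Smith-ratio-style per-slot scheduler can be illustrated with a short sketch. The request fields, the priority formula (value earned per unit of channel-time required), and the greedy channel assignment below are illustrative assumptions, not the algorithm from [1].

```python
# Minimal sketch of a Smith-ratio-style greedy scheduler (hypothetical names and metric).
from dataclasses import dataclass

@dataclass
class Request:
    user_id: int
    value: float        # revenue or utility if the request is served
    demand: float       # remaining data to deliver (bits)
    rate: float         # achievable rate on one channel in this slot (bits/slot)

def smith_ratio(req: Request) -> float:
    """Priority = value per unit of channel-time needed (a Smith-ratio-like metric)."""
    slots_needed = req.demand / max(req.rate, 1e-9)
    return req.value / slots_needed

def schedule_slot(requests, num_channels):
    """Greedily assign the channels available in one slot to the highest-ratio requests."""
    ranked = sorted(requests, key=smith_ratio, reverse=True)
    return {ch: req.user_id for ch, req in enumerate(ranked[:num_channels])}

if __name__ == "__main__":
    reqs = [Request(1, value=5.0, demand=2e6, rate=1e6),
            Request(2, value=3.0, demand=5e5, rate=1e6),
            Request(3, value=4.0, demand=4e6, rate=2e6)]
    print(schedule_slot(reqs, num_channels=2))
```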




Read also

Software-defined networking (SDN) provides an agile and programmable way to optimize radio access networks via a control-data plane separation. Nevertheless, reaping the benefits of wireless SDN hinges on making optimal use of the limited wireless fronthaul capacity. In this work, the problem of fronthaul-aware resource allocation and user scheduling is studied. To this end, a two-timescale fronthaul-aware SDN control mechanism is proposed in which the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium on the long timescale. Subsequently, leveraging the controller's recommendations, each base station schedules its users using Lyapunov stochastic optimization on the short timescale, i.e., at each time slot. Simulation results show that significant network throughput enhancements and up to 40% latency reduction are achieved with the aid of the SDN controller. Moreover, the gains are more pronounced for denser network deployments.
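As a rough illustration of the short-timescale step, the sketch below applies a max-weight (backlog times rate) rule over controller-recommended sub-carriers, one common way Lyapunov drift arguments are turned into per-slot schedules. The variable names, the queue model, and the omission of the fronthaul constraint are simplifications, not the paper's formulation.

```python
# Illustrative max-weight (Lyapunov drift-style) per-slot scheduler for one base station.
import numpy as np

def max_weight_schedule(queues, rates, recommended):
    """Pick, for each recommended sub-carrier, the user maximizing queue backlog * rate."""
    schedule = {}
    for sc in recommended:                  # sub-carriers suggested by the controller
        weights = queues * rates[:, sc]     # backlog-weighted achievable rate
        schedule[sc] = int(np.argmax(weights))
    return schedule

def update_queues(queues, rates, schedule, arrivals):
    """Standard queue evolution: Q(t+1) = max(Q(t) - service, 0) + arrivals."""
    service = np.zeros_like(queues)
    for sc, user in schedule.items():
        service[user] += rates[user, sc]
    return np.maximum(queues - service, 0.0) + arrivals

# One simulated slot with 4 users and 6 sub-carriers.
rng = np.random.default_rng(0)
queues = rng.uniform(0, 10, size=4)
rates = rng.uniform(0.5, 2.0, size=(4, 6))
sched = max_weight_schedule(queues, rates, recommended=[0, 2, 5])
queues = update_queues(queues, rates, sched, arrivals=rng.poisson(1.0, size=4))
```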
Software-defined networking (SDN) is the concept of decoupling the control and data planes to create a flexible and agile network, assisted by a central controller. However, the performance of SDN highly depends on the limitations of the fronthaul, which are inadequately discussed in the existing literature. In this paper, a fronthaul-aware software-defined resource allocation mechanism is proposed for 5G wireless networks with in-band wireless fronthaul constraints. Considering the fronthaul capacity, the controller maximizes the time-averaged network throughput by enforcing a coarse correlated equilibrium (CCE) and incentivizing base stations (BSs) to locally optimize their decisions to satisfy mobile users' (MUs) quality-of-service (QoS) requirements. By marrying tools from Lyapunov stochastic optimization and game theory, we propose a two-timescale approach in which the controller gives recommendations, i.e., sub-carriers with low interference, on a long timescale, whereas BSs schedule their own MUs and allocate the available resources in every time slot. Numerical results show considerable throughput enhancements and delay reductions over a non-SDN network baseline.
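A toy version of the long-timescale step might look like the following: the controller averages the interference reported on each sub-carrier and recommends the least-interfered ones. The CCE enforcement and incentive mechanism are not modeled; this only shows the shape of the recommendation output.

```python
# Toy long-timescale step: recommend the k least-interfered sub-carriers (illustrative only).
import numpy as np

def recommend_subcarriers(interference_reports, k=3):
    """interference_reports: (num_bs, num_subcarriers) measurements over the long timescale."""
    avg_interference = interference_reports.mean(axis=0)
    return np.argsort(avg_interference)[:k].tolist()

reports = np.random.default_rng(1).uniform(0, 1, size=(5, 8))
print(recommend_subcarriers(reports, k=3))
```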
Edge computing is an emerging solution to support future Internet of Things (IoT) applications that are delay-sensitive, processing-intensive, or that require closer intelligence. Machine intelligence and data-driven approaches are envisioned to build future Edge-IoT systems that satisfy IoT devices' demands for edge resources. However, significant challenges and technical barriers exist which complicate resource management for such Edge-IoT systems. IoT devices running various applications can demonstrate a wide range of behaviors in their resource demand that are extremely difficult to manage. In addition, managing multidimensional resources fairly and efficiently at the edge in such a setting is a challenging task. In this paper, we develop a novel data-driven resource management framework named BEHAVE that intelligently and fairly allocates edge resources to heterogeneous IoT devices with consideration of their behavior of resource demand (BRD). BEHAVE aims to holistically address these technical barriers by: 1) building an efficient scheme for modeling and assessing the BRD of IoT devices based on their resource requests and resource usage; 2) proposing a new Rational, Fair, and Truthful Resource Allocation (RFTA) model that binds the devices' BRD and resource allocation to achieve fair allocation and encourage truthfulness in resource demand; and 3) developing an enhanced deep reinforcement learning (EDRL) scheme to achieve the RFTA goals. The evaluation results demonstrate BEHAVE's capability to analyze the IoT devices' BRD and adjust its resource management policy accordingly.
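One hypothetical way to couple a behavior score to allocation, loosely inspired by the BRD/RFTA idea but not taken from the paper, is to score each device by how closely its past usage tracks its past requests and then weight its share of capacity by that score:

```python
# Hypothetical BRD-style trust score and score-weighted allocation (not the actual RFTA model).
def brd_score(requested, used):
    """Score in (0, 1]: how closely past usage matches past requests."""
    if requested <= 0:
        return 1.0
    return min(used / requested, 1.0)

def allocate(capacity, requests, scores):
    """Split capacity proportionally to score-weighted requests, capped at each request."""
    weights = [r * s for r, s in zip(requests, scores)]
    total = sum(weights) or 1.0
    return [min(r, capacity * w / total) for r, w in zip(requests, weights)]

# Device 0 historically over-requests; device 1 uses what it asks for.
scores = [brd_score(10.0, 4.0), brd_score(5.0, 5.0)]
print(allocate(capacity=8.0, requests=[10.0, 5.0], scores=scores))
```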
With the proliferation of the Internet of Things (IoT) and the wide penetration of wireless networks, the surging demand for data communications and computing calls for the emerging edge computing paradigm. By moving the services and functions located in the cloud to the proximity of users, edge computing can provide powerful computing, storage, networking, and communication capacity. Resource scheduling in edge computing, which is key to the success of edge computing systems, has attracted increasing research interest. In this paper, we survey the state-of-the-art research findings to track the research progress in this field. Specifically, we present the architecture of edge computing, under which different collaborative manners for resource scheduling are discussed. In particular, we introduce a unified model before summarizing the current works on resource scheduling from three research issues, including computation offloading, resource allocation, and resource provisioning. Based on two modes of operation, i.e., centralized and distributed modes, different techniques for resource scheduling are discussed and compared. We also summarize the main performance indicators based on the surveyed literature. To shed light on the significance of resource scheduling in real-world scenarios, we discuss several typical application scenarios involved in the research of resource scheduling in edge computing. Finally, we highlight some open research challenges yet to be addressed and outline several open issues as future research directions.
With the advantages of millimeter wave in wireless communication networks, the coverage radius and inter-site distance can be further reduced, and the ultra-dense network (UDN) is becoming the mainstream of future networks. The main challenge faced by UDN is severe inter-site interference, which needs to be carefully addressed by joint user association and resource allocation methods. In this paper, we propose a multi-agent Q-learning based method to jointly optimize user association and resource allocation in UDN. A deep Q-network is applied to guarantee the convergence of the proposed method. Simulation results reveal the effectiveness of the proposed method, and its performance under different simulation parameters is evaluated.
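A minimal single-agent sketch of the DQN component, with placeholder state features, network sizes, and rewards (the paper's multi-agent setup and reward design are not reproduced), could look like this in Python/PyTorch:

```python
# Minimal DQN sketch for picking a joint (BS, resource-block) action; all sizes are placeholders.
import random
import torch
import torch.nn as nn

NUM_BS, NUM_RB = 3, 4
STATE_DIM, NUM_ACTIONS = 8, NUM_BS * NUM_RB   # joint action = (bs, rb) pair

q_net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(), nn.Linear(64, NUM_ACTIONS))
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def select_action(state, epsilon=0.1):
    """Epsilon-greedy selection over the joint (BS, RB) action space."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        return int(q_net(state).argmax())

def td_update(state, action, reward, next_state, gamma=0.99):
    """One-step temporal-difference update of the Q-network."""
    q_sa = q_net(state)[action]
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_sa - target) ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

state = torch.randn(STATE_DIM)
action = select_action(state)
bs, rb = divmod(action, NUM_RB)   # decode the joint action into (BS, resource block)
td_update(state, action, reward=1.0, next_state=torch.randn(STATE_DIM))
```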