
Distributed Resource Allocation for Network Slicing of Bandwidth and Computational Resource

Posted by: Xiaohu Ge
Publication date: 2020
Research field: Information Engineering
Paper language: English





Network slicing has been considered one of the key enablers for 5G to support diversified services and application scenarios. This paper studies distributed network slicing that utilizes both the spectrum resources offered by the communication network and the computational resources of a coexisting fog computing network. We propose a novel distributed framework based on a new control plane entity, the regional orchestrator (RO), which can be deployed between base stations (BSs) and fog nodes to coordinate and control their bandwidth and computational resources. We propose a distributed resource allocation algorithm based on the Alternating Direction Method of Multipliers with Partial Variable Splitting (DistADMM-PVS). We prove that the proposed algorithm minimizes the average latency of the entire network while guaranteeing satisfactory latency performance for every supported type of service. Simulation results show that the proposed algorithm converges much faster than several existing algorithms. Joint network slicing of both bandwidth and computational resources offers around a 15% reduction in overall latency compared to network slicing of only a single resource.
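As a rough illustration of the approach described above, the sketch below runs a sharing-style ADMM loop for a joint bandwidth/compute allocation: each slice solves a small local subproblem (as BSs and fog nodes would), and an orchestrator-like step projects onto the shared budgets. The latency model L_i/b_i + W_i/c_i, all parameter values, and the penalty and iteration choices are illustrative assumptions, not the paper's exact DistADMM-PVS formulation.

```python
# A minimal sketch, assuming a latency model L_i/b_i + W_i/c_i per slice i.
import numpy as np

rng = np.random.default_rng(0)
n = 4                        # number of service slices (assumed)
B_TOT, C_TOT = 100.0, 80.0   # shared bandwidth / compute budgets (assumed)
L = rng.uniform(50, 150, n)  # per-slice traffic load -> transmission delay
W = rng.uniform(30, 90, n)   # per-slice workload -> processing delay
RHO, EPS = 1.0, 1e-3         # ADMM penalty, positivity floor

def local_update(load, v, rho=RHO):
    """Local x-update: argmin_{x>0} load/x + (rho/2)(x - v)^2.
    g(x) = -load/x^2 + rho(x - v) is strictly increasing on x > 0,
    so bisection finds the unique stationary point."""
    lo, hi = EPS, max(v, 0.0) + (load / rho) ** (1.0 / 3.0) + 10.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if -load / mid**2 + rho * (mid - v) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def project_budget(v, total):
    """z-update: Euclidean projection onto {z >= 0, sum(z) <= total}."""
    w = np.maximum(v, 0.0)
    if w.sum() <= total:
        return w
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - total)[0][-1]
    return np.maximum(v - (css[k] - total) / (k + 1), 0.0)

# Main loop: slices update locally (parallelizable across BSs/fog nodes);
# the orchestrator-like step enforces the two shared budget constraints.
b = np.full(n, B_TOT / n); c = np.full(n, C_TOT / n)  # bandwidth, compute
zb, zc = b.copy(), c.copy()                           # consensus copies
ub, uc = np.zeros(n), np.zeros(n)                     # scaled duals
for _ in range(200):
    b = np.array([local_update(L[i], zb[i] - ub[i]) for i in range(n)])
    c = np.array([local_update(W[i], zc[i] - uc[i]) for i in range(n)])
    zb = project_budget(b + ub, B_TOT)
    zc = project_budget(c + uc, C_TOT)
    ub += b - zb
    uc += c - zc

print("bandwidth shares:", np.round(b, 2), "(sum %.1f)" % b.sum())
print("compute shares:  ", np.round(c, 2), "(sum %.1f)" % c.sum())
print("average latency: %.3f" % np.mean(L / b + W / c))
```

On this toy instance both budgets bind at convergence, which is the qualitative behavior one expects when both resources are scarce.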




Read also

Yingyu Li, Anqi Huang, Yong Xiao (2020)
Network slicing has been considered one of the key enablers for 5G to support diversified IoT services and application scenarios. This paper studies distributed network slicing for a massive-scale IoT network supported by 5G with fog computing. Multiple services with various requirements need to be supported by both the spectrum resources offered by the 5G network and the computational resources of the fog computing network. We propose a novel distributed framework based on a new control plane entity, the federated orchestrator (F-orchestrator), which can coordinate the spectrum and computational resources without requiring any exchange of local data or resource information from the BSs. We propose a distributed resource allocation algorithm based on the Alternating Direction Method of Multipliers with Partial Variable Splitting (DistADMM-PVS). We prove that DistADMM-PVS minimizes the average service response time of the entire network with guaranteed worst-case performance for all supported types of services when the coordination between the F-orchestrator and the BSs is perfectly synchronized. Motivated by the observation that coordination synchronization may result in coordination delay that can be intolerable when the network is large in scale, we propose a novel asynchronous ADMM algorithm (AsynADMM). We prove that AsynADMM converges to the globally optimal solution with improved scalability and negligible coordination delay. We evaluate the performance of our proposed framework using two months of traffic data collected from an in-campus smart transportation system supported by a 5G network. Extensive simulations have been conducted for both pedestrian- and vehicular-related services during peak and non-peak hours. Our results show that the proposed framework offers a significant reduction in service response time for both supported services, especially compared to network slicing with only a single resource.
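A minimal sketch of the asynchronous coordination pattern mentioned above, under assumed names and a toy quadratic cost: each round only a random subset of BSs reports a fresh local update, and the orchestrator step proceeds with the latest (possibly stale) values. This is not the paper's AsynADMM, only an illustration of why strict synchronization is not required for the loop to make progress.

```python
# A toy asynchronous loop, assuming a quadratic local cost (x_i - d_i)^2.
import numpy as np

rng = np.random.default_rng(1)
n, TOTAL, RHO = 5, 50.0, 1.0
demand = rng.uniform(5.0, 20.0, n)       # assumed per-BS resource demand

def local_prox(d, v, rho=RHO):
    """x-update for the toy cost: argmin (x-d)^2 + (rho/2)(x-v)^2."""
    return (2.0 * d + rho * v) / (2.0 + rho)

def project_budget(v, total):
    """Projection onto {z >= 0, sum(z) <= total} (orchestrator step)."""
    w = np.maximum(v, 0.0)
    if w.sum() <= total:
        return w
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - total)[0][-1]
    return np.maximum(v - (css[k] - total) / (k + 1), 0.0)

x = np.full(n, TOTAL / n); z = x.copy(); u = np.zeros(n)
for _ in range(300):
    arrived = rng.random(n) < 0.4        # only ~40% of BSs report this round
    for i in np.where(arrived)[0]:       # stragglers keep their stale x_i
        x[i] = local_prox(demand[i], z[i] - u[i])
    z = project_budget(x + u, TOTAL)     # proceeds without waiting
    u += x - z
print("demands:   ", np.round(demand, 2), "(sum %.1f)" % demand.sum())
print("allocation:", np.round(z, 2), "(sum %.1f)" % z.sum())
```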
One of the most important aspects of moving toward next-generation networks such as 5G/6G is enabling network slicing in an efficient manner. The most challenging issues are the uncertainties in resource consumption and communication demand. Because slices arrive to the network at different times and their lifespans vary, the solution should react dynamically to online slice requests. The joint problem of online admission control and resource allocation, taking energy consumption into account, is formulated mathematically as an Integer Linear Program (ILP), in which the $\Gamma$-robustness concept is exploited to handle uncertainties in Virtual Link (VL) bandwidths and Virtual Network Function (VNF) workloads. An optimal algorithm that adopts this mathematical model is then proposed. To overcome the high computational complexity of the ILP, which is NP-hard, a new heuristic algorithm is developed. The assessment results indicate that the heuristic is effective in increasing the number of accepted requests, decreasing power consumption, and providing adjustable tolerance against VNF workload and VL traffic uncertainties, separately. Considering the acceptance ratio and power consumption, the two main components of the objective function, the heuristic exhibits optimality gaps of about 7% and 12%, respectively, while running about 30x faster than the optimal algorithm.
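For context, $\Gamma$-robustness here refers to the standard Bertsimas-Sim construction, in which at most $\Gamma_i$ uncertain coefficients per constraint are allowed to deviate from their nominal values at once. A generic form, with hypothetical symbols rather than the paper's exact ILP, is:

```latex
% Nominal constraint with uncertain coefficients
%   a_{ij} \in [\bar a_{ij} - \hat a_{ij},\, \bar a_{ij} + \hat a_{ij}]:
%   \sum_j a_{ij} x_j \le b_i .
% Gamma-robust counterpart: at most \Gamma_i coefficients deviate at once.
\sum_{j} \bar a_{ij} x_j
  \;+\; \max_{S \subseteq J,\ |S| \le \Gamma_i} \sum_{j \in S} \hat a_{ij} \lvert x_j \rvert
  \;\le\; b_i
% Equivalent linear reformulation (LP duality on the inner maximization):
\sum_{j} \bar a_{ij} x_j + \Gamma_i z_i + \sum_{j} p_{ij} \le b_i,
\qquad z_i + p_{ij} \ge \hat a_{ij} y_j,
\qquad -y_j \le x_j \le y_j,
\qquad z_i,\, p_{ij},\, y_j \ge 0 .
```

The appeal of this construction is that the protected constraint stays linear, so the robust ILP is no harder in kind than the nominal one, and $\Gamma_i$ acts as the adjustable knob between nominal and fully worst-case operation that the abstract refers to.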
Network slicing has emerged as a business opportunity for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that make resource management consistent with user activity per slice. In that regard, deep reinforcement learning (DRL), which focuses on interacting with the environment by trying alternative actions and reinforcing those that produce more rewarding consequences, is a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate its application to typical resource management scenarios for network slicing, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we discuss the challenges of applying DRL to network slicing from a general perspective.
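To make the RL idea concrete in the radio-resource-slicing setting, here is a deliberately small tabular Q-learning sketch (not the deep networks the paper evaluates): the state is a discretized demand vector, an action is a bandwidth split across slices, and the reward is the traffic actually served. All quantities are toy assumptions.

```python
# Tabular Q-learning sketch for a toy bandwidth-slicing task.
import itertools, random

random.seed(0)
SLICES, BLOCKS = 3, 6
# Actions: every way to split BLOCKS bandwidth blocks across SLICES slices.
ACTIONS = [a for a in itertools.product(range(BLOCKS + 1), repeat=SLICES)
           if sum(a) == BLOCKS]

def draw_demand():
    """Discretized per-slice demand (toy i.i.d. traffic model)."""
    return tuple(random.randint(0, 3) for _ in range(SLICES))

Q = {}                                    # Q[(state, action_index)] -> value
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
state = draw_demand()
for _ in range(30000):
    if random.random() < EPS:             # epsilon-greedy exploration
        a = random.randrange(len(ACTIONS))
    else:
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((state, i), 0.0))
    # Reward: traffic served = sum of min(allocated, demanded) per slice.
    reward = sum(min(x, d) for x, d in zip(ACTIONS[a], state))
    nxt = draw_demand()
    best_next = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
    Q[(state, a)] = Q.get((state, a), 0.0) + ALPHA * (
        reward + GAMMA * best_next - Q.get((state, a), 0.0))
    state = nxt

demo = (3, 0, 2)
best = max(range(len(ACTIONS)), key=lambda i: Q.get((demo, i), 0.0))
print("demand", demo, "-> learned split", ACTIONS[best])
```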
To address the rising demand for strong packet delivery guarantees in networking, we study a novel way to perform graph resource allocation. We first introduce allocation graphs, in which nodes can independently set local resource limits based on physical constraints or policy decisions. In this scenario we formalize the distributed path allocation (PAdist) problem, which consists in allocating resources to paths considering only local on-path information -- importantly, without knowing which other paths could have an allocation -- while at the same time achieving the global property of never exceeding the available resources. Our core contribution, the global myopic allocation (GMA) algorithm, is a solution to this problem. We prove that GMA can compute unconditional allocations for all paths on a graph while never over-allocating resources. Further, we prove that GMA is Pareto optimal with respect to allocation size, and that it has linear complexity in the input size. Finally, we show with simulations that this theoretical result can indeed be applied to practical scenarios, as the resulting path allocations are large enough to meet the requirements of practically relevant applications.
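The following toy sets up the PAdist ingredients under assumed names: per-node local limits, paths as node sequences, and a checker for the global no-over-allocation property that GMA guarantees from on-path information alone. The naive equal-share rule included here is not the GMA algorithm and carries no such guarantee; the checker simply makes the property testable.

```python
# Toy model of the PAdist setting (assumed names, not the paper's GMA).
from collections import defaultdict

limits = {"A": 10.0, "B": 6.0, "C": 8.0, "D": 10.0}   # per-node limits
paths = {                                             # paths as node lists
    "p1": ["A", "B", "C"],
    "p2": ["A", "B", "D"],
    "p3": ["D", "C"],
}

def over_allocated(alloc):
    """Return the nodes whose local limit the path allocations exceed."""
    used = defaultdict(float)
    for p, nodes in paths.items():
        for v in nodes:
            used[v] += alloc[p]
    return {v: used[v] for v in used if used[v] > limits[v] + 1e-9}

# A naive myopic rule using only on-path information: give each path an
# equal share of its tightest on-path limit. Unlike GMA, this rule is NOT
# guaranteed safe in general -- the checker above makes that testable.
n_paths_hint = len(paths)   # in the real problem this count is unknown
alloc = {p: min(limits[v] for v in nodes) / n_paths_hint
         for p, nodes in paths.items()}
print("allocations:", {p: round(a, 2) for p, a in alloc.items()})
print("violations:", over_allocated(alloc) or "none")
```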
In the standard mechanism design framework, agents' messages are gathered at a central point, and allocation/tax functions are calculated in a centralized manner, i.e., as functions of all network agents' messages. This requirement may cause communication and computation overhead and necessitates the design of mechanisms that alleviate this bottleneck. We consider a scenario where message transmission can only be performed locally, so that the mechanism's allocation/tax functions can be calculated in a decentralized manner. Each agent transmits messages to her local neighborhood, as defined by a given message-exchange network, and her allocation/tax functions are functions only of the available neighborhood messages. This scenario gives rise to a novel research problem that we call distributed mechanism design. In this paper, we propose two distributed mechanisms for network utility maximization problems that involve private and public goods with competition and cooperation between agents. As a concrete example, we use the problem of rate allocation in networks with either unicast or multirate multicast transmission protocols. The proposed mechanism for each protocol fully implements the optimal allocation in Nash equilibria, and its message space dimensionality scales linearly with the number of agents in the network.
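For background on the underlying optimization, the sketch below runs standard dual decomposition for a unicast network utility maximization (NUM) instance with logarithmic utilities: links adjust prices from their load, and each flow best-responds to the sum of its on-path prices. This is the classic decentralized baseline, not the Nash-implementing mechanism the paper proposes; the topology and weights are assumptions.

```python
# A minimal sketch, assuming a 2-link/3-flow topology and U_i(x) = w_i log x.
import numpy as np

# Routing matrix R[l, i] = 1 if flow i crosses link l (assumed topology:
# flow 0 uses link 0, flow 1 uses both links, flow 2 uses link 1).
R = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
cap = np.array([1.0, 2.0])            # link capacities (assumed)
w = np.array([1.0, 2.0, 1.0])         # utility weights (assumed)

lam = np.ones(R.shape[0])             # link prices (dual variables)
STEP = 0.05
for _ in range(2000):
    price = R.T @ lam                 # each flow sees its on-path price sum
    x = w / np.maximum(price, 1e-9)   # U_i'(x) = w_i/x = price => x = w/price
    lam = np.maximum(lam + STEP * (R @ x - cap), 0.0)  # dual subgradient step
print("rates:", np.round(x, 3))
print("link loads:", np.round(R @ x, 3), "capacities:", cap)
```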