
GMA: A Pareto Optimal Distributed Resource-Allocation Algorithm

Published by: Giacomo Giuliari
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





To address the rising demand for strong packet-delivery guarantees in networking, we study a novel way to perform graph resource allocation. We first introduce allocation graphs, in which nodes can independently set local resource limits based on physical constraints or policy decisions. In this setting we formalize the distributed path-allocation (PAdist) problem, which consists of allocating resources to paths using only local on-path information -- importantly, without knowing which other paths could have an allocation -- while at the same time achieving the global property of never exceeding the available resources. Our core contribution, the global myopic allocation (GMA) algorithm, is a solution to this problem. We prove that GMA can compute unconditional allocations for all paths on a graph without ever over-allocating resources. Further, we prove that GMA is Pareto optimal with respect to allocation size and that it has linear complexity in the input size. Finally, we show with simulations that these theoretical results can indeed be applied to practical scenarios, as the resulting path allocations are large enough to meet the requirements of practically relevant applications.
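The abstract describes the PAdist setting only at a high level and does not give GMA's actual computation, so the following is just a minimal toy sketch of the problem setup: hypothetical nodes A, B, and C each set a local limit and a cap on how many paths they are willing to serve, and a path's allocation is derived purely from on-path information. All names and numbers are illustrative assumptions, not the paper's GMA formula.

```python
# Toy illustration of the PAdist setting (not the paper's GMA formula):
# each node independently sets a local resource limit and, myopically,
# grants every traversing path an equal share of that limit. The path
# allocation is bounded by the smallest on-path share, so no node can
# be over-allocated regardless of which other paths exist.

# Hypothetical inputs: per-node resource limits and a per-node cap on
# how many paths the node is willing to serve (illustrative values).
local_limit = {"A": 100.0, "B": 60.0, "C": 80.0}
max_paths_per_node = {"A": 10, "B": 10, "C": 10}

def myopic_path_allocation(path):
    """Allocate using only on-path information: min over per-node shares."""
    shares = [local_limit[v] / max_paths_per_node[v] for v in path]
    return min(shares)

if __name__ == "__main__":
    print(myopic_path_allocation(["A", "B", "C"]))  # 6.0, limited by node B
```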




Read also

We consider a resource allocation problem involving a large number of agents with individual constraints subject to privacy, and a central operator whose objective is to optimize a global, possibly nonconvex, cost while satisfying the agents' constraints -- for instance, an energy operator in charge of managing the energy-consumption flexibilities of many individual consumers. We provide a privacy-preserving algorithm that computes the optimal allocation of resources without requiring any agent to reveal her private information (constraints and individual solution profile) either to the central operator or to a third party. Our method relies on an aggregation procedure: we iteratively compute a global allocation of resources and gradually ensure the existence of a disaggregation, that is, individual profiles satisfying the agents' private constraints, through a protocol involving the generation of polyhedral cuts and secure multiparty computations (SMC). To obtain these cuts, we use an alternate projection method, which is implemented locally by each agent, preserving her privacy needs. We especially address the case in which the local and global constraints define a transportation polytope. We then provide theoretical convergence estimates together with numerical results, showing that the algorithm can be effectively used to solve the allocation problem in high dimension while addressing privacy issues.
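As an illustration of the alternating-projection step that each agent could run locally, here is a minimal sketch that projects a profile back and forth between a box-constraint set and the hyperplane of allocations summing to a target. The polyhedral cuts and secure multiparty computation that make the paper's scheme privacy-preserving are omitted, and all bounds and the target are made-up values.

```python
# Minimal sketch of the alternating-projection idea mentioned above
# (illustrative only; the cuts and SMC protocol are not shown). We
# project a profile between an agent's box constraints and the
# hyperplane {x : sum(x) = target} until the iterates settle.
import numpy as np

def project_box(x, lo, hi):
    return np.clip(x, lo, hi)

def project_sum(x, target):
    return x + (target - x.sum()) / x.size

def alternate_projections(lo, hi, target, iters=200):
    x = (lo + hi) / 2.0
    for _ in range(iters):
        x = project_sum(project_box(x, lo, hi), target)
    return x

lo = np.array([0.0, 0.0, 1.0])   # assumed per-component lower bounds
hi = np.array([2.0, 3.0, 2.0])   # assumed per-component upper bounds
print(alternate_projections(lo, hi, target=5.0))
```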
Anqi Huang, Yingyu Li, Yong Xiao (2020)
Network slicing has been considered one of the key enablers for 5G to support diversified services and application scenarios. This paper studies distributed network slicing utilizing both the spectrum resource offered by the communication network and the computational resources of a coexisting fog computing network. We propose a novel distributed framework based on a new control-plane entity, the regional orchestrator (RO), which can be deployed between base stations (BSs) and fog nodes to coordinate and control their bandwidth and computational resources. We propose a distributed resource allocation algorithm based on the Alternating Direction Method of Multipliers with Partial Variable Splitting (DistADMM-PVS). We prove that the proposed algorithm minimizes the average latency of the entire network while guaranteeing satisfactory latency performance for every supported type of service. Simulation results show that the proposed algorithm converges much faster than several existing algorithms. Joint network slicing with both bandwidth and computational resources offers around a 15% overall latency reduction compared to network slicing with only a single resource.
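The abstract does not spell out DistADMM-PVS, so the sketch below swaps in a much simpler price-based dual-decomposition loop to illustrate the coordination pattern between an orchestrator-like coordinator and resource-holding nodes: a price is adjusted until the nodes' bandwidth demands fit a shared budget. The 1/x latency proxy and all constants are illustrative assumptions, not the paper's formulation.

```python
# Simplified stand-in for the coordination pattern described above:
# dual decomposition with a price signal, not the paper's DistADMM-PVS.
# Each node picks its bandwidth share from the price alone; the
# coordinator raises the price when total demand exceeds the budget.
import math

CAPACITY = 10.0   # assumed shared bandwidth budget
N_NODES = 4

def node_response(price):
    # argmin_x 1/x + price * x  ->  x = 1 / sqrt(price)
    return 1.0 / math.sqrt(price)

price, step = 1.0, 0.05
for _ in range(500):
    demand = sum(node_response(price) for _ in range(N_NODES))
    price = max(price + step * (demand - CAPACITY), 1e-6)

print(round(node_response(price), 3))  # ~2.5 per node, meeting the budget
```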
We study online resource allocation in a cloud computing platform through a posted pricing mechanism: the cloud provider publishes a unit price for each resource type, which may vary over time; upon arrival at the cloud system, a cloud user either takes the current prices, renting resources to execute its job, or refuses the prices without running its job there. We design pricing functions based on the current resource utilization ratios, in a wide array of demand-supply relationships and resource occupation durations, and prove worst-case competitive ratios of the pricing functions in terms of social welfare. In the basic case of a single-type, non-recycled resource (i.e., allocated resources are not later released for reuse), we prove that our pricing function design is optimal, in that any other pricing function can only lead to a worse competitive ratio. Insights obtained from the basic cases are then used to generalize the pricing functions to more realistic cloud systems with multiple types of resources, where a job occupies its allocated resources for a number of time slots until completion, at which point the resources are returned to the cloud resource pool.
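The paper's pricing functions are not given in the abstract; the sketch below only illustrates the posted-pricing mechanic with an assumed exponential price curve in the current utilization ratio, where each arriving job takes or refuses the posted price. All constants and the arrival sequence are invented for illustration.

```python
# Hedged sketch of a posted-pricing loop in the spirit of the abstract.
# The exponential price curve and all constants are illustrative
# assumptions, not the paper's actual pricing functions.
P_MIN, P_MAX = 0.1, 8.0   # assumed lowest user value / highest marginal price
CAPACITY = 100.0

def unit_price(utilization):
    # Price grows from P_MIN to P_MAX as utilization goes from 0 to 1.
    return P_MIN * (P_MAX / P_MIN) ** utilization

def admit(job_units, job_value, used):
    """A job is admitted only if it can pay the currently posted price."""
    price = unit_price(used / CAPACITY)
    return job_value >= price * job_units

used = 0.0
for units, value in [(30, 10.0), (40, 25.0), (40, 30.0)]:
    if admit(units, value, used) and used + units <= CAPACITY:
        used += units
print(used)  # 70.0 under these illustrative arrivals
```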
The recently created IETF 6TiSCH working group combines the high reliability and low energy consumption of IEEE 802.15.4e Time Slotted Channel Hopping with IPv6 for the industrial Internet of Things. We propose a distributed link scheduling algorithm, called Local Voting, for 6TiSCH networks that adapts the schedule to the network conditions. The algorithm tries to equalize the link load (defined as the ratio of the queue length over the number of allocated cells) through cell reallocation. Local Voting calculates the number of cells to be added or released by the 6TiSCH Operation Sublayer (6top). Compared to a representative algorithm from the literature, Local Voting provides simultaneously high reliability and low end-to-end latency while consuming significantly less energy. Its performance has been examined and compared to the On-the-Fly algorithm in the 6TiSCH simulator, modeling an industrial environment with 50 sensors.
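A toy sketch of the load-equalization idea is given below: link load is the queue length divided by the allocated cells, and each link computes how many cells to add or release so its load moves toward the average. The actual 6top negotiation and neighbour signalling are omitted, and the queue and cell numbers are invented.

```python
# Toy sketch of the load-equalization idea behind Local Voting
# (6top signalling omitted; all numbers illustrative). Load is
# queue length over allocated cells; each link targets the average.
links = {"n1->n2": {"queue": 12, "cells": 2},
         "n2->n3": {"queue": 3,  "cells": 3}}

def load(link):
    return link["queue"] / link["cells"]

avg_load = sum(load(l) for l in links.values()) / len(links)

for name, link in links.items():
    # Cells needed so that queue / cells is close to the average load.
    target_cells = max(1, round(link["queue"] / avg_load))
    delta = target_cells - link["cells"]
    print(name, "add" if delta > 0 else "release", abs(delta), "cells")
```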
In the last few years there has been significant growth in the area of wireless communication. IEEE 802.16/WiMAX is a network designed to provide high-speed, wide-area broadband wireless access; WiMAX is an emerging wireless technology for creating multi-hop mesh networks. Future-generation networks will be characterized by variable and high data rates, Quality of Service (QoS), and seamless mobility both within a network and between networks of different technologies and service providers. The technology developed to meet these requirements, standardized by the IEEE as 802.16, is also called WiMAX (Worldwide Interoperability for Microwave Access). This architecture aims to bring long-range connectivity, high data rates, strong security, low power consumption, excellent Quality of Service, and low deployment costs to a wireless access technology at the metropolitan level. In this paper we analyze the performance of location-based resource allocation for WiMAX and WLAN-WiMAX clients, and in a second phase we study rate-adaptive algorithms. The base station (BS) first performs ranging for all subscribers, then establishes the links, and in the final phase allocates resources through subcarrier allocation according to demand (UL), i.e., video, voice, and data applications. We propose a linear approach, Active-Set optimization, and a Genetic Algorithm for resource allocation in downlink Mobile WiMAX networks. The purpose of the proposed algorithms is to maximize total throughput. Simulation results show that the Genetic Algorithm and the Active-Set algorithm perform better than previous methods in terms of capacity, but the GA has higher complexity than Active-Set.
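As a rough illustration of a genetic-algorithm approach to throughput-maximizing subcarrier allocation, here is a small sketch with randomly generated per-user rates; the population size, GA operators, and rate values are illustrative assumptions rather than the authors' exact formulation.

```python
# Small genetic-algorithm sketch for downlink subcarrier allocation in
# the spirit of the last abstract (rate-adaptive, throughput-maximizing).
# Channel rates and GA parameters are illustrative assumptions.
import random

random.seed(1)
N_USERS, N_SUBCARRIERS = 3, 16
# Assumed per-user, per-subcarrier achievable rates (arbitrary numbers).
rate = [[random.uniform(0.5, 3.0) for _ in range(N_SUBCARRIERS)]
        for _ in range(N_USERS)]

def fitness(assignment):
    # Total throughput of an assignment (subcarrier index -> user index).
    return sum(rate[user][sc] for sc, user in enumerate(assignment))

def crossover(a, b):
    cut = random.randrange(1, N_SUBCARRIERS)
    return a[:cut] + b[cut:]

def mutate(a, prob=0.1):
    return [random.randrange(N_USERS) if random.random() < prob else u for u in a]

population = [[random.randrange(N_USERS) for _ in range(N_SUBCARRIERS)]
              for _ in range(30)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(20)]

print(round(fitness(max(population, key=fitness)), 2))  # best total throughput
```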