
Communication-efficient Distributed Multi-resource Allocation

Published by: Syed Eqbal Alam
Publication date: 2018
Research field: Informatics engineering
Paper language: English

In several smart city applications, multiple resources must be allocated among competing agents that are coupled through the shared resources, while direct communication between agents is restricted, whether by limitations of the communication infrastructure or by privacy considerations. We propose a distributed algorithm to solve such distributed multi-resource allocation problems with no direct inter-agent communication. We do so by extending a recently introduced additive-increase multiplicative-decrease (AIMD) algorithm, which uses only very little communication between the system and the agents: a control unit broadcasts a one-bit signal to the agents whenever one of the allocated resources exceeds its capacity, and the agents respond to this signal in a probabilistic manner. In the proposed algorithm, each agent decides on its resource demand locally and is unaware of the resource allocations of the other agents. In empirical results, we observe that the average allocations converge over time to the optimal allocations.
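To make the feedback loop concrete, here is a minimal single-resource sketch of the AIMD scheme described above, in Python. The step size add_step, the back-off factor back_off, and the fixed response probability response_prob are illustrative assumptions; the paper's exact probabilistic response rule may differ.

import random

def aimd_single_resource(num_agents=50, capacity=100.0, add_step=0.05,
                         back_off=0.5, response_prob=0.3, steps=20000):
    # Each agent starts with a small random demand; no agent ever sees
    # another agent's allocation.
    demand = [random.uniform(0.0, 1.0) for _ in range(num_agents)]
    running_avg = [0.0] * num_agents
    for t in range(1, steps + 1):
        if sum(demand) >= capacity:
            # Control unit broadcasts a one-bit "capacity exceeded" signal;
            # each agent independently backs off with some probability
            # (multiplicative decrease).
            for j in range(num_agents):
                if random.random() < response_prob:
                    demand[j] *= back_off
        else:
            # No signal: every agent raises its demand a little
            # (additive increase).
            for j in range(num_agents):
                demand[j] += add_step
        for j in range(num_agents):
            running_avg[j] += (demand[j] - running_avg[j]) / t
    # It is these time-averaged allocations that converge in the
    # paper's experiments.
    return running_avg

For multiple coupled resources, as in the paper, each resource has its own capacity signal and each agent maintains one demand value per resource.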


Read also

We propose a distributed algorithm to solve a special distributed multi-resource allocation problem with no direct inter-agent communication. We do so by extending a recently introduced additive-increase multiplicative-decrease (AIMD) algorithm, which only uses very little communication between the system and agents: a control unit broadcasts a one-bit signal to agents whenever one of the allocated resources exceeds capacity, and agents respond to this signal in a probabilistic manner. In the proposed algorithm, each agent is unaware of the resource allocation of other agents. We also propose a version of the AIMD algorithm for multiple binary resources (e.g., parking spaces). Binary resources are indivisible unit-demand resources, and each agent is allocated either one unit of the resource or none. In empirical results, we observe that in both cases the average allocations converge over time to optimal allocations.
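As a rough illustration of the binary-resource case, the sketch below lets each agent hold at most one unit (for example, a parking space) and release it probabilistically when the one-bit over-capacity signal arrives. The acquire_prob and release_prob values are invented for illustration and are not the paper's response probabilities.

import random

def binary_aimd(num_agents=200, capacity=50, acquire_prob=0.02,
                release_prob=0.2, steps=20000):
    holds = [0] * num_agents          # 1 if the agent holds one unit, else 0
    running_avg = [0.0] * num_agents  # time-averaged allocation per agent
    for t in range(1, steps + 1):
        if sum(holds) >= capacity:
            # One-bit signal: current holders give up their unit
            # probabilistically (the "decrease" step).
            for j in range(num_agents):
                if holds[j] and random.random() < release_prob:
                    holds[j] = 0
        else:
            # Spare capacity: agents without a unit request one
            # probabilistically (the "increase" step).
            for j in range(num_agents):
                if not holds[j] and random.random() < acquire_prob:
                    holds[j] = 1
        for j in range(num_agents):
            running_avg[j] += (holds[j] - running_avg[j]) / t
    return running_avg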
Communication remains the most significant bottleneck in the performance of distributed optimization algorithms for large-scale machine learning. In this paper, we propose a communication-efficient framework, CoCoA, that uses local computation in a primal-dual setting to dramatically reduce the amount of necessary communication. We provide a strong convergence rate analysis for this class of algorithms, as well as experiments on real-world distributed datasets with implementations in Spark. In our experiments, we find that, as compared to state-of-the-art mini-batch methods, CoCoA converges to the same solution quality substantially faster.
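The communication pattern behind such frameworks can be shown on a toy least-squares problem: many cheap local steps per round, with a single aggregated update vector sent per worker per round. This is a sketch of the pattern only; CoCoA itself approximately solves local dual subproblems rather than taking plain local gradient steps.

import numpy as np

rng = np.random.default_rng(0)
num_workers, n_local, dim = 4, 100, 10
# Each worker holds a private shard (A_k, b_k) of a least-squares problem.
shards = [(rng.normal(size=(n_local, dim)), rng.normal(size=n_local))
          for _ in range(num_workers)]

w = np.zeros(dim)
for _ in range(50):                      # 50 communication rounds in total
    deltas = []
    for A, b in shards:                  # runs in parallel in a real system
        delta = np.zeros(dim)
        for _ in range(20):              # 20 local steps, no communication
            grad = A.T @ (A @ (w + delta) - b) / len(b)
            delta -= 0.05 * grad
        deltas.append(delta)
    w += sum(deltas) / num_workers       # one small vector per worker per round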
Network-distributed optimization has attracted significant attention in recent years due to its ever-increasing applications. However, the classic decentralized gradient descent (DGD) algorithm is communication-inefficient for large-scale and high-dimensional network-distributed optimization problems. To address this challenge, many compressed DGD-based algorithms have been proposed. However, most of the existing works have high complexity and assume compressors with bounded noise power. To overcome these limitations, in this paper, we propose a new differential-coded compressed DGD (DC-DGD) algorithm. The key features of DC-DGD include: i) DC-DGD works with general SNR-constrained compressors, relaxing the bounded noise power assumption; ii) the differential-coded design entails the same convergence rate as the original DGD algorithm; and iii) DC-DGD has the same low-complexity structure as the original DGD due to a self-noise-reduction effect. Moreover, the above features inspire us to develop a hybrid compression scheme that offers a systematic mechanism to minimize the communication cost. Finally, we conduct extensive experiments to verify the efficacy of the proposed DC-DGD and hybrid compressor.
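The differential-coding idea at the core of DC-DGD can be sketched as follows: each node transmits only a compressed difference against its last transmitted surrogate, and sender and receivers update the surrogate identically, so quantization noise does not accumulate. The uniform quantizer and the function names below are illustrative assumptions, not the paper's construction.

import numpy as np

def quantize(v, levels=16):
    # A simple uniform quantizer standing in for a general noisy compressor.
    scale = np.max(np.abs(v)) + 1e-12
    return np.round(v / scale * levels) / levels * scale

def encode_state(x, x_hat):
    # Transmit only the compressed *difference* from the surrogate x_hat,
    # then advance our own copy of the surrogate exactly as receivers will.
    delta = quantize(x - x_hat)
    return delta, x_hat + delta

def decode_state(x_hat, delta):
    # Each receiver keeps a surrogate per neighbor and applies the update.
    return x_hat + delta

# Toy check: the surrogate tracks the true iterate within quantization error.
x, x_hat = np.ones(5), np.zeros(5)
for _ in range(10):
    x = x + 0.1 * np.arange(5)           # stand-in for a local DGD update
    delta, x_hat = encode_state(x, x_hat)
    print(np.max(np.abs(x - x_hat)))     # stays small; error does not build up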
In distributed machine learning, data is dispatched to multiple machines for processing. Motivated by the fact that similar data points often belong to the same or similar classes, and more generally, that classification rules of high accuracy tend to be locally simple but globally complex (Vapnik & Bottou 1993), we propose data-dependent dispatching that takes advantage of such structure. We present an in-depth analysis of this model, providing new algorithms with provable worst-case guarantees, analysis proving that existing scalable heuristics perform well in natural non-worst-case conditions, and techniques for extending a dispatching rule from a small sample to the entire distribution. We overcome novel technical challenges to satisfy important conditions for accurate distributed learning, including fault tolerance and balancedness. We empirically compare our approach with baselines based on random partitioning, balanced partition trees, and locality-sensitive hashing, showing that we achieve significantly higher accuracy on both synthetic and real-world image and advertising datasets. We also demonstrate that our technique scales strongly with the available computing power.
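One simple instance of data-dependent dispatching, sketched under stated assumptions (nearest-anchor assignment learned from a small random sample; the paper's actual rules additionally enforce balancedness and fault tolerance, which this toy ignores):

import numpy as np

def dispatch_by_locality(points, num_machines, sample_size=256, seed=0):
    # Learn a dispatch rule from a small sample and extend it to all points:
    # pick anchors from the sample and send each point to the machine of its
    # nearest anchor, so nearby (likely same-class) points co-locate.
    rng = np.random.default_rng(seed)
    sample_idx = rng.choice(len(points), size=min(sample_size, len(points)),
                            replace=False)
    anchors = points[rng.choice(sample_idx, size=num_machines, replace=False)]
    dists = np.linalg.norm(points[:, None, :] - anchors[None, :, :], axis=2)
    return dists.argmin(axis=1)          # machine index for every point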
The $\alpha$-fair resource allocation problem has received remarkable attention and has been studied in numerous application fields. Several algorithms have been proposed in the context of $\alpha$-fair resource sharing to distributively compute its value. However, little work has been done on its structural properties. In this work, we present a lower bound for the optimal solution of the weighted $\alpha$-fair resource allocation problem and compare it with existing propositions in the literature. Our derivations rely on a localization property verified by optimization problems with separable objectives, which permits one to better exploit their local structure. We give a local version of the well-known midpoint-domination axiom used to axiomatically build the Nash bargaining solution (or proportionally fair resource allocation). Moreover, we show how our lower bound can improve the performance of a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM). The evaluation of the algorithm shows that our lower bound can reduce its convergence time by up to two orders of magnitude compared to when the bound is not used at all or is simply looser.
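For reference, the weighted $\alpha$-fair allocation problem discussed above has the standard form

\max_{x \in \mathcal{X}} \sum_{i} w_i \, U_\alpha(x_i),
\qquad
U_\alpha(x) =
\begin{cases}
\dfrac{x^{1-\alpha}}{1-\alpha}, & \alpha \geq 0, \ \alpha \neq 1, \\
\log x, & \alpha = 1,
\end{cases}

with $\alpha = 1$ recovering proportional fairness (the Nash bargaining solution mentioned above) and $\alpha \to \infty$ approaching max-min fairness.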