
Graph Balancing with Orientation Costs

Posted by Ran Yeheskel
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Motivated by the classic Generalized Assignment Problem, we consider the Graph Balancing problem in the presence of orientation costs: given an undirected multi-graph G = (V,E) equipped with edge weights and orientation costs on the edges, the goal is to find an orientation of the edges that minimizes both the maximum total weight of edges oriented toward any vertex (the makespan) and the total orientation cost. We present a general framework for minimizing makespan in the presence of costs that allows us to: (1) achieve bicriteria approximations for the Graph Balancing problem that capture previously known results (Shmoys-Tardos [Math. Program. 93], Ebenlendr-Krcal-Sgall [Algorithmica 14], and Wang-Sitters [Inf. Process. Lett. 16]); and (2) achieve bicriteria approximations for extensions of the Graph Balancing problem that admit hyperedges and unrelated weights. Our framework is based on a remarkably simple rounding of a strengthened linear relaxation. We complement the above by presenting bicriteria lower bounds, with respect to the linear programming relaxations we use, showing that a loss in the total orientation cost is required if one aims for an approximation better than 2 in the makespan.
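
To make the two objectives in the abstract concrete, the following minimal Python sketch brute-forces all orientations of a tiny multigraph and reports the best (makespan, total orientation cost) pair. It only illustrates the problem statement, not the paper's LP-rounding framework; the instance and all names are hypothetical.

from itertools import product

# Each edge: (u, v, weight, cost_if_oriented_toward_u, cost_if_oriented_toward_v)
edges = [
    ("a", "b", 3.0, 1.0, 2.0),
    ("b", "c", 2.0, 0.5, 1.5),
    ("a", "c", 4.0, 2.0, 1.0),
]

def evaluate(orientation):
    """Return (makespan, total_cost); orientation[i] == 1 orients edge i toward v."""
    load, total_cost = {}, 0.0
    for (u, v, w, cost_u, cost_v), toward_v in zip(edges, orientation):
        head = v if toward_v else u
        load[head] = load.get(head, 0.0) + w
        total_cost += cost_v if toward_v else cost_u
    return max(load.values()), total_cost

# Enumerate all 2^|E| orientations; tuple comparison picks the minimum makespan
# and, among those, the cheapest total orientation cost.
best = min(evaluate(c) for c in product((0, 1), repeat=len(edges)))
print("best (makespan, orientation cost):", best)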




Read also

In the graph balancing problem the goal is to orient a weighted undirected graph to minimize the maximum weighted in-degree. This special case of makespan minimization is NP-hard to approximate to a factor better than 3/2 even when there are only two types of edge weights. In this note we describe a simple 3/2 approximation for the graph balancing problem with two edge types, settling this very special case of makespan minimization.
Wei Liu, Li Chen, 2021
Decentralized federated learning (DFL) is a powerful framework for distributed machine learning, and decentralized stochastic gradient descent (SGD) is a driving engine for DFL. The performance of decentralized SGD is jointly influenced by communication efficiency and convergence rate. In this paper, we propose a general decentralized federated learning framework to strike a balance between communication efficiency and convergence performance. The proposed framework performs both multiple local updates and multiple inter-node communications periodically, unifying traditional decentralized SGD methods. We establish strong convergence guarantees for the proposed DFL algorithm without the assumption of a convex objective function. The balance of communication and computation rounds is essential to optimize decentralized federated learning under constrained communication and computation resources. To further improve the communication efficiency of DFL, compressed communication is applied, yielding DFL with compressed communication (C-DFL). The proposed C-DFL exhibits linear convergence for strongly convex objectives. Experimental results on the MNIST and CIFAR-10 datasets illustrate the superiority of DFL over traditional decentralized SGD methods and show that C-DFL further enhances communication efficiency.
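
As a rough illustration of the local-update / communication trade-off described in this abstract, the following Python sketch runs decentralized SGD on a ring of nodes, alternating tau local gradient steps with one gossip-averaging round. It is not the paper's DFL or C-DFL algorithm; the quadratic objectives, mixing matrix, and all parameters are hypothetical assumptions.

import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, tau, rounds, lr = 5, 3, 4, 50, 0.1

# Each node i holds a private quadratic f_i(x) = 0.5 * ||x - b_i||^2.
targets = rng.normal(size=(n_nodes, dim))
x = np.zeros((n_nodes, dim))                # one model copy per node

# Symmetric doubly-stochastic mixing matrix for a ring (self + two neighbors).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

for _ in range(rounds):
    for _ in range(tau):                    # tau local SGD steps per round
        grads = x - targets                 # gradient of each local quadratic
        x -= lr * grads
    x = W @ x                               # one inter-node communication (gossip)

consensus = x.mean(axis=0)
print("distance to global optimum:", np.linalg.norm(consensus - targets.mean(axis=0)))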
Set function optimization is essential in AI and machine learning. We focus on a subadditive set function that generalizes submodularity, and examine the subadditivity of non-submodular functions. We also deal with a minimax subadditive load balancing problem, and present a modularization-minimization algorithm that theoretically guarantees a worst-case approximation factor. In addition, we give a lower bound computation technique for the problem. We apply these methods to the multi-robot routing problem for an empirical performance evaluation.
Motivated by the use of high-speed circuit switches in large-scale data centers, we consider the problem of circuit switch scheduling. In this problem we are given demands between pairs of servers and the goal is to schedule at every time step a matching between the servers while maximizing the total satisfied demand over time. The crux of this scheduling problem is that once one shifts from one matching to a different one a fixed delay $\delta$ is incurred during which no data can be transmitted. For the offline version of the problem we present a $(1-\frac{1}{e}-\epsilon)$ approximation ratio (for any constant $\epsilon > 0$). Since the natural linear programming relaxation for the problem has an unbounded integrality gap, we adopt a hybrid approach that combines the combinatorial greedy with randomized rounding of a different suitable linear program. For the online version of the problem we present a (bi-criteria) $((e-1)/(2e-1)-\epsilon)$-competitive ratio (for any constant $\epsilon > 0$) that exceeds time by an additive factor of $O(\frac{\delta}{\epsilon})$. We note that no uni-criteria online algorithm is possible. Surprisingly, we obtain the result by reducing the online version to the offline one.
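
The following Python sketch illustrates the scheduling model in this abstract with a naive greedy (not the paper's hybrid greedy/LP-rounding algorithm): at each step it runs the matching that serves the most residual demand and pays a reconfiguration delay delta whenever the matching changes. The instance and all parameters are hypothetical.

from itertools import permutations

n, horizon, delta, rate = 3, 12, 2, 1.0
demand = [[0, 5, 2], [3, 0, 4], [1, 6, 0]]  # demand[i][j]: data to send from i to j

def matchings(n):
    """All perfect matchings of senders to receivers, as permutations."""
    return list(permutations(range(n)))

t, current, served = 0, None, 0.0
while t < horizon:
    # Value of running a matching for one time unit on the residual demand.
    best = max(matchings(n),
               key=lambda m: sum(min(rate, demand[i][m[i]]) for i in range(n)))
    if best != current:
        t += delta                          # pay the switching delay, no data moves
        current = best
        continue
    for i in range(n):                      # transmit along the matching for one unit
        sent = min(rate, demand[i][current[i]])
        demand[i][current[i]] -= sent
        served += sent
    t += 1

print("total demand served:", served)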
We give the first polynomial-time approximation scheme (PTAS) for the stochastic load balancing problem when the job sizes follow Poisson distributions. This improves upon the 2-approximation algorithm due to Goel and Indyk (FOCS '99). Moreover, our approximation scheme is an efficient PTAS that has a running time double exponential in $1/\epsilon$ but nearly-linear in $n$, where $n$ is the number of jobs and $\epsilon$ is the target error. Previously, a PTAS (not efficient) was only known for jobs that obey exponential distributions (Goel and Indyk, FOCS '99). Our algorithm relies on several probabilistic ingredients including some (seemingly) new results on scaling and the so-called focusing effect of the maximum of Poisson random variables which might be of independent interest.
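
As a small companion to this abstract, the Python sketch below uses Monte Carlo simulation to estimate the expected makespan of a fixed assignment of Poisson-distributed jobs to machines; it only illustrates the objective, not the paper's PTAS. The job rates and the assignment are hypothetical.

import numpy as np

rng = np.random.default_rng(1)
job_rates = [2.0, 3.0, 1.5, 4.0, 2.5]       # Poisson rate of each job's size
assignment = [0, 1, 0, 2, 1]                # machine index for each job
n_machines, trials = 3, 100_000

# Sample all job sizes for every trial, then accumulate per-machine loads.
sizes = rng.poisson(lam=job_rates, size=(trials, len(job_rates)))
loads = np.zeros((trials, n_machines))
for j, m in enumerate(assignment):
    loads[:, m] += sizes[:, j]

print("estimated expected makespan:", loads.max(axis=1).mean())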