Shortest path algorithms have played a key role in the past century, paving the way for modern-day GPS systems to find optimal routes along static systems in fractions of a second. One application of these algorithms is optimizing the total distance of power lines (specifically in star topological configurations). Due to the relevancy of discovering well-connected electrical systems in certain areas, finding a minimum path that accounts for geological features would have far-reaching consequences in lowering the cost of electric power transmission. We initialize our research by proving the convex hull to be an effective bounding mechanism for star topological minimum path algorithms. Building off this bounding, we propose novel algorithms to manage certain cases that lack existing methods (weighted regions and obstacles) by discretizing Euclidean space into squares and combining pre-existing algorithms that calculate local minima that could be the absolute minimum. We further designate ways to evaluate the iterations necessary to reach a given level of accuracy. Both of these novel algorithms fulfill certain niches that past literature does not cover.
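An illustrative sketch of the bounding idea in this abstract (not the paper's own code): the optimal hub of a star-topology network, like a geometric median, must lie inside the convex hull of the terminal points, so the hull bounds the search region. The helper names below are hypothetical; the hull is computed with Andrew's monotone chain.

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); positive means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def inside_hull(hull, q):
    # q lies inside (or on) a CCW hull iff it is left of every edge
    n = len(hull)
    return all(cross(hull[i], hull[(i + 1) % n], q) >= 0 for i in range(n))
```

A search for the optimal hub can then discard any candidate square whose points fail `inside_hull`, which is the pruning role the abstract attributes to the convex hull.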
In many optimization problems, a feasible solution induces a multi-dimensional cost vector. For example, in load balancing a schedule induces a load vector across the machines. In $k$-clustering, opening $k$ facilities induces an assignment cost vector across the clients. In this paper we consider the following minimum norm optimization problem: Given an arbitrary monotone, symmetric norm, find a solution which minimizes the norm of the induced cost vector. This generalizes many fundamental NP-hard problems. We give a general framework to tackle the minimum norm problem, and illustrate its efficacy in the unrelated machine load balancing and $k$-clustering settings. Our concrete results are the following.
• We give constant-factor approximation algorithms for the minimum norm load balancing problem on unrelated machines, and the minimum norm $k$-clustering problem. To our knowledge, our results constitute the first constant-factor approximations for such a general suite of objectives.
• In load balancing with unrelated machines, we give a $2$-approximation for the problem of finding an assignment minimizing the sum of the $\ell$ largest loads, for any $\ell$. We give a $(2+\varepsilon)$-approximation for the so-called ordered load balancing problem.
• For $k$-clustering, we give a $(5+\varepsilon)$-approximation for the ordered $k$-median problem, significantly improving the constant-factor approximations of Byrka, Sornat, and Spoerhase (STOC 2018) and Chakrabarty and Swamy (ICALP 2018).
• Our techniques also imply $O(1)$-approximations to the best simultaneous optimization factor for any instance of the unrelated machine load balancing and $k$-clustering settings. To our knowledge, these are the first positive simultaneous optimization results in these settings.
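A small sketch of the objectives named above, evaluated on a fixed cost vector (illustrative function names, not from the paper): the Top-$\ell$ objective sums the $\ell$ largest entries, and the ordered objective weights the sorted cost vector by a non-increasing weight vector $w$; Top-$\ell$ is the special case where $w$ is $\ell$ ones followed by zeros.

```python
def top_ell(costs, ell):
    """Sum of the ell largest entries of the cost vector."""
    return sum(sorted(costs, reverse=True)[:ell])

def ordered_norm(costs, w):
    """Ordered objective: dot product of non-increasing weights w
    with the cost vector sorted in non-increasing order.
    For non-negative, non-increasing w this is a monotone symmetric norm."""
    return sum(wi * ci for wi, ci in zip(w, sorted(costs, reverse=True)))
```

Minimizing such norms is the hard part the paper addresses; evaluating them, as here, is straightforward.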
The minimum degree spanning tree (MDST) problem requires the construction of a spanning tree $T$ for a graph $G=(V,E)$ with $n$ vertices, such that the maximum degree $d$ of $T$ is the smallest among all spanning trees of $G$. In this paper, we present two new distributed approximation algorithms for the MDST problem. Our first result is a randomized distributed algorithm that constructs a spanning tree of maximum degree $\hat{d} = O(d \log n)$. It requires $O((D + \sqrt{n}) \log^2 n)$ rounds (w.h.p.), where $D$ is the graph diameter, which matches (within log factors) the optimal round complexity for the related minimum spanning tree problem. Our second result refines this approximation factor by constructing a tree with maximum degree $\hat{d} = O(d + \log n)$, though at the cost of additional polylogarithmic factors in the round complexity. Although efficient approximation algorithms for the MDST problem have been known in the sequential setting since the 1990s, our results are the first efficient distributed solutions for this problem.
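To make the objective concrete, here is a minimal sketch (not the paper's distributed algorithm): any spanning tree, such as a BFS tree, gives an upper bound on the optimum, and MDST asks for the tree minimizing the maximum degree. Function names are illustrative.

```python
from collections import deque

def bfs_spanning_tree(adj, root):
    """Return the edges of a BFS spanning tree of a connected graph,
    given as an adjacency dict {vertex: [neighbors]}."""
    parent = {root: None}
    q = deque([root])
    tree_edges = []
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                tree_edges.append((u, v))
                q.append(v)
    return tree_edges

def max_degree(tree_edges):
    """Maximum degree of the tree defined by its edge list."""
    deg = {}
    for u, v in tree_edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return max(deg.values())
```

On a star graph every spanning tree has maximum degree $n-1$, while on a cycle a BFS tree already achieves the optimum of 2; the interesting cases lie in between.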
This paper gives poly-logarithmic-round, distributed D-approximation algorithms for covering problems with submodular cost and monotone covering constraints (Submodular-cost Covering). The approximation ratio D is the maximum number of variables in any constraint. Special cases include Covering Mixed Integer Linear Programs (CMIP), and Weighted Vertex Cover (with D=2). Via duality, the paper also gives poly-logarithmic-round, distributed D-approximation algorithms for Fractional Packing linear programs (where D is the maximum number of constraints in which any variable occurs), and for Max Weighted c-Matching in hypergraphs (where D is the maximum size of any of the hyperedges; for graphs D=2). The paper also gives parallel (RNC) 2-approximation algorithms for CMIP with two variables per constraint and Weighted Vertex Cover. The algorithms are randomized. All of the approximation ratios exactly match those of comparable centralized algorithms.
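For the D=2 special case mentioned above, Weighted Vertex Cover, the classic centralized pricing (local-ratio) algorithm achieves the same factor of 2; the paper's contribution is obtaining this distributedly. A minimal sketch, with illustrative names:

```python
def weighted_vertex_cover(weights, edges):
    """Bar-Yehuda--Even style pricing 2-approximation:
    each uncovered edge pays down the smaller residual weight of its
    endpoints; a vertex joins the cover when its residual reaches 0."""
    residual = dict(weights)
    cover = set()
    for u, v in edges:
        if u in cover or v in cover:
            continue
        pay = min(residual[u], residual[v])
        residual[u] -= pay
        residual[v] -= pay
        if residual[u] == 0:
            cover.add(u)
        if residual[v] == 0:
            cover.add(v)
    return cover
```

Each payment is charged to an edge that any cover must hit, and a vertex enters the cover only when fully paid for, which yields the factor-2 guarantee regardless of the order in which edges are processed.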
In this paper, we initiate the study of the weighted paging problem with predictions. This continues the recent line of work in online algorithms with predictions, particularly that of Lykouris and Vassilvitskii (ICML 2018) and Rohatgi (SODA 2020) on unweighted paging with predictions. We show that unlike unweighted paging, neither a fixed lookahead nor knowledge of the next request for every page is sufficient information for an algorithm to overcome existing lower bounds in weighted paging. However, a combination of the two, which we call the strong per-request prediction (SPRP) model, suffices to give a 2-competitive algorithm. We also explore the question of gracefully degrading algorithms with increasing prediction error, and give both upper and lower bounds for a set of natural measures of prediction error.
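For contrast with the weighted setting, recall the baseline fact the abstract builds on: in unweighted paging, exact knowledge of each page's next request suffices for optimality via Belady's farthest-in-future rule. A minimal sketch of that rule (not the paper's SPRP algorithm):

```python
def belady_faults(requests, k):
    """Offline optimal fault count for unweighted paging with cache
    size k: on a fault, evict the cached page whose next request is
    farthest in the future (Belady's rule)."""
    cache = set()
    faults = 0
    for i, p in enumerate(requests):
        if p in cache:
            continue
        faults += 1
        if len(cache) >= k:
            def next_use(q):
                # index of q's next request, or infinity if never again
                for j in range(i + 1, len(requests)):
                    if requests[j] == q:
                        return j
                return float('inf')
            cache.remove(max(cache, key=next_use))
        cache.add(p)
    return faults
```

The abstract's lower bounds show that, once pages have heterogeneous weights, this kind of next-request information alone is no longer enough to beat the known competitive ratios.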