
Throughput Optimal and Fast Near-Optimal Scheduling with Heterogeneously Delayed Network-State Information (Extended Version)

Posted by Srinath Narasimha
Publication date: 2015
Research field: Informatics Engineering
Paper language: English





We consider the problem of distributed scheduling in wireless networks where heterogeneously delayed information about the queue lengths and channel states of all links is available at all transmitters. In an earlier work (Reddy et al., Queueing Systems, 2012), a throughput-optimal scheduling policy for this setting was proposed, which we refer to henceforth as the R policy. We study the R policy and examine its two drawbacks: (i) its high computational complexity, and (ii) its non-optimal average per-packet queueing delay. We show that the R policy unnecessarily constrains itself to work with information that is more delayed than what the system affords. We propose a new policy that fully exploits the commonly available information, thereby greatly improving upon the computational complexity and the delay performance of the R policy, and we show that our policy is throughput optimal. Our main contribution in this work is the design of two fast and near-throughput-optimal policies for this setting, whose throughput and runtime performance we characterize analytically. While the R policy takes from a few milliseconds to several tens of seconds to compute a schedule once (depending on the number of links in the network), the running times of the proposed near-throughput-optimal algorithms range from a few microseconds to a few hundred microseconds, making them suitable for practical implementation in networks with heterogeneously delayed information.
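For intuition about the kind of policy discussed above, the following is a minimal, self-contained sketch of max-weight scheduling driven only by delayed queue-length and channel-state samples, under a deliberately trivial one-link-per-slot interference model. The link names, delay values, and arrival/channel dynamics are hypothetical, and this is not the R policy or any of the policies proposed in the paper.

```python
import random
from collections import deque

# Minimal sketch: max-weight scheduling with heterogeneously delayed
# network-state information (NSI). Each link reports its queue length and
# channel rate with its own delay; the scheduler only ever sees the delayed
# samples. Interference model: at most one link transmits per slot.

class DelayedLink:
    def __init__(self, name, delay):
        self.name = name
        self.delay = delay          # NSI delay in slots (heterogeneous across links)
        self.queue = 0              # true current queue length
        self.rate = 1               # true current channel rate
        # Buffer of the last delay+1 (queue, rate) samples; index 0 is the
        # sample that is exactly `delay` slots behind the current state.
        self.history = deque([(0, 1)] * (delay + 1), maxlen=delay + 1)

    def observe(self):
        """(queue, rate) as seen by the scheduler: the sample from `delay` slots ago."""
        return self.history[0]

    def tick(self, arrivals, new_rate, served):
        """Advance one slot: serve, add arrivals, change the channel, record fresh NSI."""
        self.queue = max(self.queue - served, 0) + arrivals
        self.rate = new_rate
        self.history.append((self.queue, self.rate))  # maxlen drops the oldest sample

def max_weight_schedule(links):
    """Pick the single link maximizing (delayed queue length) x (delayed channel rate)."""
    def weight(link):
        queue, rate = link.observe()
        return queue * rate
    return max(links, key=weight)

if __name__ == "__main__":
    links = [DelayedLink("A", delay=1), DelayedLink("B", delay=3), DelayedLink("C", delay=5)]
    for _ in range(1000):
        chosen = max_weight_schedule(links)
        for link in links:
            served = link.rate if link is chosen else 0
            link.tick(arrivals=(1 if random.random() < 0.3 else 0),
                      new_rate=random.choice([1, 2]),
                      served=served)
    print({link.name: link.queue for link in links})
```

The point of the sketch is only the information structure: the schedule in each slot is computed from samples that may be several slots stale, with a different staleness per link.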




Read also

The coflow scheduling problem has emerged as a popular abstraction in the last few years for studying data communication problems within a data center. In this basic framework, each coflow has a set of communication demands, and the goal is to schedule many coflows in a manner that minimizes the total weighted completion time. A coflow is said to complete when all its communication needs are met. This problem has been extensively studied for the case of complete bipartite graphs, which model a data center with full bisection bandwidth, and several approximation algorithms and effective heuristics have been proposed recently. In this work, we study a slightly different model of coflow scheduling in general graphs (to capture traffic between data centers) and develop practical and efficient approximation algorithms for it. Our main result is a randomized 2-approximation algorithm for the single-path and free-path models, significantly improving on prior work. In addition, we demonstrate via extensive experiments that the algorithm is practical, easy to implement, and performs well in practice.
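For reference, the objective described above can be stated in a few lines: a coflow completes only when its last flow does, and the cost of a schedule is the weighted sum of those completion times. The sketch below uses hypothetical weights and per-flow finish times, not anything from the paper.

```python
# Illustrative only: the total-weighted-completion-time objective for coflows.
# A coflow is a set of flows; it completes when its slowest flow completes.

def total_weighted_completion_time(coflows):
    """coflows: list of dicts, each with a 'weight' and per-flow 'finish_times'."""
    total = 0.0
    for coflow in coflows:
        completion = max(coflow["finish_times"])   # last flow to finish
        total += coflow["weight"] * completion
    return total

if __name__ == "__main__":
    # Two hypothetical coflows under some fixed schedule.
    coflows = [
        {"weight": 2.0, "finish_times": [3, 5, 4]},  # completes at t = 5
        {"weight": 1.0, "finish_times": [2, 7]},     # completes at t = 7
    ]
    print(total_weighted_completion_time(coflows))   # 2*5 + 1*7 = 17.0
```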
In this paper, we investigate the combination of synthesis, model-based learning, and online sampling techniques to obtain safe and near-optimal schedulers for a preemptible task scheduling problem. Our algorithms can handle Markov decision processes (MDPs) with $10^{20}$ states and beyond, which cannot be handled by state-of-the-art probabilistic model checkers. We provide probably approximately correct (PAC) guarantees for learning the model. Additionally, we extend Monte-Carlo tree search with advice, computed using safety games or obtained from the earliest-deadline-first scheduler, to safely explore the learned model online. Finally, we implemented our algorithms and compared them empirically against shielded deep Q-learning on large task systems.
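The advice idea can be illustrated in isolation: candidate actions are filtered through a safety-advice predicate (for example, one derived from a safety game or from an earliest-deadline-first scheduler) before being evaluated by sampling. The sketch below is flat Monte-Carlo action selection rather than full Monte-Carlo tree search, and every name and signature in it is hypothetical.

```python
# Simplified illustration of "search with advice": the set of candidate actions
# is restricted to those a safety-advice function allows before sampling.
# This is flat Monte-Carlo action selection, not a full MCTS implementation.

def choose_action(state, candidate_actions, advice_allows, simulate_return, rollouts=200):
    """
    state             -- current scheduler state (opaque to this sketch)
    candidate_actions -- callable: state -> list of actions
    advice_allows     -- callable: (state, action) -> bool (safety advice, e.g. from EDF)
    simulate_return   -- callable: (state, action) -> sampled return of one random rollout
    """
    actions = candidate_actions(state)
    safe = [a for a in actions if advice_allows(state, a)]
    if not safe:                      # if the advice rules out everything, fall back
        safe = actions
    best_action, best_estimate = None, float("-inf")
    for action in safe:
        estimate = sum(simulate_return(state, action) for _ in range(rollouts)) / rollouts
        if estimate > best_estimate:
            best_action, best_estimate = action, estimate
    return best_action
```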
We consider the problem of efficient packet dissemination in wireless networks with point-to-multi-point wireless broadcast channels. We propose a dynamic policy, which achieves the broadcast capacity of the network. This policy is obtained by first transforming the original multi-hop network into a precedence-relaxed virtual single-hop network and then finding an optimal broadcast policy for the relaxed network. The resulting policy is shown to be throughput-optimal for the original wireless network using a sample-path argument. We also prove the NP-completeness of the finite-horizon broadcast problem, which is in contrast with the polynomial time solvability of the problem with point-to-point channels. Illustrative simulation results demonstrate the efficacy of the proposed broadcast policy in achieving the full broadcast capacity with low delay.
We study how to design edge server placement and server scheduling policies under workload uncertainty for 5G networks. We introduce a new metric called resource pooling factor to handle unexpected workload bursts. Maximizing this metric offers a strong enhancement on top of robust optimization against workload uncertainty. Using both real traces and synthetic traces, we show that the proposed server placement and server scheduling policies not only demonstrate better robustness against workload uncertainty than existing approaches, but also significantly reduce the cost of service providers. Specifically, in order to achieve close-to-zero workload rejection rate, the proposed server placement policy reduces the number of required edge servers by about 25% compared with the state-of-the-art approach; the proposed server scheduling policy reduces the energy consumption of edge servers by about 13% without causing much impact on the service quality.
This paper provides three nearly-optimal algorithms for scheduling $t$ jobs in the $\mathsf{CLIQUE}$ model. First, we present a deterministic scheduling algorithm that runs in $O(\mathsf{GlobalCongestion} + \mathsf{dilation})$ rounds for jobs that are sufficiently efficient in terms of their memory. The $\mathsf{dilation}$ is the maximum round complexity of any of the given jobs, and the $\mathsf{GlobalCongestion}$ is the total number of messages in all jobs divided by the per-round bandwidth of $n^2$ of the $\mathsf{CLIQUE}$ model. Both are inherent lower bounds for any scheduling algorithm. Then, we present a randomized scheduling algorithm which runs $t$ jobs in $O(\mathsf{GlobalCongestion} + \mathsf{dilation}\cdot\log{n} + t)$ rounds and only requires that inputs and outputs do not exceed $O(n\log n)$ bits per node, which is met by, e.g., almost all graph problems. Lastly, we adjust the \emph{random-delay-based} scheduling algorithm [Ghaffari, PODC15] from the $\mathsf{CLIQUE}$ model and obtain an algorithm that schedules any $t$ jobs in $O(t/n + \mathsf{LocalCongestion} + \mathsf{dilation}\cdot\log{n})$ rounds, where the $\mathsf{LocalCongestion}$ relates to the congestion at a single node of the $\mathsf{CLIQUE}$. We compare this algorithm to the previous approaches and show their benefit. We schedule the set of jobs on-the-fly, without a priori knowledge of its parameters or the communication patterns of the jobs. In light of the inherent lower bounds, all of our algorithms are nearly-optimal. We exemplify the power of our algorithms by analyzing the message complexity of the state-of-the-art MIS protocol [Ghaffari, Gouleakis, Konrad, Mitrovic and Rubinfeld, PODC18], and we show that we can solve $t$ instances of MIS in $O(t + \log\log\Delta\cdot\log{n})$ rounds, that is, in $O(1)$ amortized time, for $t \geq \log\log\Delta\cdot\log{n}$.
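To give a feel for the random-delay idea mentioned last, here is a toy sketch outside the $\mathsf{CLIQUE}$ model, with made-up job profiles and window sizes: starting each job after an independent uniformly random delay tends to spread the jobs' per-round message load instead of letting their bursts coincide.

```python
import random

# Toy illustration of random-delay scheduling: each job is a list of per-round
# message counts; starting each job after a random delay spreads the aggregate
# load across rounds. Window size and job profiles are made up for illustration.

def schedule_with_random_delays(jobs, delay_window):
    """Return the aggregate per-round load when each job starts at a random offset."""
    delays = [random.randrange(delay_window) for _ in jobs]
    horizon = delay_window + max(len(job) for job in jobs)
    load = [0] * horizon
    for job, delay in zip(jobs, delays):
        for r, messages in enumerate(job):
            load[delay + r] += messages
    return load

if __name__ == "__main__":
    # Ten identical bursty jobs: without delays their bursts all collide in round 0.
    jobs = [[100, 1, 1, 1] for _ in range(10)]
    print("no delays    :", max(schedule_with_random_delays(jobs, delay_window=1)))
    print("random delays:", max(schedule_with_random_delays(jobs, delay_window=10)))
```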
