
Caching with Time Windows and Delays

Added by Debmalya Panigrahi
Publication date: 2020
Research language: English





We consider two generalizations of the classical weighted paging problem that incorporate the notion of delayed service of page requests. The first is the (weighted) Paging with Time Windows (PageTW) problem, which is like the classical weighted paging problem except that each page request only needs to be served before a given deadline. This problem arises in many practical applications of online caching, such as the deadline I/O scheduler in the Linux kernel and video-on-demand streaming. The second, and more general, problem is the (weighted) Paging with Delay (PageD) problem, where the delay in serving a page request results in a penalty being assessed to the objective. This problem generalizes the caching problem to allow delayed service, a line of work that has recently gained traction in online algorithms (e.g., Emek et al. STOC 16, Azar et al. STOC 17, Azar and Touitou FOCS 19). We give $O(\log k \log n)$-competitive algorithms for both the PageTW and PageD problems on $n$ pages with a cache of size $k$. This significantly improves on the previous best bounds of $O(k)$ for both problems (Azar et al. STOC 17). We also consider the offline PageTW and PageD problems, for which we give $O(1)$-approximation algorithms and prove APX-hardness. These are the first results for the offline problems; even NP-hardness was not known before our work. At the heart of our algorithms is a novel hitting-set LP relaxation of the PageTW problem that overcomes the $\Omega(k)$ integrality gap of the natural LP for the problem. To the best of our knowledge, this is the first example of an LP-based algorithm for an online problem with delays/deadlines.
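To make the PageTW model above concrete, here is a small, purely illustrative sketch (it is not the paper's LP-based algorithm): each request is a (page, release, deadline) triple, we serve it at its deadline, and on a miss with a full cache we evict the page whose next pending deadline is furthest away. The request format, the `weight` map, and the Belady-style eviction rule are all assumptions made for this example.

```python
from collections import defaultdict

def serve_pagetw(requests, k, weight):
    """requests: list of (page, release, deadline); weight: page -> fetch cost.
    Serve each request at its deadline; on a miss with a full cache, evict the
    page whose next pending deadline is furthest away. Returns the fetch cost."""
    requests = sorted(requests, key=lambda r: r[2])
    future = defaultdict(list)              # pending deadlines per page
    for page, _, deadline in requests:
        future[page].append(deadline)

    cache, cost = set(), 0
    for page, _, deadline in requests:
        future[page].pop(0)                 # this request is now being handled
        if page in cache:
            continue                        # already resident: served for free
        if len(cache) >= k:
            victim = max(cache, key=lambda p: future[p][0] if future[p] else float("inf"))
            cache.remove(victim)
        cache.add(page)
        cost += weight[page]
    return cost

# Tiny instance: 3 pages, cache of size 2, weighted fetch costs.
reqs = [("a", 0, 2), ("b", 0, 3), ("c", 1, 4), ("a", 3, 6)]
print(serve_pagetw(reqs, k=2, weight={"a": 1, "b": 5, "c": 2}))   # 8
```

Greedy eviction rules like this are only a baseline; the competitive algorithms described in the abstract are instead built on the hitting-set LP relaxation.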


Related research

We study an online hypergraph matching problem with delays, motivated by ridesharing applications. In this model, users enter a marketplace sequentially, and are willing to wait up to $d$ timesteps to be matched, after which they will leave the system in favor of an outside option. A platform can match groups of up to $k$ users together, indicating that they will share a ride. Each group of users yields a match value depending on how compatible they are with one another. As an example, in ridesharing, $k$ is the capacity of the service vehicles, and $d$ is the amount of time a user is willing to wait for a driver to be matched to them. We present results for both the utility maximization and cost minimization variants of the problem. In the utility maximization setting, the optimal competitive ratio is $\frac{1}{d}$ whenever $k \geq 3$, and is achievable in polynomial time for any fixed $k$. In the cost minimization variant, when $k = 2$, the optimal competitive ratio for deterministic algorithms is $\frac{3}{2}$ and is achieved by a polynomial-time thresholding algorithm. When $k > 2$, we show that a polynomial-time randomized batching algorithm is $(2 - \frac{1}{d}) \log k$-competitive, and it is NP-hard to achieve a competitive ratio better than $\log k - O(\log \log k)$.
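The batching idea mentioned above can be pictured with the following toy sketch, which pools arrivals and greedily forms groups of size at most $k$ by match value at the end of every window of $d$ steps. It is only meant to make the model concrete: the `value` function, the window-aligned departures, and the greedy selection are assumptions for illustration, and no claim is made about the competitive ratios stated in the abstract.

```python
from itertools import combinations

def batch_match(arrivals, d, k, value):
    """arrivals: list of (user, arrival_time) pairs with integer times.
    value: function mapping a tuple of users to a nonnegative match value.
    At the end of every window of d steps, greedily match the pooled users
    into groups of size 2..k; users left over at that point depart."""
    if not arrivals:
        return 0.0
    by_time = {}
    for user, t in arrivals:
        by_time.setdefault(t, []).append(user)
    total, pool = 0.0, []
    for t in range(max(by_time) + 1):
        pool += by_time.get(t, [])
        if (t + 1) % d:              # only match at the end of a batch window
            continue
        while len(pool) >= 2:
            groups = [g for r in range(2, k + 1) for g in combinations(pool, r)]
            best = max(groups, key=value)
            if value(best) <= 0:
                break                # no group worth forming
            total += value(best)
            pool = [u for u in pool if u not in best]
        pool = []                    # unmatched users leave with their batch
    return total

# Example: pairs only (k = 2), every pair worth 1.0, two-step windows.
print(batch_match([("u1", 0), ("u2", 0), ("u3", 1)], d=2, k=2, value=lambda g: 1.0))
```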
We consider the classic problem of scheduling jobs with precedence constraints on identical machines to minimize makespan, in the presence of communication delays. In this setting, denoted by $\mathsf{P} \mid \mathsf{prec}, c \mid C_{\mathsf{max}}$, if two dependent jobs are scheduled on different machines, then at least $c$ units of time must pass between their executions. Despite its relevance to many applications, this model remains one of the most poorly understood in scheduling theory. Even for a special case where an unlimited number of machines is available, the best known approximation ratio is $2/3 \cdot (c+1)$, whereas Graham's greedy list scheduling algorithm already gives a $(c+1)$-approximation in that setting. An outstanding open problem in the top-10 list by Schuurman and Woeginger and its recent update by Bansal asks whether there exists a constant-factor approximation algorithm. In this work we give a polynomial-time $O(\log c \cdot \log m)$-approximation algorithm for this problem, where $m$ is the number of machines and $c$ is the communication delay. Our approach is based on a Sherali-Adams lift of a linear programming relaxation and a randomized clustering of the semimetric space induced by this lift.
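For contrast with the list-scheduling discussion above, the toy sketch below schedules a DAG of unit-length jobs with every job on its own machine, so each precedence edge pays the full communication delay $c$. It only illustrates how the delay enters the completion-time recurrence; it is neither Graham's rule nor the paper's Sherali-Adams-based algorithm, and the unit processing times are an assumption.

```python
def naive_schedule(preds, c, p=1):
    """preds: dict job -> list of predecessor jobs (every job is a key; the
    graph must be a DAG). Each job runs on its own machine for p time units,
    so every precedence edge crosses machines and pays the delay c."""
    completion, remaining = {}, set(preds)
    while remaining:
        # schedule every job whose predecessors are all already scheduled
        ready = [j for j in remaining if all(q in completion for q in preds[j])]
        for job in ready:
            start = max((completion[q] + c for q in preds[job]), default=0)
            completion[job] = start + p
            remaining.remove(job)
    return completion, max(completion.values())

# Example: a diamond a -> {b, c} -> d with communication delay c = 2.
comp, makespan = naive_schedule({"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}, c=2)
print(makespan)   # 7, versus 4 if everything ran back-to-back on one machine
```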
Caches are a fundamental component of latency-sensitive computer systems. Recent work of [ASWB20] has initiated the study of delayed hits: a phenomenon in caches that occurs when the latency between the cache and backing store is much larger than the time between new requests. We present two results for the delayed hits caching model. (1) Competitive ratio lower bound. We prove that the competitive ratio of the algorithm in [ASWB20], and more generally of any deterministic online algorithm for delayed hits, is at least $\Omega(kZ)$, where $k$ is the cache size and $Z$ is the delay parameter. (2) Antimonotonicity of the delayed hits latency. Antimonotonicity is a naturally desirable property of cache latency: having a cache hit instead of a cache miss should result in lower overall latency. We prove that the latency of the delayed hits model is not antimonotone by exhibiting a scenario where having a cache hit instead of a miss results in an increase in overall latency. We additionally present a modification of the delayed hits model that makes the latency antimonotone.
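The following small simulator makes the delayed-hits cost model above concrete under assumed conventions: one request per timestep, a miss takes $Z$ steps to fetch, requests to a page whose fetch is already in flight wait for that same fetch, and eviction is plain LRU. It does not reproduce the algorithm analyzed in [ASWB20].

```python
from collections import OrderedDict

def delayed_hits_latency(trace, k, Z):
    """trace: one requested page per timestep. Returns total latency under the
    assumed delayed-hits conventions described above, with LRU eviction."""
    cache = OrderedDict()          # page -> None, ordered by recency (LRU)
    in_flight = {}                 # page -> timestep its fetch completes
    total = 0
    for t, page in enumerate(trace):
        # install any fetches that have completed by now
        for p, done in list(in_flight.items()):
            if done <= t:
                del in_flight[p]
                if len(cache) >= k:
                    cache.popitem(last=False)   # evict least recently used
                cache[p] = None
        if page in cache:
            cache.move_to_end(page)             # true hit: zero latency
        elif page in in_flight:
            total += in_flight[page] - t        # delayed hit: wait for fetch
        else:
            in_flight[page] = t + Z             # miss: start a fetch
            total += Z
    return total

# Example: cache of size 1, delay Z = 4, repeated requests to the same page.
print(delayed_hits_latency(["a", "a", "a", "b", "a"], k=1, Z=4))   # total latency: 13
```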
Darya Melnyk, Yuyi Wang, 2021
In this paper, we study $k$-Way Min-cost Perfect Matching with Delays - the $k$-MPMD problem. This problem considers a metric space with $n$ nodes. Requests arrive at these nodes in an online fashion. The task is to match these requests into sets of exactly $k$, such that the space and time cost of all matched requests are minimized. The notion of the space cost requires a definition of an underlying metric space that gives distances of subsets of $k$ elements. For $k > 2$, the task of finding a suitable metric space is at the core of our problem: We show that for some known generalizations to $k = 3$ points, such as the $2$-metric and the $D$-metric, there exists no competitive randomized algorithm for the $3$-MPMD problem. The $G$-metrics are defined for $3$ points and allow for a competitive algorithm for the $3$-MPMD problem. For $k > 3$ points, there exist two generalizations of the $G$-metrics known as $n$- and $K$-metrics. We show that neither the $n$-metrics nor the $K$-metrics can be used for the $k$-MPMD problem. On the positive side, we introduce the $H$-metrics, the first metrics to allow for a solution of the $k$-MPMD problem for all $k$. In order to devise an online algorithm for the $k$-MPMD problem on the $H$-metrics, we embed the $H$-metrics into trees with an $O(\log n)$ distortion. Based on this embedding result, we extend the algorithm proposed by Azar et al. (2017) and achieve a competitive ratio of $O(\log n)$ for the $k$-MPMD problem.
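The $G$-, $n$-, $K$-, and $H$-metrics are specific constructions that are not reproduced here; the sketch below only makes the $k$-MPMD objective concrete under a simple assumed surrogate, in which the space cost of a matched $k$-set is the sum of its pairwise distances and the time cost is each request's waiting time until its group is matched.

```python
from itertools import combinations

def kmpmd_cost(groups, dist, match_time):
    """groups: list of k-tuples of requests, each request = (node, arrival).
    dist: pairwise distance on nodes (the surrogate space cost of a group is
    the sum over its pairs). match_time: dict group index -> matching time."""
    total = 0.0
    for i, group in enumerate(groups):
        total += sum(dist(a[0], b[0]) for a, b in combinations(group, 2))
        total += sum(match_time[i] - arrival for _, arrival in group)
    return total

# Example: k = 3, requests on the real line, one group matched at time 5.
def line(u, v): return abs(u - v)
print(kmpmd_cost([((0, 1), (2, 3), (4, 5))], line, {0: 5}))   # 8 + 6 = 14.0
```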
We initiate the study of a natural and practically relevant new variant of online caching where the to-be-cached items can have dependencies. We assume that the universe is a tree T and items are tree nodes; we require that if a node v is cached then the whole subtree T(v) rooted at v is cached as well. This theoretical problem finds an immediate application in the context of forwarding table optimization in IP routing and software-defined networks. We present an elegant online deterministic algorithm TC for this problem, and rigorously prove that its competitive ratio is O(height(T) * k_ALG/(k_ALG-k_OPT+1)), where k_ALG and k_OPT denote the cache sizes of an online and the optimal offline algorithm, respectively. The result is optimal up to a factor of O(height(T)).
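The subtree rule above is easy to make concrete: the helper below (not the TC algorithm itself) computes the closure of a requested set of nodes, i.e., the smallest cache that contains them while satisfying the constraint that caching a node v caches all of T(v).

```python
def subtree(children, v):
    """All nodes of T(v) in the tree given by children: node -> list of kids."""
    stack, seen = [v], set()
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(children.get(u, []))
    return seen

def closure(children, cached):
    """Smallest valid cache containing `cached` under the subtree rule."""
    out = set()
    for v in cached:
        out |= subtree(children, v)
    return out

# Example: caching the root of this 5-node tree occupies all 5 cache slots.
tree = {"r": ["a", "b"], "a": ["x", "y"]}
print(sorted(closure(tree, {"r"})))   # ['a', 'b', 'r', 'x', 'y']
print(len(closure(tree, {"a"})))      # 3
```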