Near-Optimal Bounds for Online Caching with Machine Learned Advice


Abstract

In the model of online caching with machine learned advice, introduced by Lykouris and Vassilvitskii, the goal is to solve the caching problem with an online algorithm that has access to next-arrival predictions: when each input element arrives, the algorithm is given a prediction of the next time when the element will reappear. The traditional model for online caching suffers from an $\Omega(\log k)$ competitive ratio lower bound (on a cache of size $k$). In contrast, the augmented model admits algorithms which beat this lower bound when the predictions have low error, and asymptotically match the lower bound when the predictions have high error, even if the algorithms are oblivious to the prediction error. In particular, Lykouris and Vassilvitskii showed that there is a prediction-augmented caching algorithm with a competitive ratio of $O(1+\min(\sqrt{\eta/\mathrm{OPT}}, \log k))$ when the overall $\ell_1$ prediction error is bounded by $\eta$, and $\mathrm{OPT}$ is the cost of the optimal offline algorithm. The dependence on $k$ in the competitive ratio is optimal, but the dependence on $\eta/\mathrm{OPT}$ may be far from optimal. In this work, we make progress towards closing this gap. Our contributions are twofold. First, we provide an improved algorithm with a competitive ratio of $O(1 + \min((\eta/\mathrm{OPT})/k, 1)\log k)$. Second, we provide a lower bound of $\Omega(\log \min((\eta/\mathrm{OPT})/(k \log k), k))$.
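To make the prediction model concrete, here is a minimal Python sketch (not from the paper; all names are hypothetical) of a cache that, on a miss, evicts the element whose predicted next arrival is farthest in the future, the natural prediction-based analogue of Belady's offline rule. The algorithms analyzed in the paper are more refined; this sketch only illustrates what "next-arrival predictions" means as an input.

```python
def predictive_cache_cost(requests, predictions, k):
    """Simulate a simple prediction-guided eviction policy on a cache of
    size k: on a miss with a full cache, evict the element whose predicted
    next arrival is farthest in the future. Returns the number of misses.

    requests[t]    -- element requested at time t
    predictions[t] -- predicted next arrival time of requests[t]
    """
    cache = {}  # element -> most recently supplied next-arrival prediction
    misses = 0
    for x, pred in zip(requests, predictions):
        if x in cache:
            cache[x] = pred  # hit: refresh the stored prediction
            continue
        misses += 1  # miss: x must be brought into the cache
        if len(cache) >= k:
            # Evict the element predicted to reappear farthest away.
            victim = max(cache, key=cache.get)
            del cache[victim]
        cache[x] = pred
    return misses


# Toy trace with cache size 2; a large value stands for "never again".
NEVER = 10**9
requests    = ["a", "b", "c", "a", "b"]
predictions = [3, 4, NEVER, NEVER, NEVER]  # perfect next-arrival predictions
print(predictive_cache_cost(requests, predictions, k=2))  # -> 4 misses
```

With perfect predictions this policy reproduces Belady's optimal eviction (4 misses on this trace, matching the offline optimum); the $\eta$ in the bounds above measures how far the supplied predictions deviate, in $\ell_1$ distance, from the true next-arrival times.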
