The design of online algorithms has tended to focus on algorithms with worst-case guarantees, e.g., bounds on the competitive ratio. However, it is well known that such algorithms are often overly pessimistic, performing sub-optimally on non-worst-case inputs. In this paper, we develop an approach to the data-driven design of online algorithms that maintain near-optimal worst-case guarantees while also learning to perform well on typical inputs. Our approach is to identify policy classes that admit global worst-case guarantees, and then to learn within these classes using historical data. We demonstrate the approach in the context of two classical problems, online knapsack and online set cover, proving competitive bounds for rich policy classes in each case. Additionally, we illustrate the practical implications via a case study on electric vehicle charging.
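To make the "learn within a guarantee-preserving policy class" idea concrete, here is a minimal Python sketch for an online knapsack setting. The one-parameter threshold family, the density bounds, and the grid search over historical traces are illustrative assumptions, not the paper's construction.

import math

# Hypothetical value-density bounds; in practice these come from the data.
L_DENSITY, U_DENSITY = 1.0, 10.0

def threshold(z, theta):
    # A hypothetical one-parameter threshold family over utilization z in [0, 1],
    # used only for illustration of a parameterized policy class.
    return (L_DENSITY / theta) * (theta * U_DENSITY / L_DENSITY) ** z

def run_policy(items, theta, capacity=1.0):
    # Simulate one online instance; items is a list of (value, weight) pairs.
    used, value = 0.0, 0.0
    for v, w in items:
        density = v / w
        if used < capacity and density >= threshold(used / capacity, theta):
            take = min(w, capacity - used)          # fractional admission
            used += take
            value += density * take
    return value

def fit_theta(historical_instances, grid=None):
    # Data-driven step: pick the parameter with the best historical performance.
    grid = grid or [1.5 + 0.1 * i for i in range(16)]
    return max(grid, key=lambda t: sum(run_policy(inst, t)
                                       for inst in historical_instances))

# Example: learn theta from two synthetic traces, then deploy it online.
history = [[(3.0, 0.4), (8.0, 0.5), (2.0, 0.3)],
           [(5.0, 0.6), (9.0, 0.2), (4.0, 0.4)]]
theta_star = fit_theta(history)
print(theta_star, run_policy([(6.0, 0.5), (7.0, 0.3)], theta_star))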
We introduce and study a general version of the fractional online knapsack problem with multiple knapsacks, heterogeneous constraints on which items can be assigned to which knapsacks, and rate-limiting constraints on the assignment of items to knapsacks. This problem generalizes variations of the knapsack problem and of the one-way trading problem that have previously been treated separately, and it also finds application in the real-time control of electric vehicle (EV) charging. We introduce a new algorithm that achieves a competitive ratio within an additive factor of one of the best achievable competitive ratios for the general problem and matches or improves upon the best-known competitive ratio for special cases in the knapsack and one-way trading literatures. Moreover, our analysis provides a novel approach to online algorithm design based on an instance-dependent primal-dual analysis that connects the identification of worst-case instances to the design of algorithms. Finally, we illustrate the proposed algorithm via trace-based experiments on EV charging.
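The generalized setting above (multiple knapsacks, eligibility constraints, and rate limits) can be made concrete with a small toy simulation. This is a hedged illustration of the problem model only; the threshold curve phi, the per-step rate limit, and the greedy admission order are placeholders and are not the paper's algorithm.

import math

def assign_online(items, capacities, rate_limit, phi):
    # items: list of (density, size, eligible_knapsacks) arriving online;
    # capacities: dict knapsack -> capacity; rate_limit caps how much of an
    # item may be assigned to any single knapsack in one step.
    used = {k: 0.0 for k in capacities}
    total_value = 0.0
    for density, size, eligible in items:
        remaining = size
        for k in eligible:
            if remaining <= 0:
                break
            # admit into knapsack k only while the density clears its threshold
            if density >= phi(used[k] / capacities[k]):
                take = min(remaining, rate_limit, capacities[k] - used[k])
                used[k] += take
                remaining -= take
                total_value += density * take
    return total_value

caps = {"A": 1.0, "B": 2.0}
phi = lambda z: 1.0 * math.exp(2.0 * z)     # hypothetical threshold curve
print(assign_online([(4.0, 0.8, ["A"]), (6.0, 1.5, ["A", "B"])], caps, 0.5, phi))

One natural EV-charging reading of this model: items are charging requests, knapsacks are station or feeder capacities, eligibility encodes which stations a vehicle can use, and the rate limit reflects a maximum charging rate.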
We consider online Min-Sum Set Cover (MSSC), a natural and intriguing generalization of the classical list update problem. In Online MSSC, the algorithm maintains a permutation on $n$ elements based on subsets $S_1, S_2, \ldots$ arriving online. The algorithm serves each set $S_t$ upon arrival, using its current permutation $\pi_{t}$, incurring an access cost equal to the position of the first element of $S_t$ in $\pi_{t}$. Then, the algorithm may update its permutation to $\pi_{t+1}$, incurring a moving cost equal to the Kendall tau distance of $\pi_{t}$ to $\pi_{t+1}$. The objective is to minimize the total access and moving cost of serving the entire sequence. We consider the $r$-uniform version, where each $S_t$ has cardinality $r$; list update is the special case where $r = 1$. We obtain tight bounds on the competitive ratio of deterministic online algorithms for MSSC against a static adversary that serves the entire sequence with a single permutation. First, we show a lower bound of $(r+1)(1-\frac{r}{n+1})$ on the competitive ratio. Then, we consider several natural generalizations of successful list update algorithms and show that they fail to achieve any interesting competitive guarantee. On the positive side, we obtain an $O(r)$-competitive deterministic algorithm using ideas from online learning and the multiplicative weight updates (MWU) algorithm. Furthermore, we consider efficient algorithms. We propose a memoryless online algorithm, called Move-All-Equally, which is inspired by the Double Coverage algorithm for the $k$-server problem. We show that its competitive ratio lies between $\Omega(r^2)$ and $2^{O(\sqrt{\log n \cdot \log r})}$, and conjecture that it is $f(r)$-competitive. We also compare Move-All-Equally against the dynamic optimal solution and obtain (almost) tight bounds by showing that it is $\Omega(r \sqrt{n})$- and $O(r^{3/2} \sqrt{n})$-competitive.
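The cost model in this abstract is easy to state in code. The sketch below simply evaluates a given sequence of permutations against a request sequence, using the access cost and the Kendall tau moving cost exactly as defined above; it is an evaluation helper, not any of the algorithms discussed.

def access_cost(perm, request):
    # Position (1-indexed) of the first element of the request set in perm.
    return min(perm.index(e) for e in request) + 1

def kendall_tau(perm_a, perm_b):
    # Number of element pairs ordered differently in the two permutations.
    pos_b = {e: i for i, e in enumerate(perm_b)}
    inversions = 0
    for i in range(len(perm_a)):
        for j in range(i + 1, len(perm_a)):
            if pos_b[perm_a[i]] > pos_b[perm_a[j]]:
                inversions += 1
    return inversions

def total_cost(permutations, requests):
    # permutations[t] serves requests[t]; permutations has one extra entry.
    cost = 0
    for t, S in enumerate(requests):
        cost += access_cost(permutations[t], S)
        cost += kendall_tau(permutations[t], permutations[t + 1])
    return cost

# Toy example with n = 4 elements and 2-uniform requests.
perms = [[1, 2, 3, 4], [3, 1, 2, 4], [3, 1, 2, 4]]
print(total_cost(perms, [{3, 4}, {1, 3}]))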
We consider the problem of computing the rank of an $m \times n$ matrix $A$ over a field. We present a randomized algorithm to find a set of $r = \mathrm{rank}(A)$ linearly independent columns in $\tilde{O}(|A| + r^{\omega})$ field operations, where $|A|$ denotes the number of nonzero entries in $A$ and $\omega < 2.38$ is the matrix multiplication exponent. Previously, the best known algorithm for finding a set of $r$ linearly independent columns was Gaussian elimination, with running time $O(mnr^{\omega-2})$. Our algorithm is faster when $r < \max(m,n)$, for instance when the matrix is rectangular. We also consider the problem of computing the rank of a matrix dynamically, supporting the operations of rank-one updates and additions and deletions of rows and columns. We present an algorithm that updates the rank in $\tilde{O}(mn)$ field operations. We show that these algorithms can be used to obtain faster algorithms for various problems in numerical linear algebra, combinatorial optimization, and dynamic data structures.
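For context, here is a small sketch of the Gaussian-elimination baseline mentioned above: scan the columns and keep those that are independent of the ones already kept. Working over GF(p) for a small prime p is an illustrative choice, and this is the classical reference point rather than the paper's randomized algorithm.

def independent_columns(A, p=7):
    # Return indices of a maximal set of linearly independent columns of A
    # over GF(p), by reducing each incoming column against the pivots found
    # so far (column-by-column Gaussian elimination).
    m, n = len(A), len(A[0])
    pivots = []                      # list of (pivot_row, reduced column)
    kept = []
    for j in range(n):
        v = [A[i][j] % p for i in range(m)]
        for r, u in pivots:          # zero out previously used pivot rows
            if v[r]:
                c = v[r]
                v = [(vi - c * ui) % p for vi, ui in zip(v, u)]
        r = next((i for i, vi in enumerate(v) if vi), None)
        if r is not None:            # the column adds a new dimension
            inv = pow(v[r], p - 2, p)      # normalize the pivot entry to 1
            v = [(vi * inv) % p for vi in v]
            pivots.append((r, v))
            kept.append(j)
    return kept                      # len(kept) equals rank(A) over GF(p)

print(independent_columns([[1, 2, 3], [2, 4, 1]]))   # -> [0, 2]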
We consider the distributed version of the Multiple Knapsack Problem (MKP), where $m$ items are to be distributed amongst $n$ processors, each with a knapsack. We propose several distributed approximation algorithms with a trade-off between time and message complexities. The algorithms are based on the greedy approach of assigning the best item to the knapsack with the largest capacity. These algorithms obtain a solution with value at least $\frac{1}{n+1}$ times the optimum, using either $\mathcal{O}\left(m \log n\right)$ time and $\mathcal{O}\left(m n\right)$ messages, or $\mathcal{O}\left(m\right)$ time and $\mathcal{O}\left(m n^{2}\right)$ messages.
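As a hedged illustration of the greedy rule described above, here is a sequential (non-distributed) sketch: repeatedly offer the best remaining item to the knapsack with the largest remaining capacity. Ranking items by profit density and discarding an item that does not fit are illustrative choices; the paper's contribution lies in the distributed implementation and its time/message trade-offs, which this toy version does not capture.

import heapq

def greedy_mkp(items, capacities):
    # items: list of (profit, weight); capacities: list of knapsack sizes.
    # Max-heaps are simulated by negating the keys.
    item_heap = [(-p / w, p, w, i) for i, (p, w) in enumerate(items)]
    cap_heap = [(-c, k) for k, c in enumerate(capacities)]
    heapq.heapify(item_heap)
    heapq.heapify(cap_heap)
    assignment, total = {}, 0.0
    while item_heap and cap_heap:
        _, p, w, i = heapq.heappop(item_heap)     # best remaining item
        neg_c, k = heapq.heappop(cap_heap)        # largest remaining capacity
        c = -neg_c
        if w <= c:                                # item fits: assign it
            assignment[i] = k
            total += p
            c -= w
        if c > 0:
            heapq.heappush(cap_heap, (-c, k))
    return assignment, total

print(greedy_mkp([(10, 5), (6, 4), (3, 1)], [6, 4]))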
We introduce and study a discrete multi-period extension of the classical knapsack problem, dubbed generalized incremental knapsack. In this setting, we are given a set of $n$ items, each associated with a non-negative weight, and $T$ time periods with non-decreasing capacities $W_1 \leq \dots \leq W_T$. When item $i$ is inserted at time $t$, we gain a profit of $p_{it}$; however, this item remains in the knapsack for all subsequent periods. The goal is to decide if and when to insert each item, subject to the time-dependent capacity constraints, with the objective of maximizing our total profit. Interestingly, this setting subsumes as special cases a number of recently-studied incremental knapsack problems, all known to be strongly NP-hard. Our first contribution comes in the form of a polynomial-time $(\frac{1}{2}-\epsilon)$-approximation for the generalized incremental knapsack problem. This result is based on a reformulation as a single-machine sequencing problem, which is addressed by blending dynamic programming techniques and the classical Shmoys-Tardos algorithm for the generalized assignment problem. Combined with further enumeration-based self-reinforcing ideas and newly-revealed structural properties of nearly-optimal solutions, we turn our basic algorithm into a quasi-polynomial time approximation scheme (QPTAS). Hence, under widely believed complexity assumptions, this finding rules out the possibility that generalized incremental knapsack is APX-hard.
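To make the model concrete, the following brute-force sketch computes the optimum of a tiny generalized incremental knapsack instance directly from the definition above: each item is inserted at some period (or never), remains in the knapsack thereafter, must respect every period's capacity, and contributes the profit of its insertion period. It is only a sanity check of the problem statement, not one of the paper's algorithms.

from itertools import product

def brute_force_opt(weights, profits, capacities):
    # profits[i][t]: profit of inserting item i at period t (0-indexed);
    # capacities[t]: the non-decreasing capacity W_{t+1}.
    n, T = len(weights), len(capacities)
    best = 0.0
    for choice in product(range(T + 1), repeat=n):   # choice T means "never insert"
        # Feasibility: at every period, the items inserted so far must fit.
        if all(sum(weights[i] for i in range(n) if choice[i] <= t)
               <= capacities[t] for t in range(T)):
            best = max(best, sum(profits[i][choice[i]]
                                 for i in range(n) if choice[i] < T))
    return best

# Two items and two periods with capacities 3 <= 5.
print(brute_force_opt([2, 3], [[4, 1], [5, 5]], [3, 5]))   # -> 9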