With similarity-based content delivery, a request for a content item can be satisfied by delivering a related item at a dissimilarity cost. This letter addresses the joint optimization of caching and similarity-based delivery decisions across a network so as to minimize the weighted sum of average delay and dissimilarity cost. A convergent alternating gradient descent-ascent algorithm is first introduced for an offline scenario with prior knowledge of the request rates, and then extended to an online setting. Numerical results validate the advantages of the approach over standard per-cache solutions.
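As a rough illustration of the kind of primal-dual update an alternating gradient descent-ascent scheme performs, the sketch below runs gradient descent on toy caching probabilities and gradient ascent on a capacity multiplier; the delay vector, step sizes, and capacity are made-up stand-ins and do not reproduce the letter's actual objective.

```python
import numpy as np

# Toy Lagrangian: minimize the expected delay of uncached items over caching
# probabilities x, maximize over the multiplier lam of the constraint sum(x) <= cap.
rng = np.random.default_rng(0)
n_items, cap, eta = 20, 5.0, 0.05
delay = rng.random(n_items)                 # illustrative per-item delay if not cached

def grads(x, lam):
    # L(x, lam) = sum_i (1 - x_i) * delay_i + lam * (sum_i x_i - cap)
    return -delay + lam, x.sum() - cap

x, lam = np.full(n_items, 0.5), 0.0
for _ in range(2000):
    gx, gl = grads(x, lam)
    x = np.clip(x - eta * gx, 0.0, 1.0)     # descent step on the caching variables
    lam = max(lam + eta * gl, 0.0)          # ascent step on the dual variable

print("cached mass:", round(x.sum(), 2), "multiplier:", round(lam, 3))
```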
We consider the online stochastic matching problem proposed by Feldman et al. [FMMM09] as a model of display ad allocation. We are given a bipartite graph; one side of the graph corresponds to a fixed set of bins and the other side represents the set of possible ball types. At each time step, a ball is sampled independently from the given distribution and must be matched upon arrival to an empty bin. The goal is to maximize the number of allocations. We present an online algorithm for this problem with a competitive ratio of 0.702. Before our result, algorithms with a competitive ratio better than $1-1/e$ were known only under the assumption that the expected number of arriving balls of each type is integral. A key idea of the algorithm is to collect statistics about the decisions of the optimum offline solution using Monte Carlo sampling and to use those statistics to guide the decisions of the online algorithm. We also show that our algorithm achieves a competitive ratio of 0.705 when the rates are integral. On the hardness side, we prove that no online algorithm can achieve a competitive ratio better than 0.823 under the known distribution model (and hence under the permutation model). This improves upon the 5/6 hardness result proved by Goel and Mehta [GM08] for the permutation model.
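To make the Monte Carlo idea concrete, here is a small self-contained sketch under simplified assumptions: sample arrival sequences, solve each offline instance exactly with an augmenting-path matcher, tally which bin the offline optimum uses for each ball type, and let a greedy online rule follow those tallies. The instance, horizon, and tie-breaking are illustrative, and none of the paper's competitive-ratio machinery is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_types, n_bins, horizon = 6, 8, 8
adj = (rng.random((n_types, n_bins)) < 0.4)   # adj[t, b]: type t compatible with bin b
rates = rng.random(n_types); rates /= rates.sum()

def max_matching(rows_adj, n_cols):
    """Kuhn's augmenting-path algorithm: rows_adj[i] lists the columns row i may use."""
    match_col = [-1] * n_cols                 # match_col[c] = row matched to column c
    def try_row(r, seen):
        for c in rows_adj[r]:
            if not seen[c]:
                seen[c] = True
                if match_col[c] == -1 or try_row(match_col[c], seen):
                    match_col[c] = r
                    return True
        return False
    for r in range(len(rows_adj)):
        try_row(r, [False] * n_cols)
    return match_col

# Monte Carlo phase: how often does an offline optimum assign type t to bin b?
stats = np.zeros((n_types, n_bins))
for _ in range(2000):
    seq = rng.choice(n_types, size=horizon, p=rates)
    rows_adj = [np.flatnonzero(adj[t]).tolist() for t in seq]
    match_col = max_matching(rows_adj, n_bins)
    for b, r in enumerate(match_col):
        if r != -1:
            stats[seq[r], b] += 1

def run_online(seq):
    """Greedy online rule guided by the offline statistics."""
    free, matched = np.ones(n_bins, dtype=bool), 0
    for t in seq:
        cand = np.flatnonzero(free & adj[t])
        if len(cand):
            b = cand[np.argmax(stats[t, cand])]   # bin most used by the offline optima
            free[b] = False
            matched += 1
    return matched

print(run_online(rng.choice(n_types, size=horizon, p=rates)), "balls matched")
```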
This paper describes an online deep learning algorithm for adaptive modulation and coding in 5G Massive MIMO. The algorithm is based on a fully connected neural network, which is initially trained on the output of a traditional algorithm and then incrementally retrained using service feedback on its own output. We show the advantage of our solution over the state-of-the-art Q-Learning approach, and provide system-level simulation results to support this conclusion in various scenarios with different channel characteristics and user speeds. Compared with traditional OLLA, our algorithm improves user throughput by 10% to 20% in the full-buffer case.
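A minimal sketch of the two-phase training loop described above, written in PyTorch: a small fully connected network is first trained to imitate a conventional MCS-selection rule and is then updated incrementally from service feedback. The feature dimension, MCS count, and the notion of a per-MCS reward are hypothetical placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

N_FEATURES, N_MCS = 8, 28   # hypothetical CSI feature size and number of MCS levels
net = nn.Sequential(nn.Linear(N_FEATURES, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, N_MCS))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def pretrain(features, legacy_labels, epochs=10):
    """Phase 1: imitate the traditional algorithm.
    `features` is a float tensor (N, N_FEATURES); `legacy_labels` is a long tensor (N,)."""
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(net(features), legacy_labels)
        loss.backward()
        opt.step()

def online_step(feature_vec, reward_per_mcs):
    """Phase 2: incremental retraining from service feedback.
    `reward_per_mcs` is a hypothetical per-MCS target (e.g. realized throughput);
    here the network is simply nudged toward the empirically best MCS."""
    target = torch.tensor([int(reward_per_mcs.argmax())])
    opt.zero_grad()
    loss = loss_fn(net(feature_vec.unsqueeze(0)), target)
    loss.backward()
    opt.step()
```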
We initiate the study of a natural and practically relevant new variant of online caching where the to-be-cached items can have dependencies. We assume that the universe is a tree T and items are tree nodes; we require that if a node v is cached then the whole subtree T(v) rooted at v is cached as well. This theoretical problem finds an immediate application in the context of forwarding table optimization in IP routing and software-defined networks. We present an elegant online deterministic algorithm TC for this problem, and rigorously prove that its competitive ratio is O(height(T) * k_ALG/(k_ALG-k_OPT+1)), where k_ALG and k_OPT denote the cache sizes of an online and the optimal offline algorithm, respectively. The result is optimal up to a factor of O(height(T)).
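The following toy class only illustrates the subtree-closure constraint the problem imposes (it is not the TC algorithm itself): caching a node pulls in its whole subtree, and evicting a node forces the eviction of every cached ancestor, since that ancestor's subtree would otherwise no longer be fully cached.

```python
class TreeCache:
    """Maintains a cached node set that is closed under 'v cached => T(v) cached'."""

    def __init__(self, children, capacity):
        self.children = children                      # dict: node -> list of children
        self.parent = {c: p for p, cs in children.items() for c in cs}
        self.capacity = capacity
        self.cached = set()

    def subtree(self, v):
        stack, nodes = [v], []
        while stack:
            u = stack.pop()
            nodes.append(u)
            stack.extend(self.children.get(u, []))
        return nodes

    def cache(self, v):
        """Cache v together with its whole subtree T(v), if the result fits."""
        need = [u for u in self.subtree(v) if u not in self.cached]
        if len(self.cached) + len(need) > self.capacity:
            return False
        self.cached.update(need)
        return True

    def evict(self, v):
        """Evict v and every cached ancestor of v; descendants of v may stay cached,
        since the constraint only binds nodes that remain in the cache."""
        u = v
        while u is not None and u in self.cached:
            self.cached.discard(u)
            u = self.parent.get(u)

# Example: caching node 'b' pulls in its children 'd' and 'e' as well.
tc = TreeCache({'a': ['b', 'c'], 'b': ['d', 'e'], 'c': [], 'd': [], 'e': []}, capacity=4)
tc.cache('b')
assert tc.cached == {'b', 'd', 'e'}
```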
To enhance the quality and speed of data processing and to protect the privacy and security of the data, edge computing has been extensively applied to support data-intensive intelligent processing services at the edge. Among these data-intensive services, ensemble learning-based services can naturally leverage the distributed computation and storage resources at edge devices to achieve efficient data collection, processing, and analysis. Collaborative caching has been applied in edge computing to support services close to the data source, so that the limited resources at edge devices can sustain high-performance ensemble learning solutions. To this end, we propose an adaptive in-network collaborative caching scheme for ensemble learning at the edge. First, an efficient data representation structure is proposed to record cached data across different nodes. In addition, we design a collaboration scheme that helps edge nodes cache valuable data for local ensemble learning by scheduling local caching according to a summarization of data representations from different edge nodes. Our extensive simulations demonstrate the high performance of the proposed collaborative caching scheme, which significantly reduces both the learning latency and the transmission overhead.
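One plausible reading of the collaboration scheme, sketched under stated assumptions: each node summarizes the data classes it caches, the summaries are merged, and a node prefers to cache items that are valuable for its local ensemble learner but scarce across the network. The summary format and the scoring rule below are assumptions, not the paper's actual design.

```python
from collections import Counter

def summarize(local_cache):
    """Compact representation of a node's cache: counts per data class."""
    return Counter(item["cls"] for item in local_cache)

def merge(summaries):
    """Combine the per-node summaries into a network-wide view."""
    total = Counter()
    for s in summaries:
        total += s
    return total

def caching_score(item, global_summary, local_value):
    # Classes that are rare in the merged summary get a higher score.
    scarcity = 1.0 / (1 + global_summary[item["cls"]])
    return local_value(item) * scarcity

def update_cache(local_cache, candidates, global_summary, local_value, capacity):
    """Keep the `capacity` highest-scoring items among current and candidate data."""
    pool = list(local_cache) + list(candidates)
    pool.sort(key=lambda it: caching_score(it, global_summary, local_value), reverse=True)
    return pool[:capacity]
```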
In this work we analyze traces of mobility and co-location among a group of nearly 1000 closely interacting individuals. We attempt to reconstruct the Facebook friendship graph, the Facebook interaction network, and the call and SMS networks from longitudinal records of person-to-person offline proximity. We find subtle, yet observable behavioral differences between pairs of people who communicate using each of the different channels, and we show that the signal of friendship is strong enough to stand out from the noise of random and schedule-driven offline interactions between familiar strangers. Our study also provides an overview of methods for link inference based on offline behavior and proposes new features to improve the performance of the prediction task.
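In the spirit of the feature-based link inference the study describes, here is a small sketch that turns per-pair proximity logs into features and fits a classifier; the three features and the labeling are illustrative choices, not the paper's actual feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(events):
    """`events` is a list of (duration_sec, hour_of_day, location_id) for one pair."""
    total = sum(d for d, _, _ in events)
    off_hours = sum(d for d, h, _ in events if h < 8 or h >= 18) / max(total, 1)
    places = len({loc for _, _, loc in events})
    return [total, off_hours, places]

def train_link_classifier(pair_events, labels):
    """labels[i] = 1 if the pair has a reported friendship/communication tie, else 0."""
    X = np.array([pair_features(ev) for ev in pair_events])
    return LogisticRegression(max_iter=1000).fit(X, np.array(labels))
```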