
Synergy via Redundancy: Adaptive Replication Strategies and Fundamental Limits

Added by Gauri Joshi
Publication date: 2020
Language: English





The maximum possible throughput (or rate of job completion) of a multi-server system is typically the sum of the service rates of the individual servers. Recent work shows that launching multiple replicas of a job and canceling them as soon as one copy finishes can boost the throughput, especially when the service time distribution has high variability. Redundancy can therefore create synergy among servers, making their overall throughput greater than the sum of their individual service rates. This work seeks the fundamental limit of the throughput boost achievable by job replication, and the optimal replication policy that achieves it. While most previous works consider upfront replication policies, we expand the set of possible policies to include delayed launch of replicas. The search for the optimal adaptive replication policy can be formulated as a Markov Decision Process, based on which we propose two myopic replication policies, MaxRate and AdaRep, that adaptively replicate jobs. To quantify the optimality gap of these and other policies, we derive upper bounds on the service capacity, which provide fundamental limits on the throughput of queueing systems with redundancy.
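As a rough illustration of the synergy effect described above, the following Python sketch compares the saturated throughput of two servers working independently against full replication with cancel-on-completion, for a high-variability (hyperexponential) service time. The distribution parameters are illustrative assumptions, and the sketch does not implement the paper's MaxRate or AdaRep policies.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyperexp(size, p=0.9, mean_fast=0.1, mean_slow=10.0):
    """High-variability service times: fast with probability p, very slow otherwise."""
    fast = rng.random(size) < p
    return np.where(fast, rng.exponential(mean_fast, size), rng.exponential(mean_slow, size))

n, k = 200_000, 2                        # Monte Carlo samples, number of servers
s = hyperexp(n)                          # service time of a single copy
s_min = hyperexp((n, k)).min(axis=1)     # all k servers run copies; job ends when the first finishes

rate_no_rep = k / s.mean()               # each server serves its own jobs: sum of individual rates
rate_full_rep = 1 / s_min.mean()         # full replication with cancel-on-completion

print(f"sum of individual service rates : {rate_no_rep:.2f} jobs/unit time")
print(f"throughput with full replication: {rate_full_rep:.2f} jobs/unit time")
```

With these illustrative parameters the replicated system completes jobs at a much higher rate than the sum of the individual service rates, because the minimum of two hyperexponential copies is far smaller on average than a single copy.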

Related research


We investigate the limits of thermometry using quantum probes at thermal equilibrium within the Bayesian approach. We consider the possibility of engineering interactions between the probes to enhance their sensitivity, as well as feedback during the measurement process, i.e., adaptive protocols. On the one hand, we obtain an ultimate bound on thermometry precision in the Bayesian setting, valid for arbitrary interactions and measurement schemes, which lower-bounds the error with a quadratic (Heisenberg-like) scaling in the number of probes, and we develop a simple adaptive strategy that can saturate this limit. On the other hand, we derive a no-go theorem showing that non-adaptive protocols cannot achieve better than linear (shot-noise-like) scaling, even with unlimited control over the probes, namely access to arbitrary many-body interactions.
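Schematically, the two regimes described above can be written as follows; the constants and the exact Bayesian error functional are placeholders rather than the paper's expressions.

```latex
\begin{align}
  \text{adaptive, interacting probes:} \quad
      \epsilon_{\mathrm{Bayes}}(N) &\gtrsim \frac{c}{N^{2}}
      && \text{(Heisenberg-like lower bound, saturable)} \\
  \text{non-adaptive protocols:} \quad
      \epsilon_{\mathrm{Bayes}}(N) &\gtrsim \frac{c'}{N}
      && \text{(shot-noise-like no-go bound)}
\end{align}
```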
Gauri Joshi, Emina Soljanin, 2015
In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective way to combat the variability in the response times of individual servers and reduce latency. But adding redundancy may result in a higher cost of computing resources, as well as an increase in queueing delay due to the higher traffic load. This work helps understand when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare redundancy strategies in terms of the number of redundant tasks and the times at which they are issued and canceled. We obtain the insight that the log-concavity of the task service time distribution creates a dichotomy in when adding redundancy helps. If the service time distribution is log-convex (i.e., the log of the tail probability is convex), then adding maximum redundancy reduces both latency and cost. If it is log-concave (i.e., the log of the tail probability is concave), then using less redundancy and canceling redundant tasks early is more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues.
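A minimal Monte Carlo sketch of the log-convex/log-concave dichotomy: it estimates the per-task server-time cost and the service latency with and without one replica (stragglers cancelled when the first copy finishes), for an assumed hyperexponential (log-convex tail) and shifted-exponential (log-concave tail) distribution. The parameters are illustrative, and early-cancellation policies are not modelled.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

def cost_and_latency(sample, r):
    """Run r copies of each task and cancel stragglers when the first copy finishes.
    Returns (expected server-time spent per task, expected time to first completion)."""
    t_first = sample((N, r)).min(axis=1)
    return r * t_first.mean(), t_first.mean()

# Log-convex tail: hyperexponential (mostly fast, occasionally very slow).
hyper = lambda size: np.where(rng.random(size) < 0.9,
                              rng.exponential(0.1, size),
                              rng.exponential(10.0, size))
# Log-concave tail: shifted exponential (fixed start-up delay plus exponential work).
shifted = lambda size: 1.0 + rng.exponential(1.0, size)

for name, dist in [("log-convex (hyperexp)", hyper), ("log-concave (shifted exp)", shifted)]:
    for r in (1, 2):
        cost, lat = cost_and_latency(dist, r)
        print(f"{name:25s} r={r}: cost/task = {cost:5.2f}, service latency = {lat:5.2f}")
```

For the log-convex example, replication reduces both the cost and the latency; for the log-concave example, it reduces latency but inflates the server-time cost, which is why less redundancy and early cancellation are preferable there.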
We consider the problem of decomposing the total mutual information conveyed by a pair of predictor random variables about a target random variable into redundant, unique and synergistic contributions. We focus on the relationship between redundant information and the more familiar information-theoretic notions of common information. Our main contribution is an impossibility result. We show that for independent predictor random variables, any common information based measure of redundancy cannot induce a nonnegative decomposition of the total mutual information. Interestingly, this entails that any reasonable measure of redundant information cannot be derived by optimization over a single random variable.
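For reference, the decomposition discussed above is of the Williams-Beer form; the notation below (R, U_1, U_2, S for the redundant, unique, and synergistic atoms) is assumed for illustration and is not taken from the paper.

```latex
\begin{align}
  I(Y; X_1, X_2) &= R + U_1 + U_2 + S, \\
  I(Y; X_1)      &= R + U_1, \qquad I(Y; X_2) = R + U_2 .
\end{align}
```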
In a system of three stochastic variables, the Partial Information Decomposition (PID) of Williams and Beer dissects the information that two variables (sources) carry about a third variable (target) into nonnegative information atoms that describe redundant, unique, and synergistic modes of dependency among the variables. However, the classification of the three variables into two sources and one target limits the dependency modes that can be quantitatively resolved, and does not naturally suit all systems. Here, we extend the PID to describe trivariate modes of dependency in full generality, without introducing additional decomposition axioms or making assumptions about the target/source nature of the variables. By comparing different PID lattices of the same system, we unveil a finer PID structure made of seven nonnegative information subatoms that are invariant to different target/source classifications and that suffice to construct any PID lattice. This finer structure naturally splits redundant information into two nonnegative components: the source redundancy, which arises from the pairwise correlations between the source variables, and the non-source redundancy, which does not, and which relates to the synergistic information the sources carry about the target. The invariant structure also suffices to construct the system's entropy, hence it completely characterizes all the interdependencies in the system.
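Schematically, the lattices being compared are the three classical PID decompositions obtained from the three possible target choices in a trivariate system (X_1, X_2, X_3); the subscripted notation below is assumed for illustration.

```latex
\begin{align}
  I(X_3; X_1, X_2) &= R_{12\to 3} + U_{1\to 3} + U_{2\to 3} + S_{12\to 3}, \\
  I(X_2; X_1, X_3) &= R_{13\to 2} + U_{1\to 2} + U_{3\to 2} + S_{13\to 2}, \\
  I(X_1; X_2, X_3) &= R_{23\to 1} + U_{2\to 1} + U_{3\to 1} + S_{23\to 1}.
\end{align}
```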
In phase retrieval we want to recover an unknown signal $\boldsymbol{x} \in \mathbb{C}^d$ from $n$ quadratic measurements of the form $y_i = |\langle \boldsymbol{a}_i, \boldsymbol{x} \rangle|^2 + w_i$, where $\boldsymbol{a}_i \in \mathbb{C}^d$ are known sensing vectors and $w_i$ is measurement noise. We ask the following weak recovery question: what is the minimum number of measurements $n$ needed to produce an estimator $\hat{\boldsymbol{x}}(\boldsymbol{y})$ that is positively correlated with the signal $\boldsymbol{x}$? We consider the case of Gaussian vectors $\boldsymbol{a}_i$. We prove that, in the high-dimensional limit, a sharp phase transition takes place, and we locate the threshold in the regime of vanishingly small noise. For $n \le d - o(d)$ no estimator can do significantly better than random and achieve a strictly positive correlation. For $n \ge d + o(d)$ a simple spectral estimator achieves a positive correlation. Surprisingly, numerical simulations with the same spectral estimator demonstrate promising performance with realistic sensing matrices. Spectral methods are used to initialize non-convex optimization algorithms in phase retrieval, and our approach can boost the performance in this setting as well. Our impossibility result is based on classical information-theory arguments. The spectral algorithm computes the leading eigenvector of a weighted empirical covariance matrix. We obtain a sharp characterization of the spectral properties of this random matrix using tools from free probability, generalizing a recent result by Lu and Li. Both the upper and lower bounds generalize beyond phase retrieval to measurements $y_i$ produced according to a generalized linear model. As a byproduct of our analysis, we compare the threshold of the proposed spectral method with that of a message passing algorithm.
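A minimal numpy sketch of a spectral estimator of this kind: it builds a weighted empirical covariance matrix from the measurements and returns its leading eigenvector. The simple weighting T(y) = y and the problem sizes are illustrative assumptions, not the preprocessing function analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 200, 1200                                   # signal dimension, number of measurements (n > d)

x = rng.normal(size=d) + 1j * rng.normal(size=d)
x /= np.linalg.norm(x)                             # unknown unit-norm signal
A = (rng.normal(size=(n, d)) + 1j * rng.normal(size=(n, d))) / np.sqrt(2)   # Gaussian sensing vectors
y = np.abs(A @ x) ** 2                             # noiseless quadratic measurements

# Weighted empirical covariance D = (1/n) * sum_i T(y_i) a_i a_i^*; T(y) = y is a simple,
# illustrative weighting, not the optimal preprocessing from the paper.
D = (A.conj().T * y) @ A / n
eigvals, eigvecs = np.linalg.eigh(D)
x_hat = eigvecs[:, -1]                             # leading eigenvector as the spectral estimate

overlap = np.abs(np.vdot(x_hat, x)) / (np.linalg.norm(x_hat) * np.linalg.norm(x))
print(f"correlation |<x_hat, x>| = {overlap:.3f}")  # strictly positive well above the transition
```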