For iid $d$-dimensional observations $X^{(1)}, X^{(2)}, \ldots$ with independent Exponential$(1)$ coordinates, consider the boundary (relative to the closed positive orthant), or frontier, $F_n$ of the closed Pareto record-setting (RS) region \[ \mbox{RS}_n := \{0 \leq x \in {\mathbb R}^d : x \not\prec X^{(i)} \mbox{ for all $1 \leq i \leq n$}\} \] at time $n$, where $0 \leq x$ means that $0 \leq x_j$ for $1 \leq j \leq d$ and $x \prec y$ means that $x_j < y_j$ for $1 \leq j \leq d$. With $x_+ := \sum_{j = 1}^d x_j$, let \[ F_n^- := \min\{x_+ : x \in F_n\} \quad \mbox{and} \quad F_n^+ := \max\{x_+ : x \in F_n\}, \] and define the width of $F_n$ as \[ W_n := F_n^+ - F_n^-. \] We describe typical and almost sure behavior of the processes $F^+$, $F^-$, and $W$. In particular, we show that $F^+_n \sim \ln n \sim F^-_n$ almost surely and that $W_n / \ln \ln n$ converges in probability to $d - 1$; and for $d \geq 2$ we show that, almost surely, the set of limit points of the sequence $W_n / \ln \ln n$ is the interval $[d - 1, d]$. We also obtain modifications of our results that are important in connection with efficient simulation of Pareto records. Let $T_m$ denote the time that the $m$th record is set. We show that $F^+_{T_m} \sim (d!\, m)^{1/d} \sim F^-_{T_m}$ almost surely and that $W_{T_m} / \ln m$ converges in probability to $1 - d^{-1}$; and for $d \geq 2$ we show that, almost surely, the sequence $W_{T_m} / \ln m$ has $\liminf$ equal to $1 - d^{-1}$ and $\limsup$ equal to $1$.
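To make the quantities above concrete, the following minimal Monte Carlo sketch (not taken from the paper; all function names are ours) treats the case $d = 2$: it extracts the current Pareto records from Exponential$(1)$ samples, locates the minimum-sum point of the frontier at one of the staircase's inner corners, and compares $F_n^{\pm}$ with $\ln n$ and $W_n / \ln \ln n$ with $d - 1 = 1$.

import numpy as np

rng = np.random.default_rng(0)

def pareto_records(points):
    # Current Pareto records (maxima) of a 2-d sample, sorted by x descending.
    recs = []
    for p in sorted(map(tuple, points), reverse=True):   # x descending
        if not recs or p[1] > recs[-1][1]:                # y exceeds running max => current record
            recs.append(p)
    return recs

def frontier_extremes(points):
    # F_n^+ is attained at a record; F_n^- at a minimal point of RS_n,
    # i.e. an inner corner of the staircase (including the two axis corners).
    recs = pareto_records(points)                         # x descending, y ascending
    xs, ys = [r[0] for r in recs], [r[1] for r in recs]
    f_plus = max(x + y for x, y in recs)
    corners = [(xs[0], 0.0)] + [(xs[i + 1], ys[i]) for i in range(len(recs) - 1)] + [(0.0, ys[-1])]
    f_minus = min(x + y for x, y in corners)
    return f_minus, f_plus

n, d = 100_000, 2
X = rng.exponential(1.0, size=(n, d))
f_minus, f_plus = frontier_extremes(X)
print(f"F- = {f_minus:.2f}, F+ = {f_plus:.2f}, ln n = {np.log(n):.2f}, "
      f"W / ln ln n = {(f_plus - f_minus) / np.log(np.log(n)):.2f}")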
We present, (partially) analyze, and apply an efficient algorithm for the simulation of multivariate Pareto records. A key role is played by the minima of the record-setting region (we call these generators), which are updated each time a new record is set; two highlights of our work are (i) efficient dynamic maintenance of the set of generators and (ii) asymptotic analysis of the expected number of generators at each time. A rough illustration of the basic record-maintenance step follows; the efficient maintenance of the generator set, which is the main algorithmic contribution, is not reproduced there.
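This hedged Python sketch (our own, naive version) updates the set of current records one observation at a time.

import numpy as np

def sets_record(x, records):
    # x sets a new Pareto record iff no current record strictly dominates it;
    # checking records suffices, since every dominated observation lies strictly below some record.
    return not any(np.all(x < r) for r in records)

def update_records(x, records):
    # Insert the new record x and discard old records that x strictly dominates.
    return [r for r in records if not np.all(r < x)] + [x]

rng = np.random.default_rng(1)
records = []
for _ in range(10_000):
    x = rng.exponential(1.0, size=3)          # d = 3 observations with Exponential(1) coordinates
    if sets_record(x, records):
        records = update_records(x, records)
print(len(records), "current records after 10000 observations")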
We study the trade-off between the Price of Anarchy (PoA) and the Price of Stability (PoS) in mechanism design, in the prototypical problem of unrelated machine scheduling. We give bounds on the space of feasible mechanisms with respect to the above metrics, and observe that two fundamental mechanisms, namely the First-Price (FP) and the Second-Price (SP), lie on the two opposite extrema of this boundary. Furthermore, for the natural class of anonymous task-independent mechanisms, we completely characterize the PoA/PoS Pareto frontier; we design a class of optimal mechanisms $\mathcal{SP}_\alpha$ that lie exactly on this frontier. In particular, these mechanisms range smoothly, with respect to the parameter $\alpha \geq 1$, across the frontier between the First-Price ($\mathcal{SP}_1$) and Second-Price ($\mathcal{SP}_\infty$) mechanisms. En route to these results, we also provide a definitive answer to an important question related to the scheduling problem, namely whether non-truthful mechanisms can provide better makespan guarantees in equilibrium than truthful ones. We answer this question in the negative, by proving that the Price of Anarchy of all scheduling mechanisms is at least $n$, where $n$ is the number of machines.
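For readers unfamiliar with the two extreme mechanisms, the following toy sketch (our own simplification, not the paper's $\mathcal{SP}_\alpha$ construction) shows the anonymous task-independent allocation rule shared by First-Price and Second-Price scheduling mechanisms: each task goes to its lowest bidder, and the winner is paid either its own bid or the second-lowest bid.

import numpy as np

def allocate(bids, price="first"):
    # bids[i, t] = machine i's declared processing time / cost for task t.
    n_machines, n_tasks = bids.shape
    winners, payments = [], np.zeros(n_machines)
    for t in range(n_tasks):
        order = np.argsort(bids[:, t])
        w = order[0]                                  # task t goes to the lowest bidder
        winners.append(w)
        payments[w] += bids[w, t] if price == "first" else bids[order[1], t]
    return winners, payments

bids = np.array([[2.0, 5.0, 1.0],
                 [3.0, 4.0, 4.0]])                    # 2 machines, 3 tasks (made-up bids)
print(allocate(bids, "first"))
print(allocate(bids, "second"))

The paper's $\mathcal{SP}_\alpha$ mechanisms range between these two payment rules as $\alpha$ varies; their exact definition is not reproduced here.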
This paper studies entropy-based multi-objective Bayesian optimization (MBO). Entropy search is a successful approach to Bayesian optimization; however, for MBO, existing entropy-based methods ignore the trade-off among objectives or introduce unreliable approximations. We propose a novel entropy-based MBO method, called Pareto-frontier entropy search (PFES), that considers the entropy of the Pareto frontier, an essential notion of optimality for multi-objective problems. Our entropy incorporates the trade-off relation among the optimal values, and, further, we derive an analytical formula without introducing additional approximations or simplifications beyond the standard entropy-search setting. We also show that our entropy computation is practically feasible by using a recursive decomposition technique known from studies of Pareto hyper-volume computation. Besides the usual MBO setting, in which all objectives are observed simultaneously, we also consider the decoupled setting, in which the objective functions can be observed separately. PFES adapts easily to the decoupled setting by considering the entropy of the marginal density for each output dimension. This approach incorporates the dependency among objectives conditioned on the Pareto frontier, which is ignored by the existing method. Our numerical experiments show the effectiveness of PFES on several benchmark datasets.
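As a point of reference for the decomposition technique mentioned above, here is a hedged two-dimensional sketch (our own illustration, not the PFES entropy computation): the region dominated by a Pareto front is sliced into disjoint rectangles, which is the same cell-decomposition idea used for Pareto hyper-volume.

import numpy as np

def pareto_front(points):
    # Non-dominated subset of a 2-d point set (maximization), sorted by the first objective descending.
    front, best_y = [], -np.inf
    for p in sorted(map(tuple, points), reverse=True):
        if p[1] > best_y:
            front.append(p)
            best_y = p[1]
    return front

def hypervolume_2d(points, ref):
    # Hyper-volume dominated above the reference point, summed over disjoint rectangular cells.
    hv, prev_y = 0.0, ref[1]
    for x, y in pareto_front(points):
        hv += (x - ref[0]) * (y - prev_y)     # rectangle contributed by this front point
        prev_y = y
    return hv

pts = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [1.5, 1.5]])
print(hypervolume_2d(pts, ref=(0.0, 0.0)))    # front = {(3,1), (2,2), (1,3)}, volume 6.0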
Designing feasible and effective architectures under diverse computation budgets incurred by different applications/devices is essential for deploying deep models in practice. Existing methods often perform an independent architecture search for each target budget, which is inefficient and unnecessary. Moreover, repeating independent searches inevitably ignores the knowledge shared among the different search processes and hampers search performance. To address these issues, we seek to train a general architecture generator that automatically produces effective architectures for an arbitrary budget merely via model inference. To this end, we propose a Pareto-Frontier-aware Neural Architecture Generator (NAG) which takes an arbitrary budget as input and produces the Pareto-optimal architecture for the target budget. We train NAG by learning the Pareto frontier (i.e., the set of Pareto-optimal architectures) over model performance and computational cost (e.g., latency). Extensive experiments on three platforms (i.e., mobile, CPU, and GPU) show the superiority of the proposed method over existing NAS methods.
This paper leverages machine-learned predictions to design competitive algorithms for online conversion problems, with the goal of improving the competitive ratio when predictions are accurate (i.e., consistency) while also guaranteeing a worst-case competitive ratio regardless of the prediction quality (i.e., robustness). We unify the algorithmic design of both integral and fractional conversion problems, which are also known as the 1-max-search and one-way trading problems, into a class of online threshold-based algorithms (OTA). By incorporating predictions into the design of OTA, we achieve the Pareto-optimal trade-off of consistency and robustness, i.e., no online algorithm can achieve a better consistency guarantee for a given robustness guarantee. We demonstrate the performance of OTA using numerical experiments on Bitcoin conversion.
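To fix ideas, here is a minimal sketch of the classical (prediction-free) threshold rule for 1-max-search, assuming the price range [m, M] is known in advance; the prediction-augmented thresholds that give the Pareto-optimal consistency/robustness trade-off in the paper are not reproduced here.

import math

def one_max_search(prices, m, M):
    # Accept the first price at or above sqrt(m * M); if none appears, trade at the last price.
    # This classical rule is sqrt(M / m)-competitive.
    threshold = math.sqrt(m * M)
    for p in prices[:-1]:
        if p >= threshold:
            return p
    return prices[-1]

prices = [12.0, 18.0, 25.0, 14.0]
print(one_max_search(prices, m=10.0, M=40.0))   # threshold = 20.0, so 25.0 is accepted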