
Entropy, Convex Optimization, and Competitive Quantum Interactions

Added by: Gus Gutoski
Publication date: 2005
Language: English
Authors: Gus Gutoski

This paper has been withdrawn by the author due to errors.



Related research

This paper investigates the problem of computing the equilibria of competitive games, which is often modeled as a constrained saddle-point optimization problem with probability simplex constraints. Despite recent efforts in understanding the last-iterate convergence of extragradient methods in the unconstrained setting, the theoretical underpinnings of these methods in the constrained settings, especially those using multiplicative updates, remain highly inadequate, even when the objective function is bilinear. Motivated by the algorithmic role of entropy regularization in single-agent reinforcement learning and game theory, we develop provably efficient extragradient methods that find the quantal response equilibrium (QRE), the solution of a zero-sum two-player matrix game with entropy regularization, at a linear rate. The proposed algorithms can be implemented in a decentralized manner, where each player executes symmetric and multiplicative updates iteratively using its own payoff without observing the opponent's actions directly. In addition, by controlling the knob of entropy regularization, the proposed algorithms can locate an approximate Nash equilibrium of the unregularized matrix game at a sublinear rate without assuming the Nash equilibrium to be unique. Our methods also lead to efficient policy extragradient algorithms for solving entropy-regularized zero-sum Markov games at a linear rate. All of our convergence rates are nearly dimension-free: they are independent of the size of the state and action spaces up to logarithmic factors, highlighting the positive role of entropy regularization in accelerating convergence.
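
To make the update rule concrete, the following is a minimal sketch of an extragradient method with entropic multiplicative updates for an entropy-regularized bilinear game. It mirrors the decentralized structure described above but is not necessarily the paper's exact algorithm; the regularization weight tau, step size eta, and iteration count are illustrative assumptions.

```python
import numpy as np

def qre_extragradient(A, tau=0.1, eta=0.1, iters=2000):
    """Sketch: extragradient with entropic (multiplicative) updates for the
    entropy-regularized zero-sum game min_x max_y x^T A y - tau*H(x) + tau*H(y),
    whose unique solution is the quantal response equilibrium (QRE)."""
    m, n = A.shape
    x, y = np.full(m, 1.0 / m), np.full(n, 1.0 / n)
    for _ in range(iters):
        # Midpoint (extrapolation) step: each player reweights its own
        # strategy multiplicatively, using only its own payoff vector.
        xm = x ** (1 - eta * tau) * np.exp(-eta * (A @ y))
        ym = y ** (1 - eta * tau) * np.exp(eta * (A.T @ x))
        xm, ym = xm / xm.sum(), ym / ym.sum()
        # Update step: the same rule, with payoffs evaluated at the midpoint.
        x = x ** (1 - eta * tau) * np.exp(-eta * (A @ ym))
        y = y ** (1 - eta * tau) * np.exp(eta * (A.T @ xm))
        x, y = x / x.sum(), y / y.sum()
    return x, y

# Matching pennies: the QRE (and Nash equilibrium) is the uniform strategy.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
print(qre_extragradient(A))
```

Note that each player's update reads only its own payoff vector (A @ y for the row player, A.T @ x for the column player), matching the decentralized implementation described in the abstract.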
In recent years, constrained optimization has become increasingly relevant to the machine learning community, with applications including Neyman-Pearson classification, robust optimization, and fair machine learning. A natural approach to constrained optimization is to optimize the Lagrangian, but this is not guaranteed to work in the non-convex setting and, if using a first-order method, cannot cope with non-differentiable constraints (e.g. constraints on rates or proportions). The Lagrangian can be interpreted as a two-player game played between a player who seeks to optimize over the model parameters and a player who wishes to maximize over the Lagrange multipliers. We propose a non-zero-sum variant of the Lagrangian formulation, which we call the proxy-Lagrangian, that can cope with non-differentiable and even discontinuous constraints. The first player minimizes external regret in terms of easy-to-optimize proxy constraints, while the second player enforces the original constraints by minimizing swap regret. For this new formulation, as for the Lagrangian in the non-convex setting, the result is a stochastic classifier. For both the proxy-Lagrangian and Lagrangian formulations, however, we prove that this classifier, instead of having unbounded size, can be taken to be a distribution over no more than $m+1$ models (where $m$ is the number of constraints). This is a significant improvement in practical terms.
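
To illustrate the game-theoretic reading of the Lagrangian (the proxy-Lagrangian itself additionally involves proxy constraints and swap-regret minimization, which this sketch omits), here is a minimal gradient descent-ascent example on a hypothetical one-dimensional constrained problem; the objective, constraint, step size, and iteration count are all illustrative assumptions.

```python
# Toy constrained problem: minimize f(theta) = (theta - 2)^2
# subject to g(theta) = theta - 1 <= 0; the optimum is theta* = 1.
f_grad = lambda theta: 2.0 * (theta - 2.0)
g = lambda theta: theta - 1.0
g_grad = lambda theta: 1.0

theta, lam, lr = 0.0, 0.0, 0.05
for _ in range(3000):
    # The theta-player descends the Lagrangian L = f + lam * g ...
    theta -= lr * (f_grad(theta) + lam * g_grad(theta))
    # ... while the lambda-player ascends it, projected onto lam >= 0.
    lam = max(0.0, lam + lr * g(theta))

print(theta, lam)  # approaches theta ~ 1 and the KKT multiplier lam ~ 2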
We study the first-order convex optimization problem, where we have black-box access to a (not necessarily smooth) function $f:\mathbb{R}^n \to \mathbb{R}$ and its (sub)gradient. Our goal is to find an $\epsilon$-approximate minimum of $f$ starting from a point that is distance at most $R$ from the true minimum. If $f$ is $G$-Lipschitz, then the classic gradient descent algorithm solves this problem with $O((GR/\epsilon)^{2})$ queries. Importantly, the number of queries is independent of the dimension $n$ and gradient descent is optimal in this regard: no deterministic or randomized algorithm can achieve better complexity that is still independent of the dimension $n$. In this paper we reprove the randomized lower bound of $\Omega((GR/\epsilon)^{2})$ using a simpler argument than previous lower bounds. We then show that although the function family used in the lower bound is hard for randomized algorithms, it can be solved using $O(GR/\epsilon)$ quantum queries. We then show an improved lower bound against quantum algorithms using a different set of instances, and establish our main result that in general even quantum algorithms need $\Omega((GR/\epsilon)^2)$ queries to solve the problem. Hence there is no quantum speedup over gradient descent for black-box first-order convex optimization without further assumptions on the function family.
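
The classical baseline in question is the textbook subgradient method; below is a minimal sketch, assuming the standard fixed step size and iterate averaging, with an illustrative choice of test function.

```python
import numpy as np

def subgradient_method(subgrad, x0, G, R, eps):
    """T = ceil((G*R/eps)^2) subgradient queries give an eps-approximate
    minimum of a G-Lipschitz convex f when x0 is within distance R of a
    minimizer; note that T is independent of the dimension."""
    T = int(np.ceil((G * R / eps) ** 2))
    eta = R / (G * np.sqrt(T))      # standard fixed step size
    x = x0.astype(float)
    avg = np.zeros_like(x)
    for _ in range(T):
        avg += x / T                # the averaged iterate enjoys the bound
        x = x - eta * subgrad(x)
    return avg

# Example: f(x) = ||x||_1 is sqrt(n)-Lipschitz in l2, with subgradient sign(x).
n = 5
xhat = subgradient_method(np.sign, np.ones(n), G=np.sqrt(n), R=np.sqrt(n), eps=0.5)
print(np.abs(xhat).sum())           # f(xhat) is within eps of the minimum 0
```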
We study to what extent quantum algorithms can speed up solving convex optimization problems. Following the classical literature, we assume access to a convex set via various oracles, and we examine the efficiency of reductions between the different oracles. In particular, we show how a separation oracle can be implemented using $\tilde{O}(1)$ quantum queries to a membership oracle, which is an exponential quantum speed-up over the $\Omega(n)$ membership queries that are needed classically. We show that a quantum computer can very efficiently compute an approximate subgradient of a convex Lipschitz function. Combining this with a simplification of recent classical work of Lee, Sidford, and Vempala gives our efficient separation oracle. This in turn implies, via a known algorithm, that $\tilde{O}(n)$ quantum queries to a membership oracle suffice to implement an optimization oracle (the best known classical upper bound on the number of membership queries is quadratic). We also prove several lower bounds: $\Omega(\sqrt{n})$ quantum separation (or membership) queries are needed for optimization if the algorithm knows an interior point of the convex set, and $\Omega(n)$ quantum separation queries are needed if it does not.
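
For contrast with the quantum speed-up, here is a sketch of the classical route to an approximate subgradient, namely finite differences, which costs about $n+1$ evaluation queries; the example function is an illustrative assumption, and the quantum gradient-estimation routine referenced above is not reproduced here.

```python
import numpy as np

def approx_subgradient(f, x, delta=1e-6):
    """Classical finite-difference estimate of a (sub)gradient: n + 1
    evaluation queries. The paper's quantum separation oracle replaces
    this step with quantum gradient estimation at only O~(1) queries."""
    n = len(x)
    fx = f(x)
    g = np.empty(n)
    for i in range(n):              # one extra query per coordinate
        e = np.zeros(n)
        e[i] = delta
        g[i] = (f(x + e) - fx) / delta
    return g

# For convex f, a subgradient g at x gives the separating hyperplane
# {y : g.(y - x) = 0} between x and the sublevel set {y : f(y) < f(x)}.
f = lambda x: np.max(np.abs(x))     # a 1-Lipschitz convex function
print(approx_subgradient(f, np.array([0.5, -1.2, 0.3])))  # ~ [0, -1, 0]
```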
Given a Boolean function $f:\{-1,1\}^n \to \{-1,1\}$, the Fourier distribution assigns probability $\widehat{f}(S)^2$ to each $S \subseteq [n]$. The Fourier Entropy-Influence (FEI) conjecture of Friedgut and Kalai asks whether there exists a universal constant $C>0$ such that $H(\widehat{f}^2) \leq C \cdot \mathrm{Inf}(f)$, where $H(\widehat{f}^2)$ is the Shannon entropy of the Fourier distribution of $f$ and $\mathrm{Inf}(f)$ is the total influence of $f$.

1) We consider the weaker Fourier Min-entropy-Influence (FMEI) conjecture, which asks whether $H_{\infty}(\widehat{f}^2) \leq C \cdot \mathrm{Inf}(f)$, where $H_{\infty}(\widehat{f}^2)$ is the min-entropy of the Fourier distribution. We show $H_{\infty}(\widehat{f}^2) \leq 2C_{\min}^{\oplus}(f)$, where $C_{\min}^{\oplus}(f)$ is the minimum parity certificate complexity of $f$. We also show that for every $\epsilon \geq 0$, we have $H_{\infty}(\widehat{f}^2) \leq 2\log(\|\widehat{f}\|_{1,\epsilon}/(1-\epsilon))$, where $\|\widehat{f}\|_{1,\epsilon}$ is the approximate spectral norm of $f$. As a corollary, we verify the FMEI conjecture for the class of read-$k$ DNFs (for constant $k$).

2) We show that $H(\widehat{f}^2) \leq 2\,\mathrm{aUC}^{\oplus}(f)$, where $\mathrm{aUC}^{\oplus}(f)$ is the average unambiguous parity certificate complexity of $f$. This improves upon a result of Chakraborty et al. An important consequence of the FEI conjecture is the long-standing conjecture of Mansour. We show that a weaker version of FEI already implies Mansour's conjecture: is $H(\widehat{f}^2) \leq C \cdot \min\{C^0(f), C^1(f)\}$, where $C^0(f)$ and $C^1(f)$ are the 0- and 1-certificate complexities of $f$, respectively?

3) We study what FEI implies about the structure of polynomials that $1/3$-approximate a Boolean function. We pose a conjecture (which is implied by FEI): no flat degree-$d$ polynomial of sparsity $2^{\omega(d)}$ can $1/3$-approximate a Boolean function. We prove this conjecture unconditionally for a particular class of polynomials.
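
To make the quantities concrete, here is a brute-force sketch that computes the Fourier distribution, its Shannon entropy $H(\widehat{f}^2)$, and the total influence $\mathrm{Inf}(f)$ for a small Boolean function; the majority-of-3 example is an illustrative choice.

```python
import itertools, math

def entropy_and_influence(f, n):
    """Brute-force the Fourier spectrum of f : {-1,1}^n -> {-1,1}, then
    return the Shannon entropy of the Fourier distribution {f_hat(S)^2}
    and the total influence Inf(f) = sum_S |S| * f_hat(S)^2."""
    cube = list(itertools.product([-1, 1], repeat=n))
    subsets = itertools.chain.from_iterable(
        itertools.combinations(range(n), k) for k in range(n + 1))
    weights = {}
    for S in subsets:
        # f_hat(S) = E_x[ f(x) * prod_{i in S} x_i ]
        c = sum(f(x) * math.prod(x[i] for i in S) for x in cube) / len(cube)
        if c != 0.0:
            weights[S] = c * c
    entropy = -sum(w * math.log2(w) for w in weights.values())
    influence = sum(len(S) * w for S, w in weights.items())
    return entropy, influence

# Majority on 3 bits: H = 2.0 and Inf = 1.5, so FEI holds here with C = 4/3.
maj3 = lambda x: 1 if sum(x) > 0 else -1
print(entropy_and_influence(maj3, 3))
```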
