The Potts model has many applications and is equivalent to certain min-cut and max-flow models. Primal-dual algorithms have been used to solve these problems, but due to the special structure of the models, proving convergence remains difficult. In this work, we develop two novel preconditioned and over-relaxed alternating direction methods of multipliers (ADMM) with convergence guarantees for these models. With the proposed preconditioners or block preconditioners, the over-relaxed variants of preconditioned ADMM yield accelerations. Preconditioned and over-relaxed Douglas-Rachford splitting methods are also considered for the Potts model. Our framework handles both two-labeling and multi-labeling problems with appropriate block preconditioners based on Eckstein-Bertsekas and Fortin-Glowinski splitting techniques.
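To make the over-relaxation step concrete, here is a minimal Python/NumPy sketch of over-relaxed ADMM on a toy l1-regularized problem. This is an illustration only, not the block-preconditioned algorithm for the Potts model developed above; the problem, the names soft_threshold, rho, alpha, lam, and their values are chosen purely for exposition.

import numpy as np

# Minimal sketch: solve min_x 0.5*||x - b||^2 + lam*||x||_1 via the splitting
# x = z, using scaled ADMM with over-relaxation parameter alpha in (0, 2).
def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def over_relaxed_admm(b, lam=0.5, rho=1.0, alpha=1.8, iters=200):
    x = np.zeros_like(b)
    z = np.zeros_like(b)
    u = np.zeros_like(b)                          # scaled dual variable
    for _ in range(iters):
        x = (b + rho * (z - u)) / (1.0 + rho)     # quadratic x-subproblem
        x_hat = alpha * x + (1.0 - alpha) * z     # over-relaxation step
        z = soft_threshold(x_hat + u, lam / rho)  # prox of the l1 term
        u = u + x_hat - z                         # dual update
    return z

print(over_relaxed_admm(np.array([2.0, -0.3, 0.8, -1.5])))  # entries shrunk toward zero

Choosing the relaxation parameter alpha in (0, 2), typically close to 2, is what produces the kind of acceleration referred to above; preconditioning, in the sense used above, typically amounts to replacing exact subproblem solves with cheaper preconditioned proximal steps.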
The classical max-flow min-cut theorem describes transport through certain idealized classical networks. We consider the quantum analog for tensor networks. By associating an integral capacity to each edge and a tensor to each vertex in a flow network, we can also interpret it as a tensor network, and more specifically, as a linear map from the input space to the output space. The quantum max-flow is defined to be the maximal rank of this linear map over all choices of tensors. The quantum min-cut is defined to be the minimum product of the capacities of edges over all cuts of the tensor network. We show that, unlike the classical case, the quantum max-flow = min-cut conjecture is not true in general. Under certain conditions, e.g., when the capacity on each edge is some power of a fixed integer, the quantum max-flow is proved to equal the quantum min-cut. However, concrete examples are also provided where the equality does not hold. We also find connections of quantum max-flow/min-cut with entropy of entanglement and the quantum satisfiability problem. We speculate that the phenomena revealed may be of interest both for spin systems in condensed matter and for quantum gravity.
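As a concrete illustration of these definitions (a toy instance chosen for exposition, not one of the examples from the work above), consider the chain network s --2--> A --3--> B --2--> t: vertex A carries a tensor in C^2 (x) C^3, i.e. a 3x2 matrix, vertex B a tensor in C^3 (x) C^2, and the network contracts to the composition B @ A : C^2 -> C^2. Every cut of the chain removes a single edge, so the quantum min-cut is min(2, 3, 2) = 2, and a generic (random) choice of tensors attains the maximal rank, so quantum max-flow = quantum min-cut = 2 here; the counterexamples discussed above require more intricate graphs. A NumPy check:

import numpy as np

# Chain s --2--> A --3--> B --2--> t: random tensors generically attain the
# maximal rank of the induced linear map C^2 -> C^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # tensor at vertex A, viewed as a 3x2 matrix
B = rng.standard_normal((2, 3))   # tensor at vertex B, viewed as a 2x3 matrix

quantum_max_flow = np.linalg.matrix_rank(B @ A)  # = 2 for a generic choice
quantum_min_cut = min(2, 3, 2)                   # cheapest cut of the chain
print(quantum_max_flow, quantum_min_cut)         # 2 2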
We study the ridge method for min-max problems, and investigate its convergence without any convexity, differentiability or qualification assumption. The central issue is to determine whether the parametric optimality formula provides a conservative field, a notion of generalized derivative well suited for optimization. The answer to this question is positive in a semi-algebraic, and more generally definable, context. The proof involves a new characterization of definable conservative fields which is of independent interest. As a consequence, the ridge method applied to definable objectives is proved to have a minimizing behavior and to converge to a set of equilibria which satisfy an optimality condition. Definability is key to our proof: we show that for a more general class of nonsmooth functions, conservativity of the parametric optimality formula may fail, resulting in an absurd behavior of the ridge method.
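In schematic form (the symbols f, Y, v, D_f, and \gamma_k below are notation introduced here for illustration, with \partial_x a generalized derivative with respect to x), the value function, the parametric optimality formula, and the ridge iteration read:

\[
  v(x) \;=\; \max_{y \in Y} f(x, y),
  \qquad
  D_f(x) \;=\; \operatorname{conv}
  \bigl\{\, \partial_x f(x, y) \;:\; y \in \operatorname{argmax}_{y' \in Y} f(x, y') \,\bigr\},
\]
\[
  x_{k+1} \;=\; x_k - \gamma_k d_k,
  \qquad d_k \in D_f(x_k), \qquad \gamma_k > 0 .
\]

The question addressed above is whether the set-valued map D_f is a conservative field for v; in the definable (e.g., semi-algebraic) setting the answer is positive, which yields the minimizing behavior and the convergence of the iteration to equilibria.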
In this note we discuss the geometry of matrix product states with periodic boundary conditions and provide three infinite sequences of examples where the quantum max-flow is strictly less than the quantum min-cut. In the first, we fix the underlying graph to be a 4-cycle and verify a prediction of Hastings that inequality occurs for infinitely many bond dimensions. In the second, we generalize this result to a 2d-cycle. In the third, we show that the 2d-cycle with periodic boundary conditions gives inequality for all d when all bond dimensions equal two, with a gap of at least 2^{d-2} between the quantum max-flow and the quantum min-cut.
We consider a max-min variation of the classical problem of maximizing a linear function over the base of a polymatroid. In our problem, the vector of coefficients of the linear function is not a known parameter of the problem but may be any vertex of a simplex, and we maximize the linear function in the worst case. Equivalently, we view the problem as a zero-sum game between a maximizing player whose mixed strategy set is the base of the polymatroid and a minimizing player whose mixed strategy set is a simplex. We show how to efficiently obtain optimal strategies for both players and an expression for the value of the game. Furthermore, we give a characterization of the set of optimal strategies for the minimizing player. We consider fou
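For a hedged illustration on a tiny instance (an example chosen for exposition, not the general algorithm above): take ground set {0, 1, 2} with the submodular function f(S) = min(|S|, 2), the rank function of the uniform matroid U_{2,3}. When the minimizing player's pure strategies are the coordinates (the vertices of the simplex), the game value is max_{x in B(f)} min_i x_i, which can be computed by a small linear program with SciPy:

import itertools
import numpy as np
from scipy.optimize import linprog

# Toy instance: maximize t subject to x_i >= t for all i, x(S) <= f(S) for all
# proper nonempty S, and x(E) = f(E), i.e. x ranges over the base polytope B(f).
E = [0, 1, 2]
f = lambda S: min(len(S), 2)

c = np.array([0.0, 0.0, 0.0, -1.0])            # variables (x0, x1, x2, t); minimize -t
A_ub, b_ub = [], []
for i in E:                                    # t <= x_i
    row = np.zeros(4); row[i] = -1.0; row[3] = 1.0
    A_ub.append(row); b_ub.append(0.0)
for r in (1, 2):                               # x(S) <= f(S), S proper nonempty
    for S in itertools.combinations(E, r):
        row = np.zeros(4); row[list(S)] = 1.0
        A_ub.append(row); b_ub.append(float(f(S)))
A_eq, b_eq = [[1.0, 1.0, 1.0, 0.0]], [float(f(E))]   # x(E) = f(E)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3 + [(None, None)])
print(res.x[:3], -res.fun)   # optimal x = (2/3, 2/3, 2/3), game value 2/3

In this instance the optimal x spreads mass as evenly as the polymatroid constraints allow, and the worst-case coordinate gives the value 2/3.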
Recent applications in machine learning have renewed the interest of the community in min-max optimization problems. While gradient-based optimization methods are widely used to solve such problems, there are, however, many scenarios where these techniques are not well suited, or even not applicable when the gradient is not accessible. We investigate the use of direct-search methods, a class of derivative-free techniques that access the objective function only through an oracle. In this work, we design a novel algorithm in the context of min-max saddle point games where one sequentially updates the min and the max player. We prove convergence of this algorithm under mild assumptions, where the objective of the max player satisfies the Polyak-Łojasiewicz (PL) condition, while the min player is characterized by a nonconvex objective. Our method only assumes dynamically adjusted, sufficiently accurate estimates of the oracle, holding with a fixed probability. To the best of our knowledge, our analysis is the first to address the convergence of a direct-search method for min-max objectives in a stochastic setting.
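As a rough illustration of the sequential-polling idea (a deterministic, simplified compass-search sketch; it is not the stochastic algorithm analyzed above, and the toy objective f and all names below are for exposition only), one can alternately poll the max player in y and the min player in x and halve the step size when neither player can improve:

import numpy as np

def f(x, y):
    # toy saddle objective; strongly concave in y, hence PL in y
    return x[0] ** 2 + x[0] * y[0] - y[0] ** 2

def poll(center, step, score, better):
    # Try +/- step along each coordinate; return an improving point or None.
    for i in range(len(center)):
        for s in (+step, -step):
            trial = center.copy(); trial[i] += s
            if better(score(trial), score(center)):
                return trial
    return None

def direct_search_minmax(x, y, step=1.0, tol=1e-6, max_iter=10000):
    while step >= tol and max_iter > 0:
        max_iter -= 1
        y_new = poll(y, step, lambda yy: f(x, yy), lambda a, b: a > b)  # max player
        x_new = poll(x, step, lambda xx: f(xx, y), lambda a, b: a < b)  # min player
        if y_new is None and x_new is None:
            step *= 0.5                 # no poll point improves: shrink the mesh
        else:
            y = y_new if y_new is not None else y
            x = x_new if x_new is not None else x
    return x, y

print(direct_search_minmax(np.array([2.0]), np.array([-1.0])))  # reaches (0, 0) here

The stochastic setting discussed above replaces the exact evaluations of f by oracle estimates that are sufficiently accurate only with a fixed probability, which is where the probabilistic analysis enters.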