
A max-cut formulation of 0/1 programs

Posted by Jean Lasserre
Publication date: 2015
Paper language: English





We show that the linear or quadratic 0/1 program $$P: \quad \min\{\, c^Tx + x^TFx \;:\; Ax = b;\ x \in \{0,1\}^n \,\},$$ can be formulated as a MAX-CUT problem whose associated graph is simply related to the matrices $F$ and $A^TA$. Hence the whole arsenal of approximation techniques for MAX-CUT can be applied. We also compare the lower bound of the resulting semidefinite (or Shor) relaxation with that of the standard LP-relaxation and the first semidefinite relaxations associated with the Lasserre hierarchy and the copositive formulations of $P$.
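The target of the reduction, MAX-CUT, asks for a vertex bipartition maximizing the total weight of crossing edges. As a minimal illustration of that objective on a tiny graph (this is only a brute-force baseline, not the paper's construction, which builds the graph from $F$ and $A^TA$):

```python
import itertools
import numpy as np

def max_cut_brute_force(W):
    """Exact MAX-CUT value of a small graph with symmetric weight matrix W."""
    n = W.shape[0]
    best = 0.0
    for bits in itertools.product([0, 1], repeat=n - 1):
        side = (0,) + bits  # fix vertex 0's side to halve the search space
        cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n)
                  if side[i] != side[j])
        best = max(best, cut)
    return best

# 4-cycle with unit weights: the alternating bipartition cuts all 4 edges.
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0
print(max_cut_brute_force(W))  # 4.0
```

Enumeration is of course exponential in $n$; the point of the paper's reduction is precisely that the well-developed approximation machinery for this problem becomes available for the 0/1 program $P$.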


Read also

We study the applicability of distributed, local algorithms to 0/1 max-min LPs where the objective is to maximise $\min_k \sum_v c_{kv} x_v$ subject to $\sum_v a_{iv} x_v \le 1$ for each $i$ and $x_v \ge 0$ for each $v$. Here $c_{kv} \in \{0,1\}$, $a_{iv} \in \{0,1\}$, and the support sets $V_i = \{v : a_{iv} > 0\}$ and $V_k = \{v : c_{kv} > 0\}$ have bounded size; in particular, we study the case $|V_k| \le 2$. Each agent $v$ is responsible for choosing the value of $x_v$ based on information within its constant-size neighbourhood; the communication network is the hypergraph where the sets $V_k$ and $V_i$ constitute the hyperedges. We present a local approximation algorithm which achieves an approximation ratio arbitrarily close to the theoretical lower bound presented in prior work.
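Setting the distributed aspect aside, the underlying max-min LP can be solved centrally via the standard epigraph reformulation: introduce $t$, require $t \le c_k^T x$ for every $k$, and maximize $t$. A sketch with a hypothetical two-variable instance (this is not the paper's local algorithm, just the LP it approximates):

```python
import numpy as np
from scipy.optimize import linprog

# Maximize min_k sum_v c_kv x_v subject to sum_v a_iv x_v <= 1, x_v >= 0.
c = np.array([[1, 0],    # c_1: objective row for agent k = 1
              [0, 1]])   # c_2: objective row for agent k = 2
a = np.array([[1, 1]])   # single packing constraint: x_1 + x_2 <= 1

n = c.shape[1]
# Variables z = (x_1, ..., x_n, t); linprog minimizes, so use objective -t.
obj = np.zeros(n + 1)
obj[-1] = -1.0
A_ub = np.vstack([
    np.hstack([-c, np.ones((c.shape[0], 1))]),   # t - c_k^T x <= 0
    np.hstack([a, np.zeros((a.shape[0], 1))]),   # a_i^T x <= 1
])
b_ub = np.concatenate([np.zeros(c.shape[0]), np.ones(a.shape[0])])
res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * n + [(None, None)])
print(res.x[-1])  # optimal min_k value: 0.5, attained at x = (0.5, 0.5)
```

The challenge the abstract addresses is achieving (approximately) this optimum when each $x_v$ must be chosen from constant-radius local information only.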
We consider the Max-Cut problem. Let $G = (V,E)$ be a graph with adjacency matrix $(a_{ij})_{i,j=1}^{n}$. Burer, Monteiro & Zhang proposed to find, for $n$ angles $\{\theta_1, \theta_2, \dots, \theta_n\} \subset [0, 2\pi]$, minima of the energy $$f(\theta_1, \dots, \theta_n) = \sum_{i,j=1}^{n} a_{ij} \cos(\theta_i - \theta_j)$$ because configurations achieving a global minimum lead to a partition of size 0.878 Max-Cut(G). This approach is known to be computationally viable and leads to very good results in practice. We prove that by replacing $\cos(\theta_i - \theta_j)$ with an explicit function $g_{\varepsilon}(\theta_i - \theta_j)$, global minima of this new functional lead to a $(1-\varepsilon)$ Max-Cut(G). This suggests some interesting algorithms that perform well. It also shows that the problem of finding approximate global minima of energy functionals of this type is NP-hard in general.
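The angle-based energy above is easy to experiment with numerically: minimize $f$ by a local method from random starts, then round the angles to a bipartition by splitting the circle into two half-circles. The following is a rough sketch of that pipeline under these assumptions (it is not the exact Burer-Monteiro-Zhang algorithm, nor the $g_{\varepsilon}$ variant from the abstract):

```python
import numpy as np
from scipy.optimize import minimize

def angle_cut(A, restarts=5, seed=0):
    """Minimize f(theta) = sum_ij a_ij cos(theta_i - theta_j) from several
    random starts, then round the best angles to a half-circle bipartition."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    f = lambda th: float(np.sum(A * np.cos(th[:, None] - th[None, :])))
    best = min((minimize(f, rng.uniform(0, 2 * np.pi, n))
                for _ in range(restarts)), key=lambda r: r.fun)
    th = (best.x - best.x[0]) % (2 * np.pi)  # rotate so vertex 0 sits at 0
    side = th < np.pi                        # split the circle in half
    return sum(A[i, j] for i in range(n) for j in range(i + 1, n)
               if side[i] != side[j])

# Triangle with unit weights: optimal angles are 120 degrees apart, and any
# half-circle split recovers the max cut of 2.
A = np.ones((3, 3)) - np.eye(3)
print(angle_cut(A))  # 2.0
```

On larger graphs a local minimizer may of course stall at a non-global configuration, which is exactly the gap between "global minima yield 0.878 Max-Cut(G)" and what a practical descent method guarantees.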
The max-cut problem is a classical graph theory problem which is NP-complete. The best polynomial time approximation scheme relies on semidefinite programming (SDP). We study the conditions under which graphs of certain classes have rank-1 solutions to the max-cut SDP. We apply these findings to look at how solutions to the max-cut SDP behave under simple combinatorial constructions. Our results determine when solutions to the max-cut SDP for cycle graphs are rank-1. We find the solutions to the max-cut SDP of the vertex sum of two graphs. We then characterize the SDP solutions upon joining two triangle graphs by an edge sum.
Many modern statistical estimation problems are defined by three major components: a statistical model that postulates the dependence of an output variable on the input features; a loss function measuring the error between the observed output and the model predicted output; and a regularizer that controls the overfitting and/or variable selection in the model. We study the sampling version of this generic statistical estimation problem where the model parameters are estimated by empirical risk minimization, which involves the minimization of the empirical average of the loss function at the data points weighted by the model regularizer. In our setup we allow all three component functions discussed above to be of the difference-of-convex (dc) type and illustrate them with a host of commonly used examples, including those in continuous piecewise affine regression and in deep learning (where the activation functions are piecewise affine). We describe a nonmonotone majorization-minimization (MM) algorithm for solving the unified nonconvex, nondifferentiable optimization problem which is formulated as a specially structured composite dc program of the pointwise max type, and present convergence results to a directional stationary solution. An efficient semismooth Newton method is proposed to solve the dual of the MM subproblems. Numerical results are presented to demonstrate the effectiveness of the proposed algorithm and the superiority of continuous piecewise affine regression over the standard linear model.
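The core difference-of-convex idea behind such MM schemes can be shown on a toy problem: write the objective as $g(x) - h(x)$ with both parts convex, linearize $h$ at the current iterate via a subgradient, and minimize the resulting convex majorizer exactly. The sketch below is a bare-bones DCA-style iteration on a one-dimensional example, far simpler than the paper's nonmonotone MM algorithm with semismooth Newton subproblem solves:

```python
# Toy DCA iteration for f(x) = x^2 - 2|x| = g(x) - h(x), with g(x) = x^2 and
# h(x) = 2|x| both convex.  Each step picks a subgradient s of h at x_k and
# minimizes the convex majorizer x^2 - s*x in closed form (argmin = s/2).
def dca(x0, iters=20):
    x = x0
    for _ in range(iters):
        s = 2.0 if x >= 0 else -2.0  # subgradient of 2|x|
        x = s / 2.0                  # exact minimizer of x^2 - s*x
    return x

x = dca(0.3)
print(x, x**2 - 2 * abs(x))  # converges to a stationary point at x = 1, f = -1
```

Note the iteration converges to $x = 1$ or $x = -1$ depending on the sign of the start: both are (directional) stationary points, which mirrors the kind of guarantee stated in the abstract.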
In this work, we initiate the study of fault tolerant Max Cut, where given an edge-weighted undirected graph $G=(V,E)$, the goal is to find a cut $S \subseteq V$ that maximizes the total weight of edges that cross $S$ even after an adversary removes $k$ vertices from $G$. We consider two types of adversaries: an adaptive adversary that sees the outcome of the random coin tosses used by the algorithm, and an oblivious adversary that does not. For any constant number of failures $k$ we present an approximation of $(0.878-\epsilon)$ against an adaptive adversary and of $\alpha_{GW} \approx 0.8786$ against an oblivious adversary (here $\alpha_{GW}$ is the approximation achieved by the random hyperplane algorithm of [Goemans-Williamson J. ACM `95]). Additionally, we present a hardness of approximation of $\alpha_{GW}$ against both types of adversaries, rendering our results (virtually) tight. The non-linear nature of the fault tolerant objective makes the design and analysis of algorithms harder when compared to the classic Max Cut. Hence, we employ approaches ranging from multi-objective optimization to LP duality and the ellipsoid algorithm to obtain our results.
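The max-min structure of the fault tolerant objective can be made concrete by brute force on a tiny instance: for each candidate cut $S$, take the worst cut value over all $k$-subsets of deleted vertices, then maximize over $S$. This exponential-time sketch only illustrates the objective; it has nothing to do with the paper's approximation algorithms:

```python
import itertools
import numpy as np

def fault_tolerant_max_cut(W, k):
    """Brute-force fault tolerant MAX-CUT: choose a side assignment maximizing
    the worst-case cut weight after an adversary deletes k vertices."""
    n = W.shape[0]
    V = range(n)
    def cut(side, removed):
        return sum(W[i, j] for i in V for j in V
                   if i < j and i not in removed and j not in removed
                   and side[i] != side[j])
    best = 0.0
    for bits in itertools.product([0, 1], repeat=n):
        worst = min(cut(bits, set(rm)) for rm in itertools.combinations(V, k))
        best = max(best, worst)
    return best

# 4-cycle, one failure: the alternating cut keeps value 2 after any deletion,
# whereas a non-alternating cut can drop to 1.
W = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    W[i, j] = W[j, i] = 1.0
print(fault_tolerant_max_cut(W, 1))  # 2.0
```

The inner minimum over deletions is what makes the objective non-linear in the cut indicator, the difficulty the abstract highlights.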