
The SDP value of random 2CSPs

Added by Tselil Schramm
Publication date: 2021
Language: English





We consider a very wide class of models for sparse random Boolean 2CSPs; equivalently, degree-2 optimization problems over~$\{\pm 1\}^n$. For each model $\mathcal{M}$, we identify the high-probability value~$s^*_{\mathcal{M}}$ of the natural SDP relaxation (equivalently, the quantum value). That is, for all $\varepsilon > 0$ we show that the SDP optimum of a random $n$-variable instance is (when normalized by~$n$) in the range $(s^*_{\mathcal{M}}-\varepsilon, s^*_{\mathcal{M}}+\varepsilon)$ with high probability. Our class of models includes non-regular CSPs, and ones where the SDP relaxation value is strictly smaller than the spectral relaxation value.
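As a concrete illustration of the quantity being studied (not of the paper's proof technique), the sketch below computes the basic SDP relaxation value of a small random 2XOR instance with cvxpy and normalizes it by $n$. The instance model, sizes, and variable names here are illustrative assumptions.

```python
# Minimal sketch: basic SDP relaxation of a random 2XOR instance.
# Assumptions: toy sizes, a simple Erdos-Renyi-style constraint model,
# and cvxpy's default SDP solver; none of this is the authors' exact setup.
import numpy as np
import cvxpy as cp

n, m = 40, 120                      # variables and constraints (toy sizes)
rng = np.random.default_rng(0)

# Each 2XOR constraint asks x_u * x_v = b with b in {+1, -1}.
# Encode the objective sum_c b_c x_u x_v as <W, x x^T> for a symmetric W.
W = np.zeros((n, n))
for _ in range(m):
    u, v = rng.choice(n, size=2, replace=False)
    b = rng.choice([-1.0, 1.0])
    W[u, v] += b / 2
    W[v, u] += b / 2

# Basic SDP: relax x x^T to a PSD matrix X with unit diagonal.
X = cp.Variable((n, n), PSD=True)
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), [cp.diag(X) == 1])
prob.solve()

# The normalized optimum; as n grows this concentrates around s*_M.
print("SDP value per variable:", prob.value / n)
```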



Related research

We precisely determine the SDP value (equivalently, quantum value) of large random instances of certain kinds of constraint satisfaction problems, ``two-eigenvalue 2CSPs''. We show this SDP value coincides with the spectral relaxation value, possibly indicating a computational threshold. Our analysis extends the previously resolved cases of random regular $\mathsf{2XOR}$ and $\textsf{NAE-3SAT}$, and includes new cases such as random $\mathsf{Sort}_4$ (equivalently, $\mathsf{CHSH}$) and $\mathsf{Forrelation}$ CSPs. Our techniques include new generalizations of the nonbacktracking operator, the Ihara--Bass Formula, and the Friedman/Bordenave proof of Alon's Conjecture.
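For intuition about one ingredient named above, here is a minimal sketch of the standard, unweighted nonbacktracking operator of a graph; the paper works with generalizations of this operator, and the graph, sizes, and library choices below are assumptions for illustration only.

```python
# Minimal sketch: the classical nonbacktracking operator B of a graph,
# indexed by directed edges (darts); B[(u,v),(v,w)] = 1 whenever w != u.
import numpy as np
import networkx as nx

G = nx.random_regular_graph(3, 10, seed=1)          # small 3-regular graph
darts = [(u, v) for u, v in G.edges()] + [(v, u) for u, v in G.edges()]
index = {e: i for i, e in enumerate(darts)}

B = np.zeros((len(darts), len(darts)))
for (u, v) in darts:
    for w in G.neighbors(v):
        if w != u:                                   # forbid immediate backtracking
            B[index[(u, v)], index[(v, w)]] = 1

# The Ihara--Bass formula relates det(I - zB) to det(I - zA + z^2(D - I))
# for the adjacency matrix A and degree matrix D; here we just look at the
# spectral radius, which is d - 1 = 2 for a 3-regular graph.
print("spectral radius of B:", max(abs(np.linalg.eigvals(B))))
```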
Unlike its cousin 3SAT, the NAE-3SAT (not-all-equal 3SAT) problem has the property that spectral/SDP algorithms can efficiently refute random instances when the constraint density is a large constant (with high probability). But do these methods work immediately above the satisfiability threshold, or is there still a range of constraint densities for which random NAE-3SAT instances are unsatisfiable but hard to refute? We show that the latter situation prevails, at least in the context of random regular instances and SDP-based refutation. More precisely, whereas a random $d$-regular instance of NAE-3SAT is easily shown to be unsatisfiable (whp) once $d \geq 8$, we establish the following sharp threshold result regarding efficient refutation: if $d < 13.5$ then the basic SDP, even augmented with triangle inequalities, fails to refute satisfiability (whp), whereas if $d > 13.5$ then even the most basic spectral algorithm refutes satisfiability (whp).
Edouard Bonnet (2020)
We show, assuming the Strong Exponential Time Hypothesis, that for every $\varepsilon > 0$, approximating directed Diameter on $m$-arc graphs within ratio $7/4 - \varepsilon$ requires $m^{4/3 - o(1)}$ time. Our construction uses nonnegative edge weights but even holds for sparse digraphs, i.e., for which the number of vertices $n$ and the number of arcs $m$ satisfy $m = n \log^{O(1)} n$. This is the first result that conditionally rules out a near-linear time $5/3$-approximation for Diameter.
We consider the problem of scattering $n$ robots in a two dimensional continuous space. As this problem is impossible to solve in a deterministic manner, all solutions must be probabilistic. We investigate the amount of randomness (that is, the number of random bits used by the robots) that is required to achieve scattering. We first prove that $n \log n$ random bits are necessary to scatter $n$ robots in any setting. Also, we give a sufficient condition for a scattering algorithm to be random bit optimal. As it turns out that previous solutions for scattering satisfy our condition, they are hence proved random bit optimal for the scattering problem. Then, we investigate the time complexity of scattering when strong multiplicity detection is not available. We prove that such algorithms cannot converge in constant time in the general case and in $o(\log \log n)$ rounds for random bit optimal scattering algorithms. However, we present a family of scattering algorithms that converge as fast as needed without using multiplicity detection. Also, we put forward a specific protocol of this family that is random bit optimal ($n \log n$ random bits are used) and time optimal ($\log \log n$ rounds are used). This improves the time complexity of previous results in the same setting by a $\log n$ factor. Aside from characterizing the random bit complexity of mobile robot scattering, our study also closes its time complexity gap with and without strong multiplicity detection (that is, $O(1)$ time complexity is only achievable when strong multiplicity detection is available, and it is possible to approach it as needed otherwise).
The problem of finding a common refinement of a set of rooted trees with common leaf set $L$ appears naturally in mathematical phylogenetics whenever poorly resolved information on the same taxa from different sources is to be reconciled. This constitutes a special case of the well-studied supertree problem, where the leaf sets of the input trees may differ. Algorithms that solve the rooted tree compatibility problem are of course applicable to this special case. However, they require sophisticated auxiliary data structures and have a running time of at least $O(k|L|\log^2(k|L|))$ for $k$ input trees. Here, we show that the problem can be solved in $O(k|L|)$ time using a simple bottom-up algorithm called LinCR. An implementation of LinCR in Python is freely available at https://github.com/david-schaller/tralda.