
The complexity of high-dimensional cuts

Published by: Meirav Zehavi
Publication date: 2021
Research field: Computer science
Paper language: English





Cut problems form one of the most fundamental classes of problems in algorithmic graph theory. For instance, the minimum cut, the minimum $s$-$t$ cut, the minimum multiway cut, and the minimum $k$-way cut are some of the commonly encountered cut problems. Many of these problems have been extensively studied over several decades. In this paper, we initiate the algorithmic study of some cut problems in high dimensions. The first problem we study, namely, Topological Hitting Set (THS), is defined as follows: Given a nontrivial $r$-cycle $\zeta$ in a simplicial complex $\mathsf{K}$, find a set $\mathcal{S}$ of $r$-dimensional simplices of minimum cardinality so that $\mathcal{S}$ meets every cycle homologous to $\zeta$. Our main result is that this problem admits a polynomial-time solution on triangulations of closed surfaces. Interestingly, the optimal solution is given in terms of the cocycles of the surface. For general complexes, we show that THS is W[1]-hard with respect to the solution size $k$. On the positive side, we show that THS admits an FPT algorithm with respect to $k+d$, where $d$ is the maximum degree of the Hasse graph of the complex $\mathsf{K}$. We also define a problem called Boundary Nontrivialization (BNT): Given a bounding $r$-cycle $\zeta$ in a simplicial complex $\mathsf{K}$, find a set $\mathcal{S}$ of $(r+1)$-dimensional simplices of minimum cardinality so that the removal of $\mathcal{S}$ from $\mathsf{K}$ makes $\zeta$ non-bounding. We show that BNT is W[1]-hard with respect to the solution size as the parameter, and has an $O(\log n)$-approximation FPT algorithm for $(r+1)$-dimensional complexes with the $(r+1)$-th Betti number $\beta_{r+1}$ as the parameter. Finally, we provide randomized (approximation) FPT algorithms for the global variants of THS and BNT.
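To make the definitions concrete, here is a minimal sketch (our illustration, not code from the paper; all names are hypothetical) of the homology test underlying both problems over $\mathbb{Z}_2$: an $r$-cycle $\zeta$ is bounding exactly when it lies in the column space of the boundary matrix $\partial_{r+1}$, which Gaussian elimination over GF(2) decides.

```python
from itertools import combinations

def boundary_matrix(r_simplices, r1_simplices):
    """Columns of the Z/2 boundary map d_{r+1}, packed as bit masks over
    the given ordering of r-simplices (each simplex a frozenset of ids)."""
    index = {s: i for i, s in enumerate(r_simplices)}
    cols = []
    for t in r1_simplices:
        col = 0
        for face in combinations(sorted(t), len(t) - 1):
            col ^= 1 << index[frozenset(face)]
        cols.append(col)
    return cols

def is_bounding(zeta, cols):
    """Is the cycle zeta (a bit mask) in the GF(2) span of the columns?"""
    pivots = {}
    for c in cols:                      # row-reduce the columns
        while c:
            p = c.bit_length() - 1
            if p in pivots:
                c ^= pivots[p]
            else:
                pivots[p] = c
                break
    while zeta:                         # then reduce zeta against the basis
        p = zeta.bit_length() - 1
        if p not in pivots:
            return False
        zeta ^= pivots[p]
    return True

edges = [frozenset(e) for e in [(0, 1), (0, 2), (1, 2)]]
tris = [frozenset((0, 1, 2))]
zeta = 0b111                            # the 1-cycle using all three edges
print(is_bounding(zeta, boundary_matrix(edges, tris)))   # True: it bounds
```

In this toy complex, deleting the single triangle is a size-one BNT solution: afterwards $\zeta$ is no longer bounding.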




Read also

We investigate the parameterized complexity in $a$ and $b$ of determining whether a graph $G$ has a subset of $a$ vertices and $b$ edges whose removal disconnects $G$, or disconnects two prescribed vertices $s, t \in V(G)$.
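To pin down the problem statement, here is a brute-force reference check (our exponential-time illustration, not the parameterized algorithm the abstract studies), written against the networkx library:

```python
from itertools import combinations
import networkx as nx

def has_mixed_st_cut(G, a, b, s, t):
    """Can we delete `a` vertices (neither s nor t) and `b` edges so that
    no s-t path remains? Tries every candidate deletion set."""
    candidates = [v for v in G.nodes if v not in (s, t)]
    for vs in combinations(candidates, a):
        H = G.copy()
        H.remove_nodes_from(vs)
        for es in combinations(list(H.edges), b):
            H2 = H.copy()
            H2.remove_edges_from(es)
            if not nx.has_path(H2, s, t):
                return True
    return False

G = nx.path_graph(5)                     # 0-1-2-3-4
print(has_mixed_st_cut(G, 0, 1, 0, 4))   # True: any single edge separates
print(has_mixed_st_cut(G, 1, 0, 0, 4))   # True: any internal vertex does
```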
Recently, Ermon et al. (2013) pioneered a way to practically compute approximations to large-scale counting or discrete integration problems by using random hashes. The hashes are used to reduce the counting problem into many separate discrete optimization problems. The optimization problems can then be solved by an NP-oracle such as commercial SAT solvers or integer linear programming (ILP) solvers. In particular, Ermon et al. showed that if the domain of integration is $\{0,1\}^n$ then it is possible to obtain a solution within a factor of $16$ of the optimal (a 16-approximation) by this technique. In many crucial counting tasks, such as computation of the partition function of the ferromagnetic Potts model, the domain of integration is naturally $\{0,1,\dots,q-1\}^n$, $q>2$, the hypergrid. The straightforward extension of Ermon et al.'s method allows a $q^2$-approximation for this problem. For large values of $q$, this is undesirable. In this paper, we show an improved technique to obtain an approximation factor of $4+O(1/q^2)$ to this problem. We are able to achieve this by using an idea of optimization over multiple bins of the hash functions, which can be easily implemented by inequality constraints, or even in an unconstrained way. Moreover, the burden on the NP-oracle is not increased by our method (an ILP solver can still be used). We provide experimental simulation results to support the theoretical guarantees of our algorithms.
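The flavor of the hashing reduction on $\{0,1\}^n$ can be conveyed by a toy sketch (our illustration, not the paper's multi-bin technique): the solution set $S$ is given explicitly here, and the `survives` check stands in for the NP-oracle call that a SAT/ILP solver would answer; $|S|$ is then estimated as $2^m$ for the largest $m$ at which $S$ typically survives $m$ random XOR constraints.

```python
import random

def parity(x, mask):
    """XOR of the bits of x selected by mask."""
    return bin(x & mask).count("1") & 1

def survives(S, constraints):
    """Stand-in for the NP-oracle: does any element of S satisfy
    every random parity constraint (mask, target_bit)?"""
    return any(all(parity(x, m) == b for m, b in constraints) for x in S)

def estimate_log2_count(S, n, trials=25, rng=random.Random(0)):
    """Largest m such that S survives m random XORs in most trials."""
    for m in range(1, n + 1):
        ok = sum(
            survives(S, [(rng.getrandbits(n), rng.getrandbits(1))
                         for _ in range(m)])
            for _ in range(trials)
        )
        if 2 * ok < trials:      # most draws kill S: 2^m has passed |S|
            return m - 1
    return n

S = set(random.Random(1).sample(range(1 << 12), 256))
print(estimate_log2_count(S, 12))  # typically 7 or 8, since |S| = 2^8
```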
Estimation is the computational task of recovering a hidden parameter $x$ associated with a distribution $D_x$, given a measurement $y$ sampled from the distribution. High dimensional estimation problems arise naturally in statistics, machine learning, and complexity theory. Many high dimensional estimation problems can be formulated as systems of polynomial equations and inequalities, and thus give rise to natural probability distributions over polynomial systems. Sum-of-squares proofs provide a powerful framework to reason about polynomial systems, and further there exist efficient algorithms to search for low-degree sum-of-squares proofs. Understanding and characterizing the power of sum-of-squares proofs for estimation problems has been a subject of intense study in recent years. On one hand, there is a growing body of work utilizing sum-of-squares proofs for recovering solutions to polynomial systems when the system is feasible. On the other hand, a general technique referred to as pseudocalibration has been developed towards showing lower bounds on the degree of sum-of-squares proofs. Finally, the existence of sum-of-squares refutations of a polynomial system has been shown to be intimately connected to the existence of spectral algorithms. In this article we survey these developments.
We study the space complexity of solving the bias-regularized SVM problem in the streaming model. This is a classic supervised learning problem that has drawn lots of attention, including for developing fast algorithms for solving the problem approximately. One of the most widely used algorithms for approximately optimizing the SVM objective is Stochastic Gradient Descent (SGD), which requires only $O(\frac{1}{\lambda\epsilon})$ random samples, and which immediately yields a streaming algorithm that uses $O(\frac{d}{\lambda\epsilon})$ space. For related problems, better streaming algorithms are only known for smooth functions, unlike the SVM objective that we focus on in this work. We initiate an investigation of the space complexity both for finding an approximate optimum of this objective, and for the related "point estimation" problem of sketching the data set to evaluate the function value $F_\lambda$ on any query $(\theta, b)$. We show that, for both problems, for dimensions $d=1,2$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$, which is the complexity of SGD for strongly convex functions like the bias-regularized SVM, and which is known to be tight in general, even for $d=1$. We also prove polynomial lower bounds for both point estimation and optimization. In particular, for point estimation we obtain a tight bound of $\Theta(1/\sqrt{\epsilon})$ for $d=1$ and a nearly tight lower bound of $\widetilde{\Omega}(d/\epsilon^2)$ for $d = \Omega(\log(1/\epsilon))$. Finally, for optimization, we prove an $\Omega(1/\sqrt{\epsilon})$ lower bound for $d = \Omega(\log(1/\epsilon))$, and show similar bounds when $d$ is constant.
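The SGD baseline the abstract refers to can be sketched in a standard Pegasos-style form (our illustrative code, not the paper's streaming algorithms). The objective is $F_\lambda(\theta,b) = \frac{\lambda}{2}(\|\theta\|^2 + b^2) + \frac{1}{n}\sum_i \max(0,\, 1 - y_i(\theta \cdot x_i + b))$:

```python
import numpy as np

def sgd_svm(X, y, lam, T, seed=0):
    """SGD on the bias-regularized SVM objective F_lam; step size 1/(lam*t)
    is the standard choice for a lam-strongly convex objective."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta, b = np.zeros(d), 0.0
    for t in range(1, T + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)
        active = y[i] * (X[i] @ theta + b) < 1   # hinge term contributes
        theta -= eta * (lam * theta - (y[i] * X[i] if active else 0.0))
        b -= eta * (lam * b - (y[i] if active else 0.0))
    return theta, b

rng = np.random.default_rng(1)
y = rng.choice([-1.0, 1.0], size=200)
X = rng.normal(size=(200, 2))
X[:, 0] += 2.0 * y                       # make the classes separable
theta, b = sgd_svm(X, y, lam=0.1, T=5000)
print(np.mean(np.sign(X @ theta + b) == y))   # near 1.0 on this toy data
```

Storing the $O(\frac{1}{\lambda\epsilon})$ sampled points needed to run these updates is what yields the $O(\frac{d}{\lambda\epsilon})$-space streaming algorithm mentioned above.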
We consider the problem of scattering $n$ robots in a two-dimensional continuous space. As this problem is impossible to solve in a deterministic manner, all solutions must be probabilistic. We investigate the amount of randomness (that is, the number of random bits used by the robots) that is required to achieve scattering. We first prove that $n \log n$ random bits are necessary to scatter $n$ robots in any setting. Also, we give a sufficient condition for a scattering algorithm to be random bit optimal. As it turns out that previous solutions for scattering satisfy our condition, they are hence proved random bit optimal for the scattering problem. Then, we investigate the time complexity of scattering when strong multiplicity detection is not available. We prove that such algorithms cannot converge in constant time in the general case, nor in $o(\log\log n)$ rounds for random bit optimal scattering algorithms. However, we present a family of scattering algorithms that converge as fast as needed without using multiplicity detection. Also, we put forward a specific protocol of this family that is random bit optimal ($n \log n$ random bits are used) and time optimal ($\log\log n$ rounds are used). This improves the time complexity of previous results in the same setting by a $\log n$ factor. Aside from characterizing the random bit complexity of mobile robot scattering, our study also closes its time complexity gap with and without strong multiplicity detection (that is, $O(1)$ time complexity is only achievable when strong multiplicity detection is available, and it is possible to approach it as needed otherwise).
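As a toy illustration of why $n \log n$ random bits is the natural budget (our sketch, not the paper's protocol for anonymous robots in continuous space): with $2\lceil\log_2 n\rceil$ bits per robot there are about $n^2$ candidate positions, so by a birthday bound a single round leaves all $n$ robots at distinct positions with constant probability.

```python
import math
import random

def scatter(n, rng=random.Random(0)):
    """Each round, every robot draws 2*ceil(log2 n) bits to pick one of
    ~n^2 cells; repeat until all positions are distinct. The expected
    number of rounds is O(1), so ~n log n random bits are used overall."""
    bits_per_robot = 2 * max(1, math.ceil(math.log2(max(n, 2))))
    total_bits = 0
    while True:
        positions = [rng.getrandbits(bits_per_robot) for _ in range(n)]
        total_bits += n * bits_per_robot
        if len(set(positions)) == n:     # scattered: no two robots collide
            return positions, total_bits

positions, bits_used = scatter(100)
print(len(set(positions)), bits_used)    # 100 distinct positions, ~n log n bits
```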