Approximating the Influence of a monotone Boolean function in $O(\sqrt{n})$ query complexity

Posted by: Omri Weinstein
Publication date: 2011
Research field: Informatics Engineering
Paper language: English





The Total Influence (Average Sensitivity) of a discrete function is one of its fundamental measures. We study the problem of approximating the total influence of a monotone Boolean function $f: \{\pm 1\}^n \longrightarrow \{\pm 1\}$, which we denote by $I[f]$. We present a randomized algorithm that approximates the influence of such functions to within a multiplicative factor of $(1 \pm \epsilon)$ by performing $O\left(\frac{\sqrt{n}\log n}{I[f]}\,\mathrm{poly}(1/\epsilon)\right)$ queries. We also prove a lower bound of $\Omega\left(\frac{\sqrt{n}}{\log n \cdot I[f]}\right)$ on the query complexity of any constant-factor approximation algorithm for this problem (which holds for $I[f] = \Omega(1)$), hence showing that our algorithm is almost optimal in terms of its dependence on $n$. For general functions we give a lower bound of $\Omega\left(\frac{n}{I[f]}\right)$, which matches the complexity of a simple sampling algorithm.
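
As a point of reference for the bounds above, the following is a minimal Python sketch of the simple sampling baseline mentioned in the abstract, whose query complexity is $O(n/I[f])$; it is not the paper's $O(\sqrt{n}\log n / I[f])$-query algorithm, and the function and parameter names are illustrative.

```python
# Naive edge-sampling estimator for the total influence I[f]; a baseline
# sketch only, not the paper's algorithm. `f` maps tuples in {-1,+1}^n
# to {-1,+1} and is treated as a black-box oracle (two queries per sample).
import random

def estimate_influence_naive(f, n, num_samples):
    """Estimate I[f] = sum_i Pr_x[f(x) != f(x^(i))] by sampling random
    hypercube edges (a uniform point x and a uniform coordinate i)."""
    disagreements = 0
    for _ in range(num_samples):
        x = [random.choice((-1, 1)) for _ in range(n)]
        i = random.randrange(n)
        y = list(x)
        y[i] = -y[i]                     # flip coordinate i: a random edge
        if f(tuple(x)) != f(tuple(y)):
            disagreements += 1
    # A random coordinate is sensitive with probability I[f]/n, so rescale.
    return n * disagreements / num_samples

# Example: majority on 5 bits, a monotone function with I[f] = 15/8.
majority = lambda x: 1 if sum(x) > 0 else -1
print(estimate_influence_naive(majority, 5, 50000))
```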




Read also

We study the problem of maximizing a monotone $k$-submodular function $f$ under a knapsack constraint, where a $k$-submodular function is a natural generalization of a submodular function to $k$ dimensions. We present a deterministic $(\frac{1}{2}-\frac{1}{2e})$-approximation algorithm that evaluates $f$ $O(n^5 k^4)$ times.
Submodular maximization is a general optimization problem with a wide range of applications in machine learning (e.g., active learning, clustering, and feature selection). In large-scale optimization, the parallel running time of an algorithm is governed by its adaptivity, which measures the number of sequential rounds needed if the algorithm can execute polynomially many independent oracle queries in parallel. While low adaptivity is ideal, it is not sufficient for an algorithm to be efficient in practice: there are many applications of distributed submodular optimization where the number of function evaluations becomes prohibitively expensive. Motivated by these applications, we study the adaptivity and query complexity of submodular maximization. In this paper, we give the first constant-factor approximation algorithm for maximizing a non-monotone submodular function subject to a cardinality constraint $k$ that runs in $O(\log n)$ adaptive rounds and makes $O(n \log k)$ oracle queries in expectation. In our empirical study, we use three real-world applications to compare our algorithm with several benchmarks for non-monotone submodular maximization. The results demonstrate that our algorithm finds competitive solutions using significantly fewer rounds and queries.
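
To make the notion of adaptivity concrete, here is a minimal Python sketch in which each adaptive round consists of one batch of independent oracle queries that could be issued in parallel. The threshold-based loop and the toy coverage oracle are illustrative assumptions; this is not the paper's constant-factor algorithm for non-monotone functions and carries none of its guarantees.

```python
# Illustration of "adaptive rounds": each round issues one batch of oracle
# queries that are mutually independent, so they could run in parallel.
# The toy monotone coverage oracle and threshold loop are illustrative only.
def coverage(sets_by_elem, S):
    """f(S) = size of the union of the sets chosen by S (monotone submodular)."""
    return len(set().union(*(sets_by_elem[e] for e in S))) if S else 0

def threshold_rounds(f, ground, k, eps=0.2):
    # Round 1: one parallel batch of singleton queries fixes the top threshold.
    threshold = max(f({e}) for e in ground)
    S, rounds = set(), 1
    while threshold > eps and len(S) < k:
        # One adaptive round: every marginal-gain query below depends only on
        # the current S, not on the other queries in the same batch.
        gains = {e: f(S | {e}) - f(S) for e in ground - S}
        rounds += 1
        for e, g in gains.items():
            if g >= threshold and len(S) < k:
                S.add(e)
        threshold *= (1 - eps)           # lower the bar for the next round
    return S, rounds

data = {0: {1, 2}, 1: {2, 3}, 2: {4}, 3: {1, 4, 5}}
f = lambda S: coverage(data, S)
print(threshold_rounds(f, set(data), k=2))   # -> (selected set, adaptive rounds)
```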
We consider the well-studied problem of finding a perfect matching in $d$-regular bipartite graphs with $2n$ vertices and $m = nd$ edges. While the best-known algorithm for general bipartite graphs (due to Hopcroft and Karp) takes $O(m\sqrt{n})$ time, in regular bipartite graphs a perfect matching is known to be computable in $O(m)$ time. Very recently, the $O(m)$ bound was improved to $O(\min\{m, \frac{n^{2.5}\ln n}{d}\})$ expected time, an expression that is bounded by $\tilde{O}(n^{1.75})$. In this paper, we further improve this result by giving an $O(\min\{m, \frac{n^2\ln^3 n}{d}\})$ expected time algorithm for finding a perfect matching in regular bipartite graphs; as a function of $n$ alone, the algorithm takes expected time $O((n\ln n)^{1.5})$. To obtain this result, we design and analyze a two-stage sampling scheme that reduces the problem of finding a perfect matching in a regular bipartite graph to the same problem on a subsampled bipartite graph with $O(n\ln n)$ edges that has a perfect matching with high probability. The matching is then recovered using the Hopcroft-Karp algorithm. While the standard analysis of Hopcroft-Karp gives us an $\tilde{O}(n^{1.5})$ running time, we present a tighter analysis for our special case that results in the stronger $\tilde{O}(\min\{m, \frac{n^2}{d}\})$ time mentioned earlier. Our proof of correctness of this sampling scheme uses a new correspondence theorem between cuts and Hall's theorem ``witnesses'' for a perfect matching in a bipartite graph that we prove. We believe this theorem may be of independent interest; as another example application, we show that a perfect matching in the support of an $n \times n$ doubly stochastic matrix with $m$ non-zero entries can be found in expected time $\tilde{O}(m + n^{1.5})$.
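
The high-level recipe (sparsify, then match) can be sketched in a few lines of Python on top of networkx's Hopcroft-Karp implementation. The uniform edge sampling and the $4n\ln n$ budget below are simplifying assumptions for illustration; they are not the paper's two-stage sampling scheme or its analysis.

```python
# Sketch of "subsample to ~n ln n edges, then run Hopcroft-Karp", with a
# fallback to the full graph if the subsample has no perfect matching.
# Uniform sampling here is a simplification of the paper's two-stage scheme.
import math, random
import networkx as nx
from networkx.algorithms import bipartite

def sparse_perfect_matching(G, left):
    n = len(left)
    budget = int(4 * n * math.log(n + 2))               # ~n ln n edges kept
    kept = random.sample(list(G.edges()), min(budget, G.number_of_edges()))
    H = nx.Graph(kept)
    H.add_nodes_from(G)                                  # keep all vertices
    M = bipartite.hopcroft_karp_matching(H, top_nodes=left)
    if sum(1 for u in left if u in M) < n:               # rare: subsample failed
        M = bipartite.hopcroft_karp_matching(G, top_nodes=left)
    return {u: M[u] for u in left}

# Example: a 20-regular bipartite "circulant" graph on 2 * 60 vertices.
n, d = 60, 20
left = [("L", i) for i in range(n)]
G = nx.Graph(
    [(("L", i), ("R", (i + j) % n)) for i in range(n) for j in range(d)]
)
print(len(sparse_perfect_matching(G, left)))             # 60 matched pairs
```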
We study variants of Mastermind, a popular board game in which the objective is sequence reconstruction. In this two-player game, the so-called codemaker constructs a hidden sequence $H = (h_1, h_2, \ldots, h_n)$ of colors selected from an alphabet $\mathcal{A} = \{1, 2, \ldots, k\}$ (i.e., $h_i \in \mathcal{A}$ for all $i \in \{1, 2, \ldots, n\}$). The game then proceeds in turns, each of which consists of two parts: in turn $t$, the second player (the codebreaker) first submits a query sequence $Q_t = (q_1, q_2, \ldots, q_n)$ with $q_i \in \mathcal{A}$ for all $i$, and second receives feedback $\Delta(Q_t, H)$, where $\Delta$ is some agreed-upon function of distance between two sequences with $n$ components. The game terminates when $Q_t = H$, and the codebreaker seeks to end the game in as few turns as possible. Throughout we let $f(n,k)$ denote the smallest integer such that the codebreaker can determine any $H$ in $f(n,k)$ turns. We prove three main results: First, when $H$ is known to be a permutation of $\{1, 2, \ldots, n\}$, we prove that $f(n,n) \ge n - \log\log n$ for all sufficiently large $n$. Second, we show that Knuth's Minimax algorithm identifies any $H$ in at most $nk$ queries. Third, when feedback is not received until all queries have been submitted, we show that $f(n,k) = \Omega(n \log k)$.
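
For concreteness, here is a minimal Python sketch of the turn structure described above, assuming the simplest choice of feedback, $\Delta(Q, H) = |\{i : q_i = h_i\}|$. The position-by-position codebreaker is only an illustration (it is not Knuth's Minimax algorithm), but it also determines $H$ with roughly $nk$ queries.

```python
# Sketch of the codemaker/codebreaker protocol, assuming exact-match
# feedback Delta(Q, H) = number of agreeing positions. The strategy below
# is an illustrative position-by-position solver, not Knuth's Minimax.
def solve_mastermind(hidden, k):
    n, queries = len(hidden), 0

    def feedback(q):                      # codemaker's reply Delta(Q_t, H)
        nonlocal queries
        queries += 1
        return sum(1 for qi, hi in zip(q, hidden) if qi == hi)

    guess = [1] * n                       # baseline query: every color is 1
    base = feedback(guess)
    for i in range(n):
        for c in range(2, k + 1):
            trial = guess[:]
            trial[i] = c
            if feedback(trial) == base + 1:   # color c is correct at slot i
                guess[i] = c
                base += 1
                break                         # else h_i stays the baseline 1
    assert feedback(guess) == n               # final turn: Q_t = H ends the game
    return guess, queries                     # <= 2 + n*(k-1) queries in total

print(solve_mastermind([3, 1, 2, 2], k=3))    # -> ([3, 1, 2, 2], 8)
```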
Sorting a Permutation by Transpositions (SPbT) is an important problem in bioinformatics. In this paper, we improve the running time of the best known approximation algorithm for SPbT. We use the permutation tree data structure of Feng and Zhu and improve the running time of the 1.375-approximation algorithm for SPbT of Elias and Hartman to $O(n \log n)$. The previous running time of the Elias-Hartman algorithm was $O(n^2)$.