
Direct Sum Testing: The General Case

Published by: Konstantin Golubev
Publication date: 2019
Research field: Informatics Engineering
Language: English





A function $f:[n_1]\times\dots\times[n_d]\to\mathbb{F}_2$ is a direct sum if it is of the form $f\left(a_1,\dots,a_d\right) = f_1(a_1)\oplus\dots\oplus f_d(a_d)$ for some functions $f_i:[n_i]\to\mathbb{F}_2$, $i=1,\dots,d$, where $n_1,\dots,n_d\in\mathbb{N}$. We present a $4$-query test which distinguishes between direct sums and functions that are far from them. The test relies on the BLR linearity test (Blum, Luby, Rubinfeld, 1993) and on an agreement test which slightly generalizes the direct product test (Dinur, Steurer, 2014). In multiplicative $\pm 1$ notation, our result reads as follows. A $d$-dimensional tensor with $\pm 1$ entries is called a tensor product if it is a tensor product of $d$ vectors with $\pm 1$ entries, or equivalently, if it has rank $1$. The presented tests can then be read as tests distinguishing between tensor products and tensors that are far from being tensor products. We also present a different test, which queries the function at most $(d+2)$ times but is easier to analyze.
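
As a concrete illustration of the definition (a sketch only, not the paper's 4-query test), the following Python snippet builds a direct sum from component functions $f_1,\dots,f_d$ and checks that, in $\pm 1$ notation, it coincides with the rank-$1$ tensor product of the corresponding sign vectors. All names and the chosen sizes are illustrative.

```python
# Sketch only: illustrates the definition of a direct sum over F_2 and its
# equivalence, in +/-1 notation, to a rank-1 tensor (a tensor product of d
# sign vectors). This is NOT the paper's 4-query test.
import itertools
import numpy as np

def direct_sum_table(fs):
    """Tabulate f(a_1,...,a_d) = f_1(a_1) xor ... xor f_d(a_d) over [n_1] x ... x [n_d]."""
    ns = [len(fi) for fi in fs]
    table = np.zeros(ns, dtype=int)
    for idx in itertools.product(*(range(n) for n in ns)):
        table[idx] = sum(int(fs[i][idx[i]]) for i in range(len(fs))) % 2
    return table

def to_pm1(values):
    """Map F_2 values {0,1} to multiplicative {+1,-1}."""
    return 1 - 2 * np.asarray(values)

# d = 3 random component functions f_i : [n_i] -> F_2 (sizes chosen arbitrarily).
rng = np.random.default_rng(0)
fs = [rng.integers(0, 2, size=n) for n in (4, 3, 5)]

# The +/-1 version of the direct sum equals the outer (tensor) product of the
# +/-1 versions of the component functions, i.e. a rank-1 tensor.
tensor = to_pm1(direct_sum_table(fs))
outer = to_pm1(fs[0])
for fi in fs[1:]:
    outer = np.multiply.outer(outer, to_pm1(fi))
assert np.array_equal(tensor, outer)
```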


Read also

In this work, we show the first worst-case to average-case reduction for the classical $k$-SUM problem. A $k$-SUM instance is a collection of $m$ integers, and the goal of the $k$-SUM problem is to find a subset of $k$ elements that sums to $0$. In the average-case version, the $m$ elements are chosen uniformly at random from some interval $[-u,u]$. We consider the total setting where $m$ is sufficiently large (with respect to $u$ and $k$), so that we are guaranteed (with high probability) that solutions must exist. Much of the appeal of $k$-SUM, in particular connections to problems in computational geometry, extends to the total setting. The best known algorithm in the average-case total setting is due to Wagner (following the approach of Blum-Kalai-Wasserman), and achieves a run-time of $u^{O(1/log k)}$. This beats the known (conditional) lower bounds for worst-case $k$-SUM, raising the natural question of whether it can be improved even further. However, in this work, we show a matching average-case lower bound by showing a reduction from worst-case lattice problems, thus introducing a new family of techniques into the field of fine-grained complexity. In particular, we show that any algorithm solving average-case $k$-SUM on $m$ elements in time $u^{o(1/log k)}$ will give a super-polynomial improvement in the complexity of algorithms for lattice problems.
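
For readers unfamiliar with the problem, here is a minimal brute-force sketch of $k$-SUM in the average-case total setting described above. The concrete parameters are illustrative only, and this is of course neither Wagner's $u^{O(1/\log k)}$ algorithm nor the worst-case to average-case reduction.

```python
# Sketch only: the k-SUM problem by brute force, on an average-case instance
# drawn uniformly from [-u, u] as in the abstract. This is neither Wagner's
# u^{O(1/log k)} algorithm nor the worst-case to average-case reduction.
import itertools
import random

def k_sum(values, k):
    """Return the first k elements of `values` summing to 0, or None."""
    for combo in itertools.combinations(values, k):
        if sum(combo) == 0:
            return combo
    return None

# Average-case total setting: m chosen large enough (relative to u and k) that
# a solution exists with high probability; the concrete numbers are illustrative.
u, m, k = 1000, 200, 3
instance = [random.randint(-u, u) for _ in range(m)]
print(k_sum(instance, k))
```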
The reassembling of a simple connected graph G = (V,E) is an abstraction of a problem arising in earlier studies of network analysis. The reassembling process has a simple formulation (there are several equivalent formulations) relative to a binary tree B (reassembling tree), with root node at the top and $n$ leaf nodes at the bottom, where every cross-section corresponds to a partition of V such that:
- the bottom (or first) cross-section (all the leaves) is the finest partition of V with n one-vertex blocks,
- the top (or last) cross-section (the root) is the coarsest partition with a single block, the entire set V,
- a node (or block) in an intermediate cross-section (or partition) is the result of merging its two children nodes (or blocks) in the cross-section (or partition) below it.
The maximum edge-boundary degree encountered during the reassembling process is what we call the alpha-measure of the reassembling, and the sum of all edge-boundary degrees is its beta-measure. The alpha-optimization (resp. beta-optimization) of the reassembling of G is to determine a reassembling tree B that minimizes its alpha-measure (resp. beta-measure). There are different forms of reassembling. In an earlier report, we studied linear reassembling, which is the case when the height of B is (n-1). In this report, we study balanced reassembling, when B has height [log n]. The two main results in this report are the NP-hardness of alpha-optimization and beta-optimization of balanced reassembling. The first result is obtained by a sequence of polynomial-time reductions from minimum bisection of graphs (known to be NP-hard), and the second by a sequence of polynomial-time reductions from clique cover of graphs (known to be NP-hard).
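
To make the two measures concrete, the sketch below computes the edge-boundary degree of a block and the alpha- and beta-measures along a given merge order. The encoding of the reassembling tree as a flat list of pairwise merges is a hypothetical simplification and not taken from the report.

```python
# Sketch only: the edge-boundary degree of a block (edges with exactly one
# endpoint inside it) and the alpha-/beta-measures of one reassembling order.
# The `merges` encoding of the reassembling tree is a hypothetical simplification.
def boundary_degree(block, edges):
    """Number of graph edges with exactly one endpoint in `block`."""
    return sum(1 for (a, b) in edges if (a in block) != (b in block))

def alpha_beta(vertices, edges, merges):
    """Max (alpha) and sum (beta) of edge-boundary degrees of all blocks
    encountered while merging leaf blocks according to `merges`."""
    blocks = [{v} for v in vertices]
    degrees = [boundary_degree(b, edges) for b in blocks]  # leaf blocks
    for i, j in merges:                                    # merge block j into block i
        blocks[i] |= blocks[j]
        blocks[j] = set()
        degrees.append(boundary_degree(blocks[i], edges))
    return max(degrees), sum(degrees)

# Example: a 4-cycle, reassembled by merging {0},{1}, then {2},{3}, then the two pairs.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(alpha_beta(V, E, [(0, 1), (2, 3), (0, 2)]))  # -> (2, 12)
```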
Dawar and Wilsenach (ICALP 2020) introduce the model of symmetric arithmetic circuits and show an exponential separation between the sizes of symmetric circuits for computing the determinant and the permanent. The symmetry restriction is that the circuits which take a matrix input are unchanged by a permutation applied simultaneously to the rows and columns of the matrix. Under such restrictions we have polynomial-size circuits for computing the determinant but no subexponential size circuits for the permanent. Here, we consider a more stringent symmetry requirement, namely that the circuits are unchanged by arbitrary even permutations applied separately to rows and columns, and prove an exponential lower bound even for circuits computing the determinant. The result requires substantial new machinery. We develop a general framework for proving lower bounds for symmetric circuits with restricted symmetries, based on a new support theorem and new two-player restricted bijection games. These are applied to the determinant problem with a novel construction of matrices that are bi-adjacency matrices of graphs based on the CFI construction. Our general framework opens the way to exploring a variety of symmetry restrictions and studying trade-offs between symmetry and other resources used by arithmetic circuits.
We extend the definitions of complexity measures of functions to domains such as the symmetric group. The complexity measures we consider include degree, approximate degree, decision tree complexity, sensitivity, block sensitivity, and a few others. We show that these complexity measures are polynomially related for the symmetric group and for many other domains. To show that all measures but sensitivity are polynomially related, we generalize classical arguments of Nisan and others. To add sensitivity to the mix, we reduce to Huang's sensitivity theorem using pseudo-characters, which witness the degree of a function. Using similar ideas, we extend the characterization of Boolean degree 1 functions on the symmetric group due to Ellis, Friedgut and Pilpel to the perfect matching scheme. As another application of our ideas, we simplify the characterization of maximum-size $t$-intersecting families in the symmetric group and the perfect matching scheme.
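
For orientation, here is a sketch of one of the classical measures being generalized, sensitivity, stated on the Boolean hypercube {0,1}^n rather than the symmetric group; the paper's extension of such measures to other domains is not reproduced here.

```python
# Sketch only: one classical measure (sensitivity) on the Boolean hypercube
# {0,1}^n; the paper's extension of such measures to the symmetric group and
# other domains is not reproduced here.
import itertools

def sensitivity_at(f, x):
    """Number of coordinates whose flip changes f at the point x."""
    flip = lambda x, i: x[:i] + (1 - x[i],) + x[i + 1:]
    return sum(f(flip(x, i)) != f(x) for i in range(len(x)))

def sensitivity(f, n):
    """Maximum sensitivity of f over all points of {0,1}^n."""
    return max(sensitivity_at(f, x) for x in itertools.product((0, 1), repeat=n))

# Example: OR on 3 bits has sensitivity 3, attained at the all-zero input.
print(sensitivity(lambda x: int(any(x)), 3))  # -> 3
```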
Harry Buhrman, 1998
The incompressibility method is an elementary yet powerful proof technique. It has been used successfully in many areas. To further demonstrate its power and elegance we exhibit new simple proofs using the incompressibility method.