
New Applications of the Incompressibility Method: Part II

Added by: Tao Jiang
Publication date: 1998
Language: English
Authors: Harry Buhrman





The incompressibility method is an elementary yet powerful proof technique. It has been used successfully in many areas. To further demonstrate its power and elegance, we exhibit new simple proofs using the incompressibility method.
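To recall the flavour of such arguments (a standard illustration, not taken from this paper): there are $2^n$ strings of length $n$ but only $2^n - 1$ programs of length less than $n$, so for every $n$ at least one string $x$ satisfies $C(x) ≥ n$, and at least a $1 - 2^{-c}$ fraction of the strings of length $n$ satisfy $C(x) ≥ n - c$. A typical incompressibility proof fixes such an incompressible $x$ and derives a contradiction from any structure that would allow $x$ to be described more succinctly.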


Related research

The reassembling of a simple connected graph G = (V,E) is an abstraction of a problem arising in earlier studies of network analysis. The reassembling process has a simple formulation (there are several equivalent formulations) relative to a binary tree B (the reassembling tree), with the root node at the top and $n$ leaf nodes at the bottom, where every cross-section corresponds to a partition of V such that:
- the bottom (or first) cross-section (all the leaves) is the finest partition of V, with n one-vertex blocks;
- the top (or last) cross-section (the root) is the coarsest partition, with a single block, the entire set V;
- a node (or block) in an intermediate cross-section (or partition) is the result of merging its two children nodes (or blocks) in the cross-section (or partition) below it.
The maximum edge-boundary degree encountered during the reassembling process is what we call the alpha-measure of the reassembling, and the sum of all edge-boundary degrees is its beta-measure. The alpha-optimization (resp. beta-optimization) of the reassembling of G is to determine a reassembling tree B that minimizes its alpha-measure (resp. beta-measure). There are different forms of reassembling. In an earlier report, we studied linear reassembling, which is the case when the height of B is (n-1). In this report, we study balanced reassembling, when B has height ⌈log n⌉. The two main results in this report are the NP-hardness of alpha-optimization and beta-optimization of balanced reassembling. The first result is obtained by a sequence of polynomial-time reductions from minimum bisection of graphs (known to be NP-hard), and the second by a sequence of polynomial-time reductions from clique cover of graphs (known to be NP-hard).
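For concreteness, the following Python sketch (our own illustration, not code from the report; the nested-pair tree encoding and the names boundary_degree and measures are assumptions, and counting the leaf blocks toward the beta-measure is a modelling choice) computes the two measures for a given reassembling tree: the edge-boundary degree of a block is the number of edges of G leaving it, the alpha-measure is the maximum such degree over all blocks formed, and the beta-measure is their sum.

    def boundary_degree(block, edges):
        # Number of edges of G with exactly one endpoint inside `block`.
        return sum(1 for (u, v) in edges if (u in block) != (v in block))

    def measures(tree, edges):
        # `tree` is a leaf vertex, or a pair (left_subtree, right_subtree).
        # Returns (block, alpha, beta) for the subtree: its vertex set, the maximum
        # edge-boundary degree seen in it, and the sum of those degrees.
        if not isinstance(tree, tuple):                 # leaf: one-vertex block
            block = frozenset([tree])
            d = boundary_degree(block, edges)
            return block, d, d                          # leaf blocks counted toward beta (assumption)
        left_block, left_alpha, left_beta = measures(tree[0], edges)
        right_block, right_alpha, right_beta = measures(tree[1], edges)
        block = left_block | right_block                # merge the two children blocks
        d = boundary_degree(block, edges)
        return block, max(left_alpha, right_alpha, d), left_beta + right_beta + d

    # Path graph 1-2-3-4 with the balanced reassembling tree ((1,2),(3,4)).
    edges = [(1, 2), (2, 3), (3, 4)]
    _, alpha, beta = measures(((1, 2), (3, 4)), edges)
    print(alpha, beta)                                  # prints: 2 8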
In analogy with the regularity lemma of Szemeredi, regularity lemmas for polynomials shown by Green and Tao (Contrib. Discrete Math. 2009) and by Kaufman and Lovett (FOCS 2008) modify a given collection of polynomials calF = {P_1,...,P_m} to a new collection calF' so that the polynomials in calF' are pseudorandom. These lemmas have various applications, such as (special cases of) Reed-Muller testing and worst-case to average-case reductions for polynomials. However, the transformation from calF to calF' is not algorithmic for either regularity lemma. We define new notions of regularity for polynomials, which are analogous to the above, but which allow for an efficient algorithm to compute the pseudorandom collection calF'. In particular, when the field is of high characteristic, in polynomial time we can refine calF into calF' where every nonzero linear combination of polynomials in calF' has desirably small Gowers norm. Using the algorithmic regularity lemmas, we show that if a polynomial P of degree d is within (normalized) Hamming distance 1 - 1/|F| - eps of some unknown polynomial of degree k over a prime field F (for k < d < |F|), then there is an efficient algorithm for finding a degree-k polynomial Q which is within distance 1 - 1/|F| - eta of P, for some eta depending on eps. This can be thought of as decoding the Reed-Muller code of order k beyond the list decoding radius (finding one close codeword), when the received word P itself is a polynomial of degree d (with k < d < |F|). We also obtain an algorithmic version of the worst-case to average-case reductions by Kaufman and Lovett. They show that if a polynomial of degree d can be weakly approximated by a polynomial of lower degree, then it can be computed exactly using a collection of polynomials of degree at most d-1. We give an efficient (randomized) algorithm to find this collection.
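To make "small Gowers norm" concrete, here is a small Monte Carlo estimator (our own sketch; the function name gowers_norm_estimate, the field size and the sample count are assumptions) of the U^d norm of e_p(f) for a polynomial f over a prime field F_p. A polynomial of degree below d has U^d norm exactly 1, whereas a pseudorandom polynomial of degree at least d has small U^d norm, which is the property required of the refined collection calF'.

    import itertools, math, random

    p, n = 5, 3        # prime field F_p and number of variables (assumed for the example)

    def rand_vec():
        return [random.randrange(p) for _ in range(n)]

    def gowers_norm_estimate(f, d, samples=5000):
        # Monte Carlo estimate of the Gowers U^d norm of e_p(f) for f: F_p^n -> F_p:
        # ||e_p(f)||_{U^d}^{2^d} = E_{x,h_1..h_d} e_p( sum_{w in {0,1}^d} (-1)^|w| f(x + w.h) ).
        total = 0.0
        for _ in range(samples):
            x = rand_vec()
            hs = [rand_vec() for _ in range(d)]
            acc = 0
            for w in itertools.product((0, 1), repeat=d):
                point = [(x[i] + sum(w[j] * hs[j][i] for j in range(d))) % p for i in range(n)]
                acc += (-1 if sum(w) % 2 else 1) * f(point)
            total += math.cos(2 * math.pi * (acc % p) / p)   # real part of e_p(acc)
        return max(total / samples, 0.0) ** (1.0 / 2 ** d)

    quad  = lambda v: (v[0] * v[1] + v[2] * v[2]) % p        # degree 2: U^3 norm is exactly 1
    cubic = lambda v: (v[0] * v[1] * v[2] + v[0]) % p        # degree 3: noticeably smaller U^3 norm
    print(gowers_norm_estimate(quad, 3), gowers_norm_estimate(cubic, 3))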
For every total recursive time bound $t$, a constant fraction of all compressible (low Kolmogorov complexity) strings is $t$-bounded incompressible (high time-bounded Kolmogorov complexity); there are uncountably many infinite sequences of which every initial segment of length $n$ is compressible to $log n$ yet $t$-bounded incompressible below ${1/4}n - log n$; and there are countably infinitely many recursive infinite sequences of which every initial segment is similarly $t$-bounded incompressible. These results are related to, but different from, Barzdins' lemma.
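For context, a paraphrase of that lemma (our wording, not the paper's): for the characteristic sequence chi of any recursively enumerable set,

    C(chi_{1..n} | n) ≤ log n + c   for all n, with c a constant depending on the set,

and there is a recursively enumerable set for which C(chi_{1..n}) ≥ log n holds for all n. The results above instead concern the time-bounded complexity of such log n-compressible sequences.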
Dempster-Shafer Theory (DST) generalizes Bayesian probability theory, offering useful additional information, but suffers from a high computational burden. A lot of work has been done to reduce the complexity of the computations used in information fusion with Dempster's rule. The main approaches exploit either the structure of Boolean lattices or the information contained in belief sources. Each has its merits depending on the situation. In this paper, we propose sequences of graphs for the computation of the zeta and Möbius transformations that optimally exploit both the structure of distributive semilattices and the information contained in belief sources. We call them the Efficient Möbius Transformations (EMT). We show that the complexity of the EMT is always lower than the complexity of algorithms that consider the whole lattice, such as the Fast Möbius Transform (FMT), for all DST transformations. We then explain how to use them to fuse two belief sources. More generally, our EMTs apply to any function in any finite distributive lattice, focusing on a meet-closed or join-closed subset. This article extends our work published at the International Conference on Scalable Uncertainty Management (SUM). It clarifies that work, brings some minor corrections, and provides implementation details such as data structures and algorithms applied to DST.
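As background for the transformations being optimized, here is the standard whole-lattice computation (an FMT-style sketch of our own, not the EMT of the paper; the function names are assumptions): the zeta transform cumulates a mass function over the subset lattice and the Möbius transform inverts it, each in O(|Ω| 2^|Ω|) operations, which is exactly the whole-lattice cost that the EMT improves on by restricting to the support of the belief sources.

    import random

    def zeta_transform(m, n):
        # In-place subset-sum over the powerset of an n-element frame, indexed by bitmask:
        # b[S] = sum of m[T] over all T that are subsets of S (e.g., belief from mass in DST).
        b = list(m)                              # m has length 2**n
        for i in range(n):
            for S in range(1 << n):
                if S & (1 << i):
                    b[S] += b[S ^ (1 << i)]
        return b

    def mobius_transform(b, n):
        # Inverse of the zeta transform: recover m[S] from the cumulated values b[S].
        m = list(b)
        for i in range(n):
            for S in range(1 << n):
                if S & (1 << i):
                    m[S] -= m[S ^ (1 << i)]
        return m

    # Tiny round-trip check on a frame of 3 elements with a random mass function.
    n = 3
    mass = [random.random() for _ in range(1 << n)]
    back = mobius_transform(zeta_transform(mass, n), n)
    assert all(abs(x - y) < 1e-9 for x, y in zip(back, mass))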
Benjamin Grimmer, 2021
The first part of this work established the foundations of a radial duality between nonnegative optimization problems, inspired by the work of Renegar (2016). Here we utilize our radial duality theory to design and analyze projection-free optimization algorithms that operate by solving a radially dual problem. In particular, we consider radial subgradient, smoothing, and accelerated methods that are capable of solving a range of constrained convex and nonconvex optimization problems and that can scale up more efficiently than their classic counterparts. These algorithms enjoy the same benefits as their predecessors, avoiding Lipschitz continuity assumptions and costly orthogonal projections, in our newfound, broader context. Our radial duality further allows us to understand the effects and benefits of smoothness and growth conditions on the radial dual and consequently on our radial algorithms.
