
Optimal Las Vegas reduction from one-way set reconciliation to error correction

Published by Djamal Belazzougui
Publication date: 2015
Research field: Informatics Engineering
Language: English





Suppose we have two players $A$ and $C$, where player $A$ has a string $s[0..u-1]$ and player $C$ has a string $t[0..u-1]$, and neither player knows the other's string. Assume that $s$ and $t$ are both over an integer alphabet $[\sigma]$, and that the first string contains $n$ non-zero entries. We wish to answer the following basic question: assuming that $s$ and $t$ differ in at most $k$ positions, how many bits does player $A$ need to send to player $C$ so that $C$ can recover $s$ with certainty? Further, how much time does player $A$ need to compute the sent bits, and how much time does player $C$ need to recover the string $s$? This problem has a number of applications, for example in databases, where each of the two parties possesses a set of $n$ key-value pairs with keys from the universe $[u]$ and values from $[\sigma]$, and usually $n \ll u$. In this paper, we show a time- and message-size-optimal Las Vegas reduction from this problem to the problem of systematic error correction of $k$ errors for strings of length $\Theta(n)$ over an alphabet of size $2^{\Theta(\log\sigma + \log(u/n))}$. The additional running time incurred by the reduction is linear randomized for player $A$ and linear deterministic for player $C$, and the correction works with certainty. When using the popular Reed-Solomon codes, the reduction gives a protocol that transmits $O(k(\log u + \log\sigma))$ bits and runs in time $O(n \cdot \mathrm{polylog}(n)(\log u + \log\sigma))$ for all values of $k$. The time is randomized for player $A$ (encoding time) and deterministic for player $C$ (decoding time). The message size is optimal whenever $k \leq (u\sigma)^{1-\Omega(1)}$.
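To make the error-correction view concrete, here is a toy Python sketch, not the paper's optimized protocol: the prime P, the parity-check choice, and the brute-force decoder are all illustrative assumptions. It treats $s$ and $t$ as length-$u$ vectors, so $e = s - t \pmod P$ is an "error" of weight at most $k$; $A$ sends $2k$ syndromes, and $C$ decodes the difference. The paper instead hashes the $n$ non-zero entries into $\Theta(n)$ buckets and plugs in near-linear-time systematic Reed-Solomon decoding.

```python
# Toy sketch of one-way reconciliation as syndrome decoding (illustrative,
# not the paper's protocol). Player A sends 2k Reed-Solomon-style syndromes
# of s over GF(P); player C subtracts its own syndromes to get the syndrome
# of e = s - t (weight <= k), decodes e, and recovers s = t + e exactly.
import random
from itertools import combinations

P = 10_007                    # prime > max(u, sigma); all arithmetic in GF(P)
u, sigma, k = 20, 50, 2

def syndromes(vec):
    # S_j = sum_i vec[i] * (i+1)^j mod P for j = 1..2k. Any 2k columns of this
    # parity check form an invertible (scaled Vandermonde) matrix, so every
    # weight-<=k error has a unique syndrome.
    return [sum(v * pow(i + 1, j, P) for i, v in enumerate(vec) if v) % P
            for j in range(1, 2 * k + 1)]

def solve_mod(A, b):
    # Gauss-Jordan elimination over GF(P); returns x with A x = b, or None.
    m = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(m):
        piv = next((r for r in range(c, m) if M[r][c] % P), None)
        if piv is None:
            return None
        M[c], M[piv] = M[piv], M[c]
        inv = pow(M[c][c], P - 2, P)
        M[c] = [x * inv % P for x in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [(x - f * y) % P for x, y in zip(M[r], M[c])]
    return [M[r][m] for r in range(m)]

def decode(syn):
    # Find the unique e of weight <= k with these syndromes. Brute force over
    # supports: exponential, purely for illustration -- the paper's reduction
    # uses fast systematic Reed-Solomon decoding here instead.
    for w in range(k + 1):
        for supp in combinations(range(u), w):
            A = [[pow(i + 1, j, P) for i in supp] for j in range(1, w + 1)]
            vals = solve_mod(A, syn[:w]) if w else []
            if vals is None:
                continue
            e = [0] * u
            for i, v in zip(supp, vals):
                e[i] = v % P
            if syndromes(e) == syn:       # verify against all 2k syndromes
                return e
    return None

# Demo: A holds s, C holds t differing in at most k positions.
random.seed(1)
s = [random.randrange(sigma) for _ in range(u)]
t = list(s)
for i in random.sample(range(u), k):
    t[i] = random.randrange(sigma)

message = syndromes(s)                                    # A -> C: 2k symbols
diff = [(a - b) % P for a, b in zip(message, syndromes(t))]
e = decode(diff)                                          # syndrome of s - t
recovered = [(ti + ei) % P for ti, ei in zip(t, e)]
assert recovered == s                                     # recovery is certain
```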




Read also

Computing the convolution $A\star B$ of two length-$n$ integer vectors $A,B$ is a core problem in several disciplines. It frequently comes up in algorithms for Knapsack, $k$-SUM, All-Pairs Shortest Paths, and string pattern matching problems. For these applications it typically suffices to compute convolutions of nonnegative vectors. This problem can be classically solved in time $O(n\log n)$ using the Fast Fourier Transform. However, the involved vectors are often sparse, and hence one could hope for output-sensitive algorithms to compute nonnegative convolutions. This question was raised by Muthukrishnan and solved by Cole and Hariharan (STOC 02) by a randomized algorithm running in near-linear time in the (unknown) output size $t$. Chan and Lewenstein (STOC 15) presented a deterministic algorithm with a $2^{O(\sqrt{\log t\cdot\log\log n})}$ overhead in running time and the additional assumption that a small superset of the output is given; this assumption was later removed by Bringmann and Nakos (ICALP 21). In this paper we present the first deterministic near-linear-time algorithm for computing sparse nonnegative convolutions. This immediately gives improved deterministic algorithms for the state of the art of output-sensitive Subset Sum, block-mass pattern matching, $N$-fold Boolean convolution, and others, matching up to log-factors the fastest known randomized algorithms for these problems. Our algorithm is a blend of algebraic and combinatorial ideas and techniques. Additionally, we provide two fast Las Vegas algorithms for computing sparse nonnegative convolutions. In particular, we present a simple $O(t\log^2 t)$ time algorithm, which is an accessible alternative to Cole and Hariharan's algorithm. We further refine this new algorithm to run in Las Vegas time $O(t\log t\cdot\log\log t)$, matching the running time of the dense case apart from the $\log\log t$ factor.
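The sketch below illustrates the hashing idea behind output-sensitive sparse convolution; it is a hedged toy, not the algorithm of any one cited paper. It folds the vectors modulo a growing prime $m$, convolves cyclically, and reads off output entries from per-bucket moments. For nonnegative vectors the Cauchy-Schwarz test $p_1^2 = p_0 p_2$ is tight exactly when a bucket's support is a single index, so every recovered entry is certified correct, making the loop Las Vegas: always correct output, random running time only.

```python
# Toy output-sensitive sparse nonnegative convolution (illustrative). The
# cyclic convolutions are exact O(m^2) loops where a real implementation
# would use FFT/NTT in O(m log m), and m simply doubles between rounds.

def next_prime(n):
    # Smallest prime > n (trial division; fine for a toy).
    n += 1
    while any(n % d == 0 for d in range(2, int(n ** 0.5) + 1)):
        n += 1
    return n

def cyclic_conv(x, y):
    m, out = len(x), [0] * len(x)
    for i, xi in enumerate(x):
        if xi:
            for j, yj in enumerate(y):
                out[(i + j) % m] += xi * yj
    return out

def fold(vec, m, weight):
    # Bucket r accumulates weight(i) * vec[i] over all indices i = r (mod m).
    out = [0] * m
    for i, v in vec.items():
        out[i % m] += weight(i) * v
    return out

def sparse_conv_nonneg(A, B):
    # A, B: dicts {index: positive integer}. Returns C = A * B as a dict.
    total = sum(A.values()) * sum(B.values())      # exact total mass of C
    C, m = {}, 8
    while sum(C.values()) < total:
        m = next_prime(2 * m)
        # Per-bucket moments p_d[r] = sum over k = r (mod m) of k^d * C[k],
        # assembled from folds of A and B, using (i+j)^2 = i^2 + 2ij + j^2.
        a0, a1, a2 = (fold(A, m, w) for w in (lambda i: 1, lambda i: i,
                                              lambda i: i * i))
        b0, b1, b2 = (fold(B, m, w) for w in (lambda j: 1, lambda j: j,
                                              lambda j: j * j))
        p0 = cyclic_conv(a0, b0)
        p1 = [x + y for x, y in zip(cyclic_conv(a1, b0), cyclic_conv(a0, b1))]
        p2 = [x + 2 * y + z for x, y, z in zip(
            cyclic_conv(a2, b0), cyclic_conv(a1, b1), cyclic_conv(a0, b2))]
        # Peel off recovered entries, leaving the moments of the residual.
        for kk, v in C.items():
            r = kk % m
            p0[r] -= v; p1[r] -= kk * v; p2[r] -= kk * kk * v
        for r in range(m):
            # Cauchy-Schwarz is tight iff the bucket holds a single index.
            if p0[r] > 0 and p1[r] * p1[r] == p0[r] * p2[r]:
                kk = p1[r] // p0[r]
                C[kk] = C.get(kk, 0) + p0[r]
        # m doubles each round, so eventually every bucket is isolated.
    return C

# Huge indices are fine: cost tracks the number of nonzeros, not the length.
A = {0: 3, 5: 1, 1_000_000: 2}
B = {2: 4, 999: 5}
assert sparse_conv_nonneg(A, B) == {2: 12, 7: 4, 999: 15, 1004: 5,
                                    1_000_002: 8, 1_000_999: 10}
```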
Dealing with the NP-complete Dominating Set problem on undirected graphs, we demonstrate the power of data reduction by preprocessing from a theoretical as well as a practical side. In particular, we prove that Dominating Set restricted to planar graphs has a so-called problem kernel of linear size, achieved by two simple and easy-to-implement reduction rules. Moreover, having implemented our reduction rules, first experiments indicate the impressive practical potential of these rules. Thus, this work seems to open up a new and promising way to cope with one of the most important problems in graph theory and combinatorial optimization.
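To illustrate the data-reduction paradigm, here is a small Python sketch using simple classical degree rules; these are NOT the two neighborhood-based rules from the paper, just an accessible example of the same "shrink the instance, keep the answer" idea.

```python
# Safe reduction rules for Dominating Set (illustrative degree rules):
# (a) if an undominated vertex v has exactly one neighbor u, take u (safe,
#     since N[v] is a subset of N[u], so any solution using v can swap to u);
# (b) an undominated isolated vertex must take itself;
# (c) a dominated vertex of degree 0 can never help and is deleted.
# What remains after exhaustive application is a smaller equivalent instance.

def reduce_dominating_set(adj):
    # adj: dict {vertex: set of neighbors} for a simple undirected graph.
    adj = {v: set(nb) for v, nb in adj.items()}    # work on a local copy
    dominated, solution = set(), set()

    def take(u):
        # Put u into the dominating set: N[u] is now dominated; delete u.
        solution.add(u)
        dominated.update(adj[u] | {u})
        for w in adj.pop(u):
            adj[w].discard(u)

    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v not in adj:
                continue                           # deleted earlier this pass
            if len(adj[v]) == 0:
                if v in dominated:
                    del adj[v]                     # rule (c)
                else:
                    take(v)                        # rule (b)
                changed = True
            elif len(adj[v]) == 1 and v not in dominated:
                take(next(iter(adj[v])))           # rule (a)
                changed = True
    return adj, dominated, solution                # kernel + partial answer

# Example: a star with a pendant path collapses completely under the rules.
graph = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0, 4}, 4: {3}}
kernel, dom, sol = reduce_dominating_set(graph)
print(sol, kernel)    # {0, 3} chosen, empty kernel: the instance is solved
```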
We present a class of numerical algorithms which adapt a quantum error correction scheme to a channel model. Given an encoding and a channel model, it was previously shown that the quantum operation that maximizes the average entanglement fidelity may be calculated by a semidefinite program (SDP), which is a convex optimization. While optimal, this recovery operation is computationally difficult for long codes. Furthermore, the optimal recovery operation has no structure beyond the completely positive trace-preserving (CPTP) constraint. We derive methods to generate structured channel-adapted error recovery operations. Specifically, each recovery operation begins with a projective error syndrome measurement. The algorithms to compute the structured recovery operations are more scalable than the SDP and yield recovery operations with an intuitive physical form. Using Lagrange duality, we derive performance bounds to certify near-optimality.
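For readers unfamiliar with the SDP being referenced, the following cvxpy sketch shows its generic shape only: optimize a linear fidelity functional over the Choi matrix of the recovery map subject to the CPTP constraints. The matrix M here is a random Hermitian PLACEHOLDER; in the actual problem it is assembled from the encoding and the channel's Kraus operators, a construction this sketch does not reproduce.

```python
# Schematic of a channel-adapted recovery SDP (structure only; M is a
# placeholder, not the real fidelity matrix). Variables: the Choi matrix X of
# the recovery map, indexed as (input i, output a). Constraints: X >= 0
# (complete positivity) and Tr_out X = I (trace preservation).
import numpy as np
import cvxpy as cp

d = 2                                    # system dimension (one qubit)
rng = np.random.default_rng(0)
R = rng.standard_normal((d * d, d * d)) + 1j * rng.standard_normal((d * d, d * d))
M = (R + R.conj().T) / 2                 # placeholder Hermitian objective

X = cp.Variable((d * d, d * d), hermitian=True)
constraints = [X >> 0]                   # complete positivity
for i in range(d):                       # trace preservation, written out as
    for j in range(d):                   # a partial trace over the output
        s = sum(X[i * d + a, j * d + a] for a in range(d))
        constraints.append(s == (1 if i == j else 0))

prob = cp.Problem(cp.Maximize(cp.real(cp.trace(M @ X))), constraints)
prob.solve(solver=cp.SCS)
print("optimal value:", prob.value)
```

The variable count grows as $d^4$, which is why the abstract calls the unstructured SDP "computationally difficult for long codes" and motivates the structured, syndrome-measurement-first recovery operations.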
Sisi Zhou, Liang Jiang (2019)
For a generic set of Markovian noise models, the estimation precision of a parameter associated with the Hamiltonian is limited by the $1/\sqrt{t}$ scaling, where $t$ is the total probing time, in which case the maximal possible quantum improvement in the asymptotic limit of large $t$ is restricted to a constant factor. However, situations arise where the constant-factor improvement could be significant, yet no effective quantum strategies are known. Here we propose an optimal approximate quantum error correction (AQEC) strategy asymptotically saturating the precision lower bound in the most general adaptive parameter estimation scheme, where arbitrary and frequent quantum controls are allowed. We also provide an efficient numerical algorithm for finding the optimal code. Finally, we consider highly biased noise and show that using the optimal AQEC strategy, strong noise is fully corrected, while the estimation precision depends only on the strength of the weak noise in the limiting case.
This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization at the receiver. Specifically, it is first shown that the optimum error correction code that minimizes the probability of bit error can be obtained by perfectly training a special autoencoder, where "perfectly" refers to converging to the global minimum. However, perfect training is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose utilizing turbo codes as an implicit regularization, i.e., using a concatenation of a turbo code and an autoencoder. It is empirically shown that this design gives nearly the same performance as the hypothetically perfectly trained autoencoder, and we also provide a theoretical proof of why that is so. The proposed coding method is as bandwidth efficient as the integrated (outer) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to sparsity in neural networks. Our results show that the proposed coding scheme at finite block lengths outperforms conventional turbo codes even for QPSK modulation. Furthermore, the proposed coding method can make one-bit quantization operational even for 16-QAM.
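A minimal PyTorch sketch of the one-bit-quantization setting follows; it covers only the autoencoder component (the paper's turbo-code concatenation and architecture details are omitted), and the straight-through estimator used to train through the zero-gradient sign() quantizer is an assumption for illustration, not necessarily how the paper trains.

```python
# Autoencoder channel code under a 1-bit receiver ADC: K info bits are mapped
# to N unit-power symbols, the AWGN channel output is quantized to its sign,
# and the decoder learns from the signs only. sign() has zero gradient, so
# the forward pass uses sign(y) while the backward pass passes gradients
# straight through (a straight-through estimator).
import torch
import torch.nn as nn

K, N, SNR_DB = 8, 16, 4.0
sigma = 10 ** (-SNR_DB / 20)             # noise std for unit-power symbols

class OneBitAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(K, 64), nn.ReLU(), nn.Linear(64, N))
        self.dec = nn.Sequential(nn.Linear(N, 64), nn.ReLU(), nn.Linear(64, K))

    def forward(self, bits):
        x = self.enc(bits)
        x = x / (x.pow(2).mean(dim=1, keepdim=True).sqrt() + 1e-8)  # unit power
        y = x + sigma * torch.randn_like(x)                 # AWGN channel
        q = y + (torch.sign(y) - y).detach()                # 1-bit ADC + STE
        return self.dec(q)                                  # logits per bit

model = OneBitAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for step in range(2000):
    bits = torch.randint(0, 2, (256, K)).float()
    loss = loss_fn(model(bits), bits)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                    # rough bit-error-rate estimate
    bits = torch.randint(0, 2, (10_000, K)).float()
    ber = ((model(bits) > 0).float() != bits).float().mean()
print(f"BER at {SNR_DB} dB: {ber:.4f}")
```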