The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that under quantum polynomial-time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS 02). We also discuss a connection between eDCP and Childs and van Dam's algorithm for generalized hidden shift problems (SODA 07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.
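For readers meeting these objects for the first time, the following sketch uses standard definitions; the parameters $n$, $q$, $N$, $D$ and the amplitude function $f$ are generic placeholders rather than the paper's exact choices.

```latex
% Search-LWE: given samples (a_i, b_i) with a_i uniform in Z_q^n and
% b_i = <a_i, s> + e_i (mod q) for small Gaussian errors e_i, find s.
%
% DCP over Z_N: given many states of the form below (fresh uniform x
% in each state), find the shift s:
\[
  \frac{1}{\sqrt{2}}\bigl(\lvert 0, x\rangle + \lvert 1, (x+s) \bmod N\rangle\bigr).
\]
% eDCP relaxes the two-term superposition to D terms weighted by f:
\[
  \frac{1}{\sqrt{\sum_{j}\lvert f(j)\rvert^{2}}}\;
  \sum_{j=0}^{D-1} f(j)\,\lvert j, (x + j\cdot s) \bmod N\rangle .
\]
% DCP is the special case D = 2 with uniform f; the extent of the
% extrapolation (D and the shape of f) tracks the LWE noise rate.
```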
In this work, we study a generalization of hidden subspace states to hidden coset states (first introduced by Aaronson and Christiano [STOC 12]). This notion was considered independently by Vidick and Zhang [Eurocrypt 21], in the context of proofs of quantum knowledge.
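As background, the two state families can be written out explicitly; these are the standard definitions (with the phase vector denoted $s'$ here), not text taken from the paper.

```latex
% Subspace state for a subspace A of F_2^n (Aaronson-Christiano):
\[
  \lvert A \rangle = \frac{1}{\sqrt{\lvert A\rvert}} \sum_{a \in A} \lvert a \rangle .
\]
% Hidden coset state: shift the support by s and attach phases
% determined by s':
\[
  \lvert A_{s,s'} \rangle = \frac{1}{\sqrt{\lvert A\rvert}}
  \sum_{a \in A} (-1)^{\langle a,\, s' \rangle}\, \lvert a + s \rangle .
\]
```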
How can multiple distributed entities collaboratively train a shared deep net on their private data while preserving privacy? This paper introduces InstaHide, a simple encryption of training images, which can be plugged into existing distributed deep learning pipelines.
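The abstract does not spell the scheme out; as a rough illustration of the mixup-plus-random-sign-flip idea behind InstaHide, here is a hedged sketch in which the function name, the image pool, and all parameter choices are illustrative rather than the paper's.

```python
import numpy as np

def instahide_encrypt(private_img, pool, k=4, rng=None):
    """Illustrative InstaHide-style encryption of one training image.

    Mixes the private image with k - 1 images drawn from a pool using
    random convex (mixup) weights, then applies a one-time random
    pixel-wise sign flip. Images are float arrays scaled to [-1, 1].
    """
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(pool), size=k - 1, replace=False)
    lams = rng.dirichlet(np.ones(k))  # random convex mixing weights
    mixed = lams[0] * private_img
    for lam, i in zip(lams[1:], idx):
        mixed = mixed + lam * pool[i]
    mask = rng.choice(np.array([-1.0, 1.0]), size=mixed.shape)
    return mask * mixed
```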
We study the hardness of the dihedral hidden subgroup problem. It is known that lattice problems reduce to it, and that it reduces to random subset sum with density $> 1$ and also to quantum sampling subset sum solutions. We examine a decision version …
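For context, the density mentioned here is the standard subset-sum parameter; the definition below is textbook background rather than the paper's own.

```latex
% Density of a subset-sum instance with weights a_1, ..., a_n:
\[
  d = \frac{n}{\log_2 \max_i a_i}.
\]
% Density > 1 is the regime where instances typically have many
% solutions, and where low-density lattice attacks do not apply.
```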
The key transform of the REESSE1+ asymmetric cryptosystem is $C_i = (A_i \cdot W^{\ell(i)})^d \bmod M$ with $\ell(i) \in \Omega = \{5, 7, \ldots, 2n+3\}$ for $i = 1, \ldots, n$, where $\ell(i)$ is called a lever function. In this paper, the authors give a simplified key transform…
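Since the transform is plain modular arithmetic, it is easy to state as code; a minimal sketch, assuming lists of the $A_i$ and $\ell(i)$ values (the function and variable names are mine):

```python
def reesse1plus_key_transform(A, ell, W, d, M):
    """Compute C_i = (A_i * W^ell(i))^d mod M for i = 1, ..., n.

    A:   the values A_1, ..., A_n
    ell: lever-function values ell(i) in Omega = {5, 7, ..., 2n + 3}
    W, d, M: remaining scheme parameters, with modulus M
    """
    return [pow(a * pow(W, l, M) % M, d, M) for a, l in zip(A, ell)]
```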
The Reward Prediction Error hypothesis proposes that phasic activity in the midbrain dopaminergic system reflects prediction errors needed for learning in reinforcement learning. Besides the well-documented association between dopamine and reward processing…
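In the standard temporal-difference formulation that the hypothesis builds on, the prediction error at time $t$ is the textbook quantity below (background, not a formula from this paper).

```latex
\[
  \delta_t = r_t + \gamma\, V(s_{t+1}) - V(s_t),
\]
% r_t: reward at time t; V: learned value estimate of a state;
% gamma in [0, 1): discount factor. The hypothesis is that phasic
% dopamine activity encodes delta_t.
```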