
Learning With Errors and Extrapolated Dihedral Cosets

Posted by: Elena Kirshanova
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.
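For orientation, a minimal sketch of the objects involved (our notation and simplifications, not quoted from the paper): an LWE instance provides samples $(\mathbf{a}_i, \langle \mathbf{a}_i, \mathbf{s}\rangle + e_i \bmod q)$ for a fixed secret $\mathbf{s}$ and small noise $e_i$, and asks to recover $\mathbf{s}$. In DCP one is given many two-register states of the form $\frac{1}{\sqrt{2}}(|0\rangle|x\rangle + |1\rangle|x+s\rangle)$ for uniformly random $x$ and a fixed hidden shift $s$, and must find $s$. In the extrapolated variant eDCP, the first register ranges over more than two values, so the states look roughly like $\sum_{j} f(j)\,|j\rangle|x + j\cdot s\rangle$ for some amplitude profile $f$; the width of the support of $f$, i.e. the extent of extrapolation, is what the reduction ties to the LWE noise rate.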




Read also

In this work, we study a generalization of hidden subspace states to hidden coset states (first introduced by Aaronson and Christiano [STOC 12]). This notion was considered independently by Vidick and Zhang [Eurocrypt 21], in the context of proofs of quantum knowledge from quantum money schemes. We explore unclonable properties of coset states and several applications:
- We show that assuming indistinguishability obfuscation (iO), hidden coset states possess a certain direct product hardness property, which immediately implies a tokenized signature scheme in the plain model. Previously, it was known only relative to an oracle, from a work of Ben-David and Sattath [QCrypt 17].
- Combining a tokenized signature scheme with extractable witness encryption, we give a construction of an unclonable decryption scheme in the plain model. The latter primitive was recently proposed by Georgiou and Zhandry [ePrint 20], who gave a construction relative to a classical oracle.
- We conjecture that coset states satisfy a certain natural (information-theoretic) monogamy-of-entanglement property. Assuming this conjecture is true, we remove the requirement for extractable witness encryption in our unclonable decryption construction, by relying instead on compute-and-compare obfuscation for the class of unpredictable distributions.
- Finally, we give a construction of a copy-protection scheme for pseudorandom functions (PRFs) in the plain model. Our scheme is secure either assuming iO, OWF, and extractable witness encryption, or assuming iO, OWF, compute-and-compare obfuscation for the class of unpredictable distributions, and the conjectured monogamy property mentioned above. This is the first example of a copy-protection scheme with provable security in the plain model for a class of functions that is not evasive.
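As a reference point, a brief sketch of the underlying states in standard notation (assumed, not taken from the abstract): for a subspace $A \le \mathbb{F}_2^n$, the subspace state is $|A\rangle = \frac{1}{\sqrt{|A|}} \sum_{a \in A} |a\rangle$, and a hidden coset state additionally applies shifts $s, s' \in \mathbb{F}_2^n$, giving $|A_{s,s'}\rangle = \frac{1}{\sqrt{|A|}} \sum_{a \in A} (-1)^{\langle a, s' \rangle} |a + s\rangle$; measuring in the standard basis yields an element of the coset $A + s$, while measuring in the Hadamard basis yields an element of $A^{\perp} + s'$.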
How can multiple distributed entities collaboratively train a shared deep net on their private data while preserving privacy? This paper introduces InstaHide, a simple encryption of training images, which can be plugged into existing distributed deep learning pipelines. The encryption is efficient, and applying it during training has only a minor effect on test accuracy. InstaHide encrypts each training image with a one-time secret key which consists of mixing a number of randomly chosen images and applying a random pixel-wise mask. Other contributions of this paper include: (a) using a large public dataset (e.g. ImageNet) for mixing during encryption, which improves security; (b) experimental results showing effectiveness in preserving privacy against known attacks with only minor effects on accuracy; (c) theoretical analysis showing that successfully attacking privacy requires attackers to solve a difficult computational problem; (d) demonstrating that use of the pixel-wise mask is important for security, since Mixup alone is shown to be insecure against some efficient attacks; (e) release of a challenge dataset: https://github.com/Hazelsuko07/InstaHide_Challenge. Our code is available at https://github.com/Hazelsuko07/InstaHide
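To make the encryption step concrete, here is a minimal Python sketch of the mixing-plus-mask idea described above; function and parameter names are illustrative, not taken from the InstaHide code, and the Dirichlet draw is a simplification of the actual coefficient sampling.

import numpy as np

def instahide_encrypt(private_img, public_pool, k=4, rng=None):
    # Minimal sketch of an InstaHide-style one-time encryption of a single
    # training image: mix it with k-1 other images, then flip pixel signs.
    rng = rng or np.random.default_rng()
    # k-1 images to mix with, drawn here from a public dataset (e.g. ImageNet).
    idx = rng.choice(len(public_pool), size=k - 1, replace=False)
    others = [public_pool[i] for i in idx]
    # Random non-negative mixing coefficients summing to 1 (Mixup-style).
    lam = rng.dirichlet(np.ones(k))
    mixed = lam[0] * private_img
    for coeff, img in zip(lam[1:], others):
        mixed = mixed + coeff * img
    # Random pixel-wise sign mask: the step that plain Mixup lacks (point (d)).
    sigma = rng.choice([-1.0, 1.0], size=mixed.shape)
    encrypted = sigma * mixed
    # The one-time key (chosen images, lam, sigma) stays with the data owner;
    # only the encrypted image enters the shared training pipeline.
    return encrypted, (idx, lam, sigma)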
We study the hardness of the dihedral hidden subgroup problem. It is known that lattice problems reduce to it, and that it reduces to random subset sum with density $> 1$ and also to quantum sampling subset sum solutions. We examine a decision version of the problem where the question asks whether the hidden subgroup is trivial or order two. The decision problem essentially asks if a given vector is in the span of all coset states. We approach this by first computing an explicit basis for the coset space and the perpendicular space. We then look at the consequences of having efficient unitaries that use this basis. We show that if a unitary maps the basis to the standard basis in any way, then that unitary can be used to solve random subset sum with constant density $>1$. We also show that if a unitary can exactly decide membership in the coset subspace, then the collision problem for subset sum can be solved for density $>1$ but approaching $1$ as the problem size increases. This strengthens the previous hardness result that implementing the optimal POVM in a specific way is as hard as quantum sampling subset sum solutions.
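As a point of reference (standard definition, assumed rather than quoted from the abstract): the density of a subset-sum instance with weights $a_1, \ldots, a_n$ is usually taken to be $n / \log_2(\max_i a_i)$; at density $> 1$ there are more subsets than attainable sums, so collisions among subset sums are guaranteed, which is the regime the reductions and collision problems above are stated in.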
The key transform of the REESSE1+ asymmetrical cryptosystem is $C_i = (A_i W^{l(i)})^d \bmod M$ with $l(i) \in \Omega = \{5, 7, \ldots, 2n+3\}$ for $i = 1, \ldots, n$, where $l(i)$ is called a lever function. In this paper, the authors give a simplified key transform $C_i = A_i W^{l(i)} \bmod M$ with a new lever function $l(i)$ from $\{1, \ldots, n\}$ to $\Omega = \{\pm 5, \pm 6, \ldots, \pm(n+4)\}$, where $\pm$ means the selection of the + or - sign. They discuss the necessity of the new $l(i)$, namely that a simplified private key is insecure if the new $l(i)$ is a constant rather than a one-to-one function. Further, they expound the sufficiency of the new $l(i)$ from four aspects: (1) indeterminacy of the new $l(i)$; (2) insufficient conditions for neutralizing the powers of $W$ and $W^{-1}$ even if $\Omega = \{5, 6, \ldots, n+4\}$; (3) verification by examples; and (4) running times of the continued fraction attack and the W-parameter intersection attack, which are the two most efficient probabilistic polynomial-time attack algorithms known so far. Last, the authors elaborate on the relation between a lever function and a random oracle.
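To make the arithmetic concrete, here is a minimal Python sketch of the simplified key transform, assuming the modulus M, the multiplier W, and the sequence Ai have already been generated as REESSE1+ prescribes (variable names are ours, and the toy parameters in the comment are illustrative only).

def simplified_key_transform(A, W, ell, M):
    # Computes Ci = Ai * W^l(i) mod M for each i. A negative l(i), as allowed
    # by the new lever function, means using W^-1 mod M; Python's three-argument
    # pow handles that via the modular inverse (requires gcd(W, M) = 1, Python 3.8+).
    return [(Ai * pow(W, li, M)) % M for Ai, li in zip(A, ell)]

# Toy example (parameters far too small to be meaningful cryptographically):
# simplified_key_transform([2, 3, 5], 7, [5, -6, 7], 1009)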
The Reward Prediction Error hypothesis proposes that phasic activity in the midbrain dopaminergic system reflects prediction errors needed for learning in reinforcement learning. Besides the well-documented association between dopamine and reward processing, dopamine is implicated in a variety of functions without a clear relationship to reward prediction error. Fluctuations in dopamine levels influence the subjective perception of time, dopamine bursts precede the generation of motor responses, and the dopaminergic system innervates regions of the brain, including hippocampus and areas in prefrontal cortex, whose function is not uniquely tied to reward. In this manuscript, we propose that a common theme linking these functions is representation, and that prediction errors signaled by the dopamine system, in addition to driving associative learning, can also support the acquisition of adaptive state representations. In a series of simulations, we show how this extension can account for the role of dopamine in temporal and spatial representation, motor response, and abstract categorization tasks. By extending the role of dopamine signals to learning state representations, we resolve a critical challenge to the Reward Prediction Error hypothesis of dopamine function.
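For readers unfamiliar with the formalism, the prediction error referred to here is, in its simplest temporal-difference form (standard RL notation, not taken from the manuscript), $\delta_t = r_{t+1} + \gamma \hat{V}(s_{t+1}) - \hat{V}(s_t)$, with the value estimate updated as $\hat{V}(s_t) \leftarrow \hat{V}(s_t) + \alpha\,\delta_t$; the proposal above is that the same $\delta_t$ signal can also shape which state representation $s_t$ the learner uses, not only the values attached to it.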