
Proof-of-work certificates that can be efficiently computed in the cloud

Posted by Jean-Guillaume Dumas
Publication date: 2018
Research field: Informatics Engineering
Language: English





In an emerging computing paradigm, computational capabilities, from processing power to storage capacity, are offered to users over communication networks as a cloud-based service. Demanding computations are outsourced there in order to limit infrastructure costs. The idea of verifiable computing is to associate a data structure, a proof-of-work certificate, with the result of the outsourced computation. This allows a verification algorithm to prove the validity of the result faster than by recomputing it. We speak of a Prover (the server performing the computations) and a Verifier. Goldwasser, Kalai and Rothblum gave in 2008 a generic method to verify any parallelizable computation in time almost linear in the size of the (potentially structured) inputs and the result. However, the extra cost of the computations for the Prover (and therefore the extra cost to the customer), although only an almost-constant factor of the overall work, is nonetheless prohibitive in practice. In contrast, we present here problem-specific procedures in computer algebra, e.g. for exact linear algebra computations, that are Prover-optimal, that is, that incur much less financial overhead.
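As an illustration of a Prover-optimal certificate in exact linear algebra (a classical example in this line of work, sketched here for intuition rather than taken from the paper), consider Freivalds' check for matrix multiplication: the product itself is the certificate, and the Verifier tests it against random vectors in quadratic time instead of recomputing the cubic-time product. A minimal sketch, assuming integer matrices:

```python
import numpy as np

def freivalds_verify(A, B, C, trials=20):
    """Monte Carlo check that C == A @ B.

    Each trial picks a random 0/1 vector x and compares
    A @ (B @ x) with C @ x, using only O(n^2) work per trial
    instead of the O(n^3) cost of recomputing A @ B.
    A wrong product is caught with probability >= 1 - 2**(-trials).
    """
    n = C.shape[1]
    for _ in range(trials):
        x = np.random.randint(0, 2, size=n)
        if not np.array_equal(A @ (B @ x), C @ x):
            return False  # certificate rejected: C != A @ B
    return True  # accepted (wrong with probability <= 2**(-trials))

# Usage: the Prover ships C; the Verifier runs the cheap check.
A = np.random.randint(0, 10, (100, 100))
B = np.random.randint(0, 10, (100, 100))
C = A @ B
assert freivalds_verify(A, B, C)
```

Here the Prover's overhead is zero: it ships exactly the result it computed anyway, which is the sense in which such problem-specific certificates cost far less than generic verification protocols.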




Read also

In 1998 the second author proved that there is an $\epsilon>0$ such that every graph satisfies $\chi \leq \lceil (1-\epsilon)(\Delta+1)+\epsilon\omega \rceil$. The first author recently proved that any graph satisfying $\omega > \frac{2}{3}(\Delta+1)$ contains a stable set intersecting every maximum clique. In this note we exploit the latter result to give a much shorter, simpler proof of the former. We include, as a certificate of simplicity, an appendix that proves all intermediate results with the exception of Hall's Theorem, Brooks' Theorem, the Lovász Local Lemma, and Talagrand's Inequality.
Computational problem certificates are additional data structures for each output, which can be used by a (possibly randomized) verification algorithm that proves the correctness of each output. In this paper, we give an algorithm that computes a certificate for the minimal polynomial of sparse or structured $n \times n$ matrices over an abstract field of sufficiently large cardinality, whose Monte Carlo verification complexity requires a single matrix-vector multiplication and a linear number of extra field operations. We also propose a novel preconditioner that ensures irreducibility of the characteristic polynomial of the generically preconditioned matrix. This preconditioner takes linear time to apply and uses only two random entries. We then combine these two techniques to give algorithms that compute certificates for the determinant, and thus for the characteristic polynomial, whose Monte Carlo verification complexity is therefore also linear.
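The paper's single-matvec certificate is more intricate than can be shown here; the hedged sketch below conveys only the underlying Monte Carlo idea: checking that a candidate polynomial m annihilates A at a random vector, evaluated by Horner's rule with deg(m) matrix-vector products (more than the paper's single one) and without ever forming m(A). Note it checks annihilation only, not minimality.

```python
import numpy as np

def annihilates_at_random_vector(A, coeffs, trials=5):
    """Monte Carlo check that m(A) v = 0 for random vectors v,
    where m(x) = coeffs[0] + coeffs[1] x + ... + coeffs[d] x^d.

    Horner's rule evaluates m(A) v with deg(m) matrix-vector
    products, never forming m(A) explicitly.  If m does not
    annihilate A, a random v exposes this with high probability
    (over a sufficiently large field).
    """
    n = A.shape[0]
    for _ in range(trials):
        v = np.random.randint(-100, 100, size=n)
        w = coeffs[-1] * v
        for c in reversed(coeffs[:-1]):  # Horner: w <- A w + c v
            w = A @ w + c * v
        if np.any(w != 0):
            return False  # m(A) v != 0, so m does not annihilate A
    return True

# Example: x^2 - tr(A) x + det(A) annihilates any 2x2 matrix A.
A = np.array([[2, 1], [0, 3]])
assert annihilates_at_random_vector(A, [6, -5, 1])  # det=6, tr=5
```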
As machine learning models are increasingly used in critical decision-making settings (e.g., healthcare, finance), there has been a growing emphasis on developing methods to explain model predictions. Such explanations are used to understand and establish trust in models and are vital components in machine learning pipelines. Though explanations are a critical piece in these systems, little is understood about how they are vulnerable to manipulation by adversaries. In this paper, we discuss how two broad classes of explanations are vulnerable to manipulation. We demonstrate how adversaries can design biased models that manipulate model-agnostic feature attribution methods (e.g., LIME & SHAP) and counterfactual explanations that hill-climb during the counterfactual search (e.g., Wachter's Algorithm & DiCE) into concealing the model's biases. These vulnerabilities allow an adversary to deploy a biased model, yet explanations will not reveal this bias, thereby deceiving stakeholders into trusting the model. We evaluate the manipulations on real-world data sets, including COMPAS and Communities & Crime, and find explanations can be manipulated in practice.
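The attack on perturbation-based explainers can be pictured with a small sketch (an illustration of the general idea, not the authors' exact construction): the deployed model detects whether an input looks like one of the explainer's synthetic perturbations and, if so, routes it to an innocuous model, so LIME/SHAP-style probes never observe the bias. All names below are hypothetical stand-ins.

```python
def scaffolded_predict(x, biased_model, innocuous_model, looks_synthetic):
    """Answer with the biased model on real inputs, but hand the
    synthetic neighborhood points generated by an explainer to an
    innocuous model, so the resulting explanation shows no bias."""
    if looks_synthetic(x):
        return innocuous_model(x)  # what the explainer's probes see
    return biased_model(x)         # what deployed users get

# Toy demo with hypothetical stand-ins: real points have integer
# features here; LIME-style perturbations typically do not.
biased = lambda x: int(x[0] > 0)      # decision leaks feature 0
innocuous = lambda x: 0               # bias-free facade
looks_synthetic = lambda x: any(v != int(v) for v in x)
print(scaffolded_predict([1.0, 2.0], biased, innocuous, looks_synthetic))  # 1
print(scaffolded_predict([1.3, 2.7], biased, innocuous, looks_synthetic))  # 0
```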
An important feature of Proof-of-Work (PoW) blockchains is full dynamic availability, allowing miners to go online and offline while requiring only 50% of the online miners to be honest. Existing Proof-of-Stake (PoS), Proof-of-Space and related protocols achieve this property only partially, either adding the assumption that adversary nodes are online from the beginning and no new adversary nodes come online afterwards, or using additional trust assumptions for newly joining nodes. We propose a new PoS protocol, PoSAT, which provably achieves full dynamic availability without any additional assumptions. The protocol is based on the longest chain and uses a Verifiable Delay Function for the block-proposal lottery to provide an arrow of time. The security analysis of the protocol draws on the recently proposed technique of Nakamoto blocks as well as the theory of branching random walks. An additional feature of PoSAT is the complete unpredictability of who will get to propose a block next, even by the winner itself. This unpredictability is at the same level as PoW protocols, and is stronger than that of existing PoS protocols using Verifiable Random Functions.
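The "arrow of time" provided by the VDF can be pictured with a toy stand-in (an assumption for illustration, not PoSAT's actual construction): iterated hashing is inherently sequential, so no amount of parallel hardware lets a newly joining node fake elapsed time. Unlike a real VDF, this toy offers no short verification proof.

```python
import hashlib

def sequential_delay(seed: bytes, steps: int) -> bytes:
    """Toy delay function: `steps` dependent SHA-256 iterations.

    Each iteration needs the previous output, so the work cannot
    be parallelized; this models the VDF's arrow of time.  A real
    VDF additionally ships a short proof checkable much faster
    than re-evaluation, which this stand-in lacks.
    """
    h = seed
    for _ in range(steps):
        h = hashlib.sha256(h).digest()
    return h

# A node that starts later cannot catch up by adding hardware.
out = sequential_delay(b"block-proposal-seed", 1_000_000)
```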
Francois Henault (2013)
Nowadays it is commonly admitted that the experimental violations of Bell's inequalities, successfully demonstrated over the last decades by many experimenters, are indeed the ultimate proof of quantum physics and of its ability to describe the whole microscopic world and beyond. But the historical and scientific story may not be so clear-cut: it starts with the original paper of Einstein, Podolsky and Rosen (EPR), which aimed at demonstrating that the formalism of quantum theory is incomplete. It then goes through the works of D. Bohm, finally proceeding to John Bell's famous relations, which provide an experimental setup to settle the EPR paradox. This communication proposes an alternative reading of this history, showing that modern experiments based on correlations between light polarizations deviate significantly from the original spirit of the EPR paper. It concludes that current experimental violations of Bell's inequalities cannot be considered an ultimate proof of the completeness of quantum physics models.
