
Phoenix: A Formally Verified Regenerating Vault

Posted by: Uri Kirstein
Publication date: 2021
Research field: Informatics Engineering
Language: English





An attacker who gains access to a cryptocurrency user's private keys can perform any operation in her stead. Due to the decentralized nature of most cryptocurrencies, no entity can revert those operations. This is a central challenge for decentralized systems, illustrated by numerous high-profile heists. Vault contracts reduce this risk by introducing an artificial delay on operations, allowing the contract owner to abort them during the delay. However, the theft of a key still renders the vault unusable and puts funds at risk. We introduce Phoenix, a novel contract architecture that allows the user to restore the vault's security properties after key loss. Phoenix takes advantage of users' ability to store keys in easily available but less secure storage (tier-two) as well as in more secure storage that is harder to access (tier-one). Unlike previous solutions, the user can restore Phoenix's security after the theft of tier-two keys and does not lose funds despite losing keys in either tier. Phoenix also introduces a mechanism to reduce the damage an attacker can cause in case of a tier-one compromise. We formally specify Phoenix's required behavior and provide a prototype implementation of Phoenix as an Ethereum contract. Since such an implementation is highly sensitive and vulnerable to subtle bugs, we apply a formal verification tool to prove specific code properties and identify faults. We highlight a bug identified by the tool that could have been exploited by an attacker to compromise Phoenix. After fixing the bug, the tool proved the correctness of the low-level executable code.
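To make the two-tier design concrete, here is a minimal, self-contained Python sketch of a vault with tier-one and tier-two keys, a withdrawal delay, an abort path, and tier-two key rotation. It only illustrates the idea described in the abstract and is not the paper's Ethereum contract; every name (TwoTierVault, request_withdrawal, rotate_tier_two) and the concrete policy are assumptions made for illustration.

```python
# Illustrative sketch only, assuming a simple two-tier key policy.
import time

class TwoTierVault:
    def __init__(self, tier_one_key, tier_two_key, delay_seconds):
        self.tier_one_key = tier_one_key   # hard-to-access, more secure key
        self.tier_two_key = tier_two_key   # easily available, less secure key
        self.delay = delay_seconds         # artificial delay on withdrawals
        self.balance = 0
        self.pending = {}                  # request id -> (amount, ready_at)
        self.next_id = 0

    def deposit(self, amount):
        self.balance += amount

    def request_withdrawal(self, key, amount):
        # Day-to-day operations use the tier-two key, but only start a timer.
        assert key == self.tier_two_key, "tier-two key required"
        assert amount <= self.balance, "insufficient funds"
        rid = self.next_id
        self.pending[rid] = (amount, time.time() + self.delay)
        self.next_id += 1
        return rid

    def finalize_withdrawal(self, key, rid):
        # Funds only move after the delay has elapsed without an abort.
        assert key == self.tier_two_key, "tier-two key required"
        amount, ready_at = self.pending.pop(rid)
        assert time.time() >= ready_at, "delay not elapsed"
        self.balance -= amount
        return amount

    def abort(self, key, rid):
        # Either key can cancel a suspicious pending withdrawal during the delay.
        assert key in (self.tier_one_key, self.tier_two_key), "unknown key"
        self.pending.pop(rid, None)

    def rotate_tier_two(self, key, new_tier_two_key):
        # After a tier-two key theft, the tier-one key restores security by
        # replacing the compromised key, without moving or losing funds.
        assert key == self.tier_one_key, "tier-one key required"
        self.tier_two_key = new_tier_two_key
        self.pending.clear()               # drop anything the thief queued
```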


Read also

We present Revel, a partially neural reinforcement learning (RL) framework for provably safe exploration in continuous state and action spaces. A key challenge for provably safe deep RL is that repeatedly verifying neural networks within a learning loop is computationally infeasible. We address this challenge using two policy classes: a general, neurosymbolic class with approximate gradients and a more restricted class of symbolic policies that allows efficient verification. Our learning algorithm is a mirror descent over policies: in each iteration, it safely lifts a symbolic policy into the neurosymbolic space, performs safe gradient updates to the resulting policy, and projects the updated policy into the safe symbolic subset, all without requiring explicit verification of neural networks. Our empirical results show that Revel enforces safe exploration in many scenarios in which Constrained Policy Optimization does not, and that it can discover policies that outperform those learned through prior approaches to verified exploration.
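The mirror-descent loop described above (lift a symbolic policy, take gradient steps in the richer space, project back to the verifiable symbolic class) can be illustrated with a deliberately tiny toy. The sketch below is not the authors' Revel implementation; the one-dimensional linear policy, the toy reward, and the treatment of "safe" as a verified parameter interval are all illustrative assumptions.

```python
# Toy sketch of the lift / safe-update / project structure, not the real system.
import numpy as np

SAFE_LOW, SAFE_HIGH = -1.0, 1.0        # assume [-1, 1] is the verified-safe gain range

def reward(gain):
    # Toy objective standing in for the RL return: prefer gains near 0.7.
    return -(gain - 0.7) ** 2

def lift(symbolic_gain):
    # "Lift" the symbolic policy into a richer space: base gain plus a residual term.
    return np.array([symbolic_gain, 0.0])

def safe_gradient_step(params, lr=0.1, eps=1e-4):
    # Finite-difference gradient ascent on the combined (summed) policy parameters.
    grad = np.zeros_like(params)
    base = reward(params.sum())
    for i in range(len(params)):
        bumped = params.copy()
        bumped[i] += eps
        grad[i] = (reward(bumped.sum()) - base) / eps
    return params + lr * grad

def project(params):
    # Project back onto the symbolic, efficiently verifiable class:
    # collapse the residual and clip into the verified-safe interval.
    return float(np.clip(params.sum(), SAFE_LOW, SAFE_HIGH))

symbolic_gain = 0.0
for _ in range(50):
    params = lift(symbolic_gain)           # symbolic -> neurosymbolic
    params = safe_gradient_step(params)    # gradient update inside the lifted space
    symbolic_gain = project(params)        # back to the verifiable symbolic subset
print(f"learned gain ~ {symbolic_gain:.2f}")   # ends near 0.7, inside the safe range
```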
Development of formal proofs of correctness of programs can increase actual and perceived reliability and facilitate better understanding of program specifications and their underlying assumptions. Tools supporting such development have been available for over 40 years, but have only recently seen wide practical use. Projects based on construction of machine-checked formal proofs are now reaching an unprecedented scale, comparable to large software projects, which leads to new challenges in proof development and maintenance. Despite its increasing importance, the field of proof engineering is seldom considered in its own right; related theories, techniques, and tools span many fields and venues. This survey of the literature presents a holistic understanding of proof engineering for program correctness, covering impact in practice, foundations, proof automation, proof organization, and practical proof development.
In this paper we present new general convergence results about the behaviour of the Distributed Bellman-Ford (DBF) family of routing protocols, which includes distance-vector protocols (e.g. RIP) and path-vector protocols (e.g. BGP). First, we propose a new algebraic model for abstract routing problems which has fewer primitives than previous models and can represent more expressive policy languages. The new model is also the first to allow concurrent reasoning about distance-vector and path-vector protocols. Second, we explicitly demonstrate how DBF routing protocols are instances of a larger class of asynchronous iterative algorithms, for which there already exist powerful results about convergence. These results allow us to build upon conditions previously shown by Sobrinho to be sufficient and necessary for the convergence of path-vector protocols and generalise and strengthen them in various ways: we show that, with a minor modification, they also apply to distance-vector protocols; we prove they guarantee that the final routing solution reached is unique, thereby eliminating the possibility of anomalies such as BGP wedgies; we relax the model of asynchronous communication, showing that the results still hold if routing messages can be lost, reordered, and duplicated. Third, our model and our accompanying theoretical results have been fully formalised in the Agda theorem prover. The resulting library is a powerful tool for quickly prototyping and formally verifying new policy languages. As an example, we formally verify the correctness of a policy language with many of the features of BGP including communities, conditional policy, path-inflation and route filtering.
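For readers unfamiliar with the DBF family, the toy sketch below instantiates the iterative scheme with the classic shortest-path (min, +) algebra; the paper's algebraic model is far more general (it also covers path-vector policies such as BGP), so the operator names and the synchronous rounds here are illustrative assumptions only.

```python
# Illustrative shortest-path instance of a Distributed Bellman-Ford iteration.
INF = float("inf")

def choice(a, b):
    # The algebra's "pick the better route" operator (min in the shortest-path case).
    return min(a, b)

def extend(edge, route):
    # The algebra's "extend a route along an edge" operator (+ in the shortest-path case).
    return edge + route

def dbf_iteration(adj, dest, rounds):
    # Synchronous rounds: every node repeatedly combines its neighbours' current
    # routes towards the destination; in this (min, +) instance it converges to
    # the usual shortest-path distances.
    n = len(adj)
    x = [0.0 if v == dest else INF for v in range(n)]
    for _ in range(rounds):
        new = []
        for v in range(n):
            best = 0.0 if v == dest else INF
            for u in range(n):
                if adj[v][u] != INF:
                    best = choice(best, extend(adj[v][u], x[u]))
            new.append(best)
        x = new
    return x

# 4-node example: entries are edge weights, INF means no link.
adj = [
    [INF, 1.0, 4.0, INF],
    [1.0, INF, 2.0, 6.0],
    [4.0, 2.0, INF, 3.0],
    [INF, 6.0, 3.0, INF],
]
print(dbf_iteration(adj, dest=0, rounds=4))   # [0.0, 1.0, 3.0, 6.0]
```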
We consider the signatures $\Sigma_m = (0, 1, -, +, \cdot, {}^{-1})$ of meadows and $(\Sigma_m, \mathbf{s})$ of signed meadows. We give two complete axiomatizations of the equational theories of the real numbers with respect to these signatures. In the first case, we extend the axiomatization of zero-totalized fields by a single axiom scheme expressing formal realness; the second axiomatization presupposes an ordering. We apply these completeness results in order to obtain complete axiomatizations of the complex numbers.
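As background (not taken from this paper), the meadow signature $\Sigma_m$ treats inverse as a total operation; the standard meadow axioms from the literature extend the commutative ring axioms with equations such as the following, which convey the equational style the paper works in.

```latex
% Illustrative background only: standard meadow axioms, not this paper's new axioms.
\begin{align*}
  (x^{-1})^{-1} &= x              && \text{(reflection)} \\
  x \cdot (x \cdot x^{-1}) &= x   && \text{(restricted inverse law)}
\end{align*}
% Zero-totalization means the inverse is total, with 0^{-1} = 0.
```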
A recent case study from AWS by Chong et al. proposes an effective methodology for Bounded Model Checking in industry. In this paper, we report on a follow-up case study that explores the methodology from the perspective of three research questions: (a) can proof artifacts be used across verification tools; (b) are there bugs in verified code; and (c) can specifications be improved. To study these questions, we port the verification tasks for the $\texttt{aws-c-common}$ library to SEAHORN and KLEE. We show the benefits of using compiler semantics and cross-checking specifications with different verification techniques, and call for standardizing proof library extensions to increase specification reuse. The verification tasks discussed are publicly available online.