
Training Massive Deep Neural Networks in a Smart Contract: A New Hope

Added by: Yin Yang
Publication date: 2021
Language: English
Authors: Yin Yang





Deep neural networks (DNNs) could be very useful in blockchain applications such as DeFi and NFT trading. However, training or running large-scale DNNs as part of a smart contract is infeasible on today's blockchain platforms, due to two fundamental design issues of these platforms. First, blockchains nowadays typically require that each node maintain the complete world state at all times, meaning that every node must execute all transactions in every block. This is prohibitively expensive for computationally intensive smart contracts involving DNNs. Second, existing blockchain platforms expect smart contract transactions to have deterministic, reproducible results and effects. In contrast, DNNs are usually trained and run lock-free on massively parallel computing devices such as GPUs, TPUs, and/or computing clusters, which often do not yield deterministic results. This paper proposes novel platform designs, collectively called A New Hope (ANH), that address the above issues. The main ideas are (i) computation-intensive smart contract transactions are executed only by nodes that need their results, or by specialized service providers, and (ii) a non-deterministic smart contract transaction leads to uncertain results, which can still be validated, though at a relatively high cost; specifically for DNNs, the validation cost can often be reduced by verifying properties of the results instead of their exact values. In addition, we discuss various implications of ANH, including its effects on token fungibility, sharding, private transactions, and the fundamental meaning of a smart contract.
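To make idea (ii) concrete, the sketch below contrasts exact re-execution with property-based validation, assuming the validator holds a small held-out dataset and the transaction claims a target accuracy. The toy model and function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def exact_validation(claimed_weights, reference_weights):
    # Deterministic re-execution: passes only if every weight matches bit for bit,
    # which lock-free GPU/TPU training generally cannot reproduce.
    return np.array_equal(claimed_weights, reference_weights)

def property_validation(model_fn, claimed_weights, val_x, val_y, claimed_accuracy, tol=0.01):
    # Property-based validation: run only (cheap) inference and check that the
    # submitted weights achieve roughly the accuracy claimed in the transaction,
    # instead of reproducing the whole training run.
    preds = model_fn(claimed_weights, val_x)
    accuracy = float(np.mean(preds == val_y))
    return accuracy >= claimed_accuracy - tol

# Toy linear classifier standing in for a large DNN (illustrative only).
def toy_model(weights, x):
    return (x @ weights > 0).astype(int)

rng = np.random.default_rng(0)
w = rng.normal(size=4)                # "trained" weights submitted with the transaction
val_x = rng.normal(size=(256, 4))     # validator's held-out data
val_y = (val_x @ w > 0).astype(int)
print(property_validation(toy_model, w, val_x, val_y, claimed_accuracy=0.95))  # True
```

In this toy setting, checking the property costs one inference pass over the held-out set, which is far cheaper than reproducing a bit-exact training run.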



Related research

Smart contract vulnerability detection has drawn extensive attention in recent years due to the substantial losses caused by hacker attacks. Existing efforts for contract security analysis heavily rely on rigid rules defined by experts, which are labor-intensive and non-scalable. More importantly, expert-defined rules tend to be error-prone and suffer from the inherent risk of being cheated by crafty attackers. Recent research focuses on symbolic execution and formal analysis of smart contracts for vulnerability detection, but has yet to achieve a precise and scalable solution. Although several methods have been proposed to detect vulnerabilities in smart contracts, little effort has been made to combine expert-defined security patterns with deep neural networks. In this paper, we explore using graph neural networks and expert knowledge for smart contract vulnerability detection. Specifically, we cast the rich control- and data-flow semantics of the source code into a contract graph. To highlight the critical nodes in the graph, we further design a node elimination phase to normalize the graph. Then, we propose a novel temporal message propagation network to extract the graph feature from the normalized graph, and combine the graph feature with designed expert patterns to yield a final detection system. Extensive experiments are conducted on all the smart contracts with available source code on the Ethereum and VNT Chain platforms. Empirical results show significant accuracy improvements over state-of-the-art methods on three types of vulnerabilities, with the detection accuracy of our method reaching 89.15%, 89.02%, and 83.21% for reentrancy, timestamp dependence, and infinite loop vulnerabilities, respectively.
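As a rough illustration of how a learned graph feature might be fused with expert-defined patterns, the sketch below runs toy message passing over a small contract graph and combines the pooled graph feature with two hypothetical binary expert checks. The propagation rule, weights, and scoring function are assumptions for illustration, not the paper's actual network.

```python
import numpy as np

def message_passing(adj, node_feats, steps=3):
    # Toy propagation: each step, a node mixes its own state with the mean of
    # its neighbours' features; the graph feature is obtained by mean pooling.
    h = node_feats
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)
    for _ in range(steps):
        h = 0.5 * h + 0.5 * (adj @ h) / deg
    return h.mean(axis=0)

def fuse_and_score(graph_feat, expert_patterns, w_graph, w_expert, bias):
    # Fuse the learned graph feature with binary expert-pattern indicators
    # (e.g. "external call before state update") into a vulnerability score.
    logit = graph_feat @ w_graph + expert_patterns @ w_expert + bias
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(1)
adj = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], dtype=float)  # tiny contract graph
node_feats = rng.normal(size=(3, 8))
expert_patterns = np.array([1.0, 0.0])                          # two hypothetical checks

score = fuse_and_score(message_passing(adj, node_feats), expert_patterns,
                       rng.normal(size=8), rng.normal(size=2), 0.0)
print(f"vulnerability score: {score:.3f}")
```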
Protecting the privacy of input data is of growing importance as machine learning methods reach new application domains. In this paper, we provide a unified training and inference framework for large DNNs that protects input privacy and computation integrity. Our approach, called DarKnight, uses a novel data-blinding strategy based on matrix masking to obfuscate inputs within a trusted execution environment (TEE). A rigorous mathematical proof demonstrates that the blinding process provides an information-theoretic privacy guarantee by bounding information leakage. The obfuscated data can then be offloaded to any GPU to accelerate linear operations on the blinded data. The results of these linear operations are decoded before non-linear operations are performed within the TEE. This cooperative execution allows DarKnight to exploit the computational power of GPUs for linear operations while relying on TEEs to protect input privacy. We implement DarKnight on an Intel SGX TEE augmented with a GPU to evaluate its performance.
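The decode step relies only on the linearity of the offloaded operation. The sketch below shows a simplified matrix-masking flow for one linear layer, with a secret mixing matrix kept inside a simulated TEE; it illustrates the decode-after-linear-ops principle only and is not DarKnight's actual blinding scheme or its privacy and integrity machinery.

```python
import numpy as np

rng = np.random.default_rng(42)

# The untrusted GPU holds the (non-sensitive) layer weights W;
# the inputs X are private and must stay inside the TEE.
W = rng.normal(size=(16, 32))
X = rng.normal(size=(2, 32))            # two private inputs, shape (k, d)

# --- Inside the TEE: mix the inputs with a noise row using a secret,
# --- invertible coefficient matrix A (matrix masking).
noise = rng.normal(size=(1, 32))
X_ext = np.vstack([X, noise])           # shape (k+1, d)
A = rng.normal(size=(3, 3))             # secret mixing matrix, never leaves the TEE
X_blinded = A @ X_ext                   # only these mixtures are sent to the GPU

# --- On the untrusted GPU: linear operation on blinded data only.
Y_blinded = X_blinded @ W.T

# --- Back inside the TEE: undo the mixing (linearity allows exact decoding),
# --- then drop the row corresponding to the noise vector.
Y = (np.linalg.inv(A) @ Y_blinded)[:2]
assert np.allclose(Y, X @ W.T)          # the TEE recovers W applied to the real inputs
```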
Smart contracts are one of the core features of Ethereum and have inspired many blockchain descendants. Since their advent, the verification paradigm for smart contracts has been improving toward high scalability, shifting from expensive on-chain verification to the orchestration of off-chain VM (virtual machine) execution and on-chain arbitration via a pinpoint protocol. Representative projects include TrueBit, Arbitrum, YODA, ACE, and Optimism. Inspired by visionaries in academia and industry, we consider DNN computation to be promising but at the next level of complexity for the verification paradigm of smart contracts. Unfortunately, even under the state-of-the-art verification paradigm, off-chain VM execution of DNN computation suffers an orders-of-magnitude slowdown compared to native off-chain execution. To enable native off-chain execution of verifiable DNN computation, we present the Agatha system, which addresses two significant challenges, misalignment and inconsistency: (1) native DNN computation follows a graph-based computation paradigm that is misaligned with previous VM-based execution and arbitration; (2) native DNN computation may be inconsistent across platforms, which invalidates the verification paradigm. In response, we propose the graph-based pinpoint protocol (GPP), which enables the pinpoint protocol on computational graphs and bridges native off-chain execution and contract arbitration. We also develop a technique named Cross-evaluator Consistent Execution (XCE), which guarantees cross-platform consistency and forms the correctness foundation of GPP. We showcase Agatha for the DNN computation of popular models (MobileNet, ResNet50, and VGG16) on Ethereum. Agatha achieves negligible on-chain overhead and an off-chain execution overhead of 3.0%, which represents an off-chain latency reduction of at least 602x compared to the state-of-the-art verification paradigm.
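A pinpoint protocol works because the asserter's and challenger's executions can be compared step by step and the dispute narrowed to a single operation, which the chain can then re-execute cheaply. The sketch below bisects over cumulative commitments to node outputs in a fixed topological order; the running-hash commitment and all names are assumptions for illustration, not Agatha's GPP or XCE.

```python
import hashlib

def running_commitments(node_outputs):
    # Cumulative hash over node outputs in topological order, so "prefixes agree"
    # is a monotone predicate suitable for binary search.
    commits, h = [], hashlib.sha256()
    for out in node_outputs:
        h.update(repr(out).encode())
        commits.append(h.hexdigest())
    return commits

def pinpoint(commits_asserter, commits_challenger):
    # Bisection (sketched locally): find the first node whose cumulative commitment
    # the two parties dispute; only that node's operation is re-executed on-chain.
    lo, hi = 0, len(commits_asserter) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if commits_asserter[mid] == commits_challenger[mid]:
            lo = mid + 1
        else:
            hi = mid
    return lo

honest = [1.0, 2.0, 3.5, 7.0, 7.5]   # node outputs claimed by the asserter
cheat  = [1.0, 2.0, 3.5, 6.9, 7.4]   # challenger's recomputation diverges at node 3
print(pinpoint(running_commitments(honest), running_commitments(cheat)))  # -> 3
```

The divergence at node 3 in this toy is exactly the kind of cross-platform numerical drift that a consistency layer such as XCE is meant to rule out.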
Deep neural networks have yielded superior performance in many applications; however, gradient computation in a deep model over millions of instances leads to a lengthy training process even with modern GPU/TPU hardware acceleration. In this paper, we propose AutoAssist, a simple framework to accelerate the training of a deep neural network. Typically, as the training procedure evolves, the amount of improvement to the current model from a stochastic gradient update on a given instance varies dynamically. AutoAssist exploits this fact with a simple instance-shrinking operation that filters out instances with relatively low marginal improvement to the current model, so that the computationally intensive gradient computations are performed on informative instances as much as possible. We prove that the proposed technique outperforms vanilla SGD with existing importance-sampling approaches for linear SVM problems, and establish an O(1/k) convergence rate for strongly convex problems. To apply the proposed technique to deep models, we jointly train a very lightweight Assistant network alongside the original deep network, referred to as the Boss. The Assistant network is designed to gauge the importance of a given instance with respect to the current Boss, so that the shrinking operation can be applied in the batch generator. With careful design, the Boss and Assistant are trained in a non-blocking, asynchronous fashion so that the overhead is minimal. We demonstrate that AutoAssist reduces the number of epochs needed to train a ResNet to the same test accuracy by 40% on an image classification dataset, and saves 30% of the training time needed for a transformer model to reach the same BLEU score on a translation dataset.
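The sketch below shows the shrinking idea in isolation: a lightweight linear scorer stands in for the Assistant and drops the lower-scoring fraction of each batch before the expensive Boss update. The joint, asynchronous training of Boss and Assistant described in the abstract is omitted, and all names and thresholds are illustrative assumptions.

```python
import numpy as np

def assistant_scores(assistant_w, batch_x):
    # Lightweight Assistant: a linear scorer predicting how informative each
    # instance currently is for the Boss (a cheap proxy for its loss).
    return batch_x @ assistant_w

def shrink_batch(batch_x, batch_y, assistant_w, keep_ratio=0.6):
    # Shrinking operation: drop the instances the Assistant rates as having low
    # marginal improvement, so the expensive Boss update sees fewer rows.
    scores = assistant_scores(assistant_w, batch_x)
    k = max(1, int(len(batch_x) * keep_ratio))
    keep = np.argsort(-scores)[:k]
    return batch_x[keep], batch_y[keep]

rng = np.random.default_rng(7)
batch_x = rng.normal(size=(128, 10))
batch_y = rng.integers(0, 2, size=128)
assistant_w = rng.normal(size=10)

small_x, small_y = shrink_batch(batch_x, batch_y, assistant_w)
print(batch_x.shape, "->", small_x.shape)    # (128, 10) -> (76, 10)
```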
This paper presents SAILFISH, a scalable system for automatically finding state-inconsistency bugs in smart contracts. To make the analysis tractable, we introduce a hybrid approach that includes (i) a lightweight exploration phase that dramatically reduces the number of instructions to analyze, and (ii) a precise refinement phase based on symbolic evaluation guided by our novel value-summary analysis, which generates extra constraints to over-approximate the side effects of whole-program execution, thereby ensuring the precision of the symbolic evaluation. We developed a prototype of SAILFISH and evaluated its ability to detect two state-inconsistency flaws, viz., reentrancy and transaction order dependence (TOD), in Ethereum smart contracts. Further, we present detection rules for other kinds of smart contract flaws that SAILFISH can be extended to detect. Our experiments demonstrate the efficiency of our hybrid approach as well as the benefit of the value-summary analysis. In particular, we show that SAILFISH outperforms five state-of-the-art smart contract analyzers (SECURIFY, MYTHRIL, OYENTE, SEREUM, and VANDAL) in terms of performance and precision. In total, SAILFISH discovered 47 previously unknown vulnerable smart contracts out of 89,853 smart contracts from ETHERSCAN.
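For readers unfamiliar with the bug class, the toy check below flags the classic "external call before state update" shape that reentrancy exploits, using EVM opcode names over a flat operation list. It is only a shape illustration and bears no resemblance to SAILFISH's exploration and symbolic-evaluation pipeline.

```python
def has_reentrancy_pattern(ops):
    # Toy state-inconsistency check: flag functions where an external call is
    # followed by a storage write, the ordering that reentrancy attacks abuse.
    # Real analyzers work on bytecode or source with symbolic reasoning.
    seen_external_call = False
    for op in ops:
        if op == "CALL":
            seen_external_call = True
        elif op == "SSTORE" and seen_external_call:
            return True
    return False

withdraw = ["SLOAD", "CALL", "SSTORE"]        # sends ether, then updates the balance
safe_withdraw = ["SLOAD", "SSTORE", "CALL"]   # checks-effects-interactions ordering
print(has_reentrancy_pattern(withdraw), has_reentrancy_pattern(safe_withdraw))  # True False
```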
