
Adabot: Fault-Tolerant Java Decompiler

Published by: Zhiming Li
Publication date: 2019
Research field: Informatics Engineering
Paper language: English





Reverse Engineering (RE) has been a fundamental task in software engineering. However, most traditional Java reverse engineering tools are strictly rule-defined and thus not fault-tolerant, which poses a serious problem when noise and interference are introduced into the system. In this paper, we view reverse engineering as a statistical machine translation task instead of a rule-based task, and propose a fault-tolerant Java decompiler based on machine translation models. Our model is built on attention-based Neural Machine Translation (NMT) and Transformer architectures. First, we measure translation quality on both the redundant and purified datasets. Next, we evaluate the fault tolerance (anti-noise ability) of our framework on test sets with different unit error probabilities (UEP). In addition, we compare the suitability of different word segmentation algorithms for the decompilation task. Experimental results demonstrate that our model is more robust and fault-tolerant than traditional Abstract Syntax Tree (AST) based decompilers. Specifically, in terms of BLEU-4 and Word Error Rate (WER), our model reaches 94.50% and 2.65% on the redundant test set, and 92.30% and 3.48% on the purified test set.
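To make the evaluation protocol concrete, here is a minimal Python sketch (not from the paper) of two pieces the abstract mentions: injecting noise into a bytecode token stream with a given unit error probability (UEP), and scoring a decompiled hypothesis against the reference source with Word Error Rate (WER). The `corrupt`, `wer`, and `decompile` names are hypothetical; the trained NMT/Transformer model itself is not shown.

```python
import random

def corrupt(tokens, uep, vocab, rng=random):
    """Flip each input token to a random vocabulary token with probability `uep`
    (the unit error probability used to build the noisy test sets)."""
    return [rng.choice(vocab) if rng.random() < uep else t for t in tokens]

def wer(reference, hypothesis):
    """Word Error Rate: token-level Levenshtein distance, normalised by reference length."""
    n, m = len(reference), len(hypothesis)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[n][m] / max(n, 1)

# Toy usage: `decompile` stands in for the trained translation model.
bytecode = ["iload_1", "iload_2", "iadd", "ireturn"]
noisy = corrupt(bytecode, uep=0.05, vocab=["iload_1", "iload_2", "iadd", "isub", "ireturn"])
reference_source = ["return", "a", "+", "b", ";"]
# hypothesis = decompile(noisy)                 # model inference (not shown here)
# print("WER:", wer(reference_source, hypothesis))
```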




Read also

In this work, we initiate the study of fault tolerant Max Cut, where given an edge-weighted undirected graph $G=(V,E)$, the goal is to find a cut $S \subseteq V$ that maximizes the total weight of edges that cross $S$ even after an adversary removes $k$ vertices from $G$. We consider two types of adversaries: an adaptive adversary that sees the outcome of the random coin tosses used by the algorithm, and an oblivious adversary that does not. For any constant number of failures $k$ we present an approximation of $(0.878-\epsilon)$ against an adaptive adversary and of $\alpha_{GW} \approx 0.8786$ against an oblivious adversary (here $\alpha_{GW}$ is the approximation achieved by the random hyperplane algorithm of [Goemans-Williamson, J. ACM '95]). Additionally, we present a hardness of approximation of $\alpha_{GW}$ against both types of adversaries, rendering our results (virtually) tight. The non-linear nature of the fault tolerant objective makes the design and analysis of algorithms harder when compared to the classic Max Cut. Hence, we employ approaches ranging from multi-objective optimization to LP duality and the ellipsoid algorithm to obtain our results.
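The fault-tolerant objective is easy to state operationally: the value of a cut $S$ is its weight after the worst possible removal of $k$ vertices. The brute-force Python evaluator below (exponential in $k$, purely illustrative and unrelated to the paper's algorithms) makes this precise.

```python
from itertools import combinations

def cut_weight(edges, S, removed=frozenset()):
    """Total weight of edges crossing S, ignoring edges incident to removed vertices."""
    return sum(w for (u, v, w) in edges
               if u not in removed and v not in removed
               and ((u in S) != (v in S)))

def fault_tolerant_cut_value(edges, vertices, S, k):
    """Worst-case (adversarial) value of the cut S after any k vertices are removed."""
    return min(cut_weight(edges, S, frozenset(drop))
               for drop in combinations(vertices, k))

# Toy instance: a unit-weight 4-cycle; S = {0, 2}, adversary may delete k = 1 vertex.
V = [0, 1, 2, 3]
E = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
print(fault_tolerant_cut_value(E, V, S={0, 2}, k=1))  # -> 2.0
```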
A $k$-spanner of a graph $G$ is a sparse subgraph that preserves its shortest path distances up to a multiplicative stretch factor of $k$, and a $k$-emulator is similar but not required to be a subgraph of $G$. A classic theorem by Thorup and Zwick [JACM '05] shows that, despite the extra flexibility available to emulators, the size/stretch tradeoffs for spanners and emulators are equivalent. Our main result is that this equivalence in tradeoffs no longer holds in the commonly-studied setting of graphs with vertex failures. That is: we introduce a natural definition of vertex fault-tolerant emulators, and then we show a three-way tradeoff between size, stretch, and fault-tolerance for these emulators that polynomially surpasses the tradeoff known to be optimal for spanners. We complement our emulator upper bound with a lower bound construction that is essentially tight (within $\log n$ factors of the upper bound) when the stretch is $2k-1$ and $k$ is either a fixed odd integer or $2$. We also show constructions of fault-tolerant emulators with additive error, demonstrating that these also enjoy significantly improved tradeoffs over those available for fault-tolerant additive spanners.
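For readers unfamiliar with the terminology, the stretch condition defining a $k$-spanner (and, identically, a $k$-emulator, which need not be a subgraph) can be checked directly: every pairwise distance in the sparse graph $H$ must be at most $k$ times the corresponding distance in $G$. A small Python sketch with hypothetical helper names, assuming non-negative edge weights:

```python
import heapq

def shortest_paths(adj, src):
    """Dijkstra from src over an adjacency dict {u: [(v, w), ...]}."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def has_stretch_k(adj_G, adj_H, k):
    """Check the multiplicative stretch: d_H(u, v) <= k * d_G(u, v) for all pairs."""
    for u in adj_G:
        dG, dH = shortest_paths(adj_G, u), shortest_paths(adj_H, u)
        for v, d in dG.items():
            if dH.get(v, float("inf")) > k * d + 1e-9:
                return False
    return True

# Example: H drops edge (0, 2) from a triangle G; the detour 0-1-2 has stretch 2.
G = {0: [(1, 1.0), (2, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(0, 1.0), (1, 1.0)]}
H = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 1.0)], 2: [(1, 1.0)]}
print(has_stretch_k(G, H, k=2))  # -> True
```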
It is now widely accepted that the CMOS technology implementing irreversible logic will hit a scaling limit beyond 2016, and that the increased power dissipation is a major limiting factor. Reversible computing can potentially require arbitrarily small amounts of energy. Recently several nano-scale devices which have the potential to scale, and which naturally perform reversible logic, have emerged. This paper addresses several fundamental issues that need to be resolved before any nano-scale reversible computing systems can be realized, including reliability and performance trade-offs and architecture optimization. Many nano-scale devices will be limited to only near-neighbor interactions, requiring careful optimization of circuits. We provide efficient fault-tolerant (FT) circuits when restricted to both 2D and 1D. Finally, we compute bounds on the entropy (and hence, heat) generated by our FT circuits and provide quantitative estimates of how large we can make our circuits before we lose any advantage over irreversible computing.
We explain how to combine holonomic quantum computation (HQC) with fault tolerant quantum error correction. This establishes the scalability of HQC, putting it on equal footing with other models of computation, while retaining the inherent robustness the method derives from its geometric nature.
The FP-Growth algorithm is a Frequent Pattern Mining (FPM) algorithm that has been extensively used to study correlations and patterns in large scale datasets. While several researchers have designed distributed memory FP-Growth algorithms, it is pivotal to consider fault tolerant FP-Growth, which can address the increasing fault rates in large scale systems. In this work, we propose a novel parallel, algorithm-level fault-tolerant FP-Growth algorithm. We leverage algorithmic properties and MPI advanced features to guarantee an O(1) space complexity, achieved by using the dataset memory space itself for checkpointing. We also propose a recovery algorithm that can use in-memory and disk-based checkpointing, though in many cases the recovery can be completed without any disk access, incurring no memory overhead for checkpointing. We evaluate our FT algorithm on a large scale InfiniBand cluster with several large datasets using up to 2K cores. Our evaluation demonstrates excellent efficiency for checkpointing and recovery in comparison to the disk-based approach. We have also observed a 20x average speed-up in comparison to Spark, establishing that a well designed algorithm can easily outperform a solution based on a general fault-tolerant programming model.
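The paper's MPI-level mechanism is not reproduced here, but the general idea of algorithm-level, in-memory checkpointing can be illustrated on the first FP-Growth pass (item support counting): periodically snapshot the partial counts and the position in the transaction list, and resume from that snapshot after a failure. The following toy Python sketch uses hypothetical names and makes no claim to the paper's O(1)-space property; it shows only the recovery concept.

```python
from collections import Counter

def count_supports(transactions, checkpoint_every=2, state=None):
    """First FP-Growth pass (item support counting) with a toy in-memory checkpoint:
    (next transaction index, partial counts). Resume by passing a saved state back in."""
    start, counts = (0, Counter()) if state is None else (state[0], Counter(state[1]))
    checkpoint = (start, Counter(counts))
    for i in range(start, len(transactions)):
        counts.update(transactions[i])
        if (i + 1) % checkpoint_every == 0:
            checkpoint = (i + 1, Counter(counts))  # cheap in-memory snapshot
    return counts, checkpoint

# Usage: pretend the first run died after transaction 3; only the checkpoint taken
# after transaction 2 survives, and the second run resumes from it over the full data.
data = [["a", "b"], ["b", "c"], ["a", "c"], ["a", "b", "c"]]
_, ckpt = count_supports(data[:3], checkpoint_every=2)              # interrupted run
counts, _ = count_supports(data, checkpoint_every=2, state=ckpt)    # recovery run
print(counts)  # Counter({'a': 3, 'b': 3, 'c': 3})
```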