
A Hybrid Approach to Fine-grained Automated Fault Localization

Posted by: Hui Liu
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Fault localization aims to identify faulty source code. It can be done at various granularities, e.g., classes, methods, and statements. Most automated fault localization (AFL) approaches are coarse-grained because it is challenging to accurately locate fine-grained faulty software elements, e.g., statements. Spectrum-based fault localization (SBFL), which relies only on the dynamic execution of test cases, is simple, intuitive, and generic (working at various granularities). However, its accuracy deserves significant improvement. To this end, in this paper, we propose a hybrid fine-grained AFL approach based on both dynamic spectrums and static statement types. The rationale of the approach is that some types of statements are significantly more/less error-prone than others, and thus statement types can be exploited for fault localization. On a corpus of faulty programs, we compute the error-proneness of each statement type and assign priorities to special statement types that are consistently more/less error-prone than others. For a given faulty program under test, we first leverage a traditional SBFL algorithm to identify all suspicious statements and to compute their suspiciousness scores. For each of the resulting suspicious statements, we retrieve its statement type as well as the priority associated with that type. The final suspiciousness score is the product of the SBFL suspiciousness score and the priority assigned to the statement type. A significant advantage of the approach is that it is simple and intuitive, which makes it efficient and easy to interpret and implement. We evaluate the proposed approach on the widely used Defects4J benchmark. The evaluation results suggest that the proposed approach outperforms widely used SBFL, reducing the absolute wasted effort (AWE) by 9.3% on average.
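To make the scoring step concrete, the following is a minimal Python sketch of the idea described in the abstract. It assumes Ochiai as the SBFL ranking formula and uses made-up statement-type priorities; the actual ranking metric and the priorities learned from the training corpus are not given here, so only the multiplication of the two scores reflects the approach itself.

```python
import math
from collections import namedtuple

# Per-statement execution spectrum: counts of failing/passing tests
# that do (ef, ep) or do not (nf, np) execute the statement.
Spectrum = namedtuple("Spectrum", ["ef", "ep", "nf", "np"])

def ochiai(sp):
    """Ochiai suspiciousness -- one common SBFL metric (assumption:
    the paper may use a different ranking formula)."""
    denom = math.sqrt((sp.ef + sp.nf) * (sp.ef + sp.ep))
    return sp.ef / denom if denom else 0.0

# Hypothetical priorities learned offline from a corpus of faulty programs:
# values > 1 mark statement types that are consistently more error-prone,
# values < 1 mark types that are consistently less error-prone.
TYPE_PRIORITY = {
    "return": 1.3,        # illustrative numbers only
    "if_condition": 1.2,
    "method_call": 1.0,
    "variable_decl": 0.8,
}

def hybrid_score(sp, stmt_type):
    """Final score = SBFL suspiciousness x statement-type priority."""
    return ochiai(sp) * TYPE_PRIORITY.get(stmt_type, 1.0)

# Usage: two statements covered by exactly the same tests get the same
# SBFL score; the statement-type priority breaks the tie in the ranking.
s1 = Spectrum(ef=3, ep=5, nf=1, np=11)   # a return statement
s2 = Spectrum(ef=3, ep=5, nf=1, np=11)   # a variable declaration
print(hybrid_score(s1, "return"))        # higher -> ranked first
print(hybrid_score(s2, "variable_decl"))
```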




Read also

Computational notebooks have emerged as the platform of choice for data science and analytical workflows, enabling rapid iteration and exploration. By keeping intermediate program state in memory and segmenting units of execution into so-called cells, notebooks allow users to execute their workflows interactively and enjoy particularly tight feedback. However, as cells are added, removed, reordered, and rerun, this hidden intermediate state accumulates in a way that is not necessarily correlated with the notebook's visible code, making execution behavior difficult to reason about and leading to errors and a lack of reproducibility. We present NBSafety, a custom Jupyter kernel that uses runtime tracing and static analysis to automatically manage lineage associated with cell execution and global notebook state. NBSafety detects and prevents errors that users make during unaided notebook interactions, all while preserving the flexibility of existing notebook semantics. We evaluate NBSafety's ability to prevent erroneous interactions by replaying and analyzing 666 real notebook sessions. Of these, NBSafety identified 117 sessions with potential safety errors, and in the remaining 549 sessions, the cells that NBSafety identified as resolving safety issues were more than $7\times$ more likely to be selected by users for re-execution compared to a random baseline, even though the users were not using NBSafety and were therefore not influenced by its suggestions.
Modern software development is increasingly dependent on components, libraries, and frameworks coming from third-party vendors or open-source suppliers and made available through a number of platforms (or forges). This way of writing software puts an emphasis on reuse and on composition, commoditizing the services which modern applications require. On the other hand, bugs and vulnerabilities in a single library living in one such ecosystem can affect, directly or by transitivity, a huge number of other libraries and applications. Currently, only product-level information on library dependencies is used to contain this kind of danger, but this knowledge often proves too imprecise to lead to effective (and possibly automated) handling policies. We will discuss how fine-grained function-level dependencies can greatly improve reliability and reduce the impact of vulnerabilities on the whole software ecosystem.
We present a layered hybrid-system approach to quantum communication that involves the distribution of a topological cluster state throughout a quantum network. Photon loss and other errors are suppressed by optical multiplexing and entanglement purification. The scheme is scalable to large distances, achieving an end-to-end rate of 1 kHz with around 50 qubits per node. We suggest a potentially suitable implementation of an individual node composed of erbium spins (single atom or ensemble) coupled via flux qubits to a microwave resonator, allowing for deterministic local gates, stable quantum memories, and emission of photons in the telecom regime.
Yi Liu, Limin Wang, Xiao Ma (2021)
Temporal action localization (TAL) is an important and challenging problem in video understanding. However, most existing TAL benchmarks are built upon the coarse granularity of action classes, which exhibits two major limitations in this task. First, coarse-level actions can make the localization models overfit in high-level context information and ignore the atomic action details in the video. Second, the coarse action classes often lead to ambiguous annotations of temporal boundaries, which are inappropriate for temporal action localization. To tackle these problems, we develop a novel large-scale and fine-grained video dataset, coined as FineAction, for temporal action localization. In total, FineAction contains 103K temporal instances of 106 action categories, annotated in 17K untrimmed videos. FineAction introduces new opportunities and challenges for temporal action localization, thanks to its distinct characteristics of fine action classes with rich diversity, dense annotations of multiple instances, and co-occurring actions of different classes. To benchmark FineAction, we systematically investigate the performance of several popular temporal localization methods on it, and deeply analyze the influence of short-duration and fine-grained instances in temporal action localization. We believe that FineAction can advance research of temporal action localization and beyond.
Although recent advances in deep learning have accelerated progress on the weakly supervised object localization (WSOL) task, it remains challenging to identify the entire body of an object rather than only its discriminative parts. In this paper, we propose a novel residual fine-grained attention (RFGA) module that autonomously excites the less activated regions of an object by utilizing information distributed over channels and locations within feature maps in combination with a residual operation. To be specific, we devise a series of mechanisms of triple-view attention representation, attention expansion, and feature calibration. Unlike other attention-based WSOL methods that learn a coarse attention map, having the same values across elements in feature maps, our proposed RFGA learns fine-grained values in an attention map by assigning different attention values to each of the elements. We validated the superiority of our proposed RFGA module by comparing it with recent methods in the literature over three datasets. Further, we analyzed the effect of each mechanism in our RFGA and visualized attention maps to gain insights.