
SmartTrack: Efficient Predictive Race Detection

Added by Jake Roemer
Publication date: 2019
Language: English





Widely used data race detectors, including the state-of-the-art FastTrack algorithm, incur performance costs that are acceptable for regular in-house testing, but miss races detectable from the analyzed execution. Predictive analyses detect more data races in an analyzed execution than FastTrack detects, but at significantly higher performance cost. This paper presents SmartTrack, an algorithm that optimizes predictive race detection analyses, including two analyses from prior work and a new analysis introduced in this paper. SmartTrack's algorithm incorporates two main optimizations: (1) epoch and ownership optimizations from prior work, applied to predictive analysis for the first time; and (2) novel conflicting critical section optimizations introduced by this paper. Our evaluation shows that SmartTrack achieves performance competitive with FastTrack, a qualitative improvement in the state of the art for data race detection.
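For readers unfamiliar with epoch-based analysis, the sketch below illustrates the kind of O(1) per-access check that epoch optimizations enable in a happens-before detector such as FastTrack. It is a minimal, simplified illustration only: the class and method names (VectorClock, Epoch, VarState, checkWrite, EpochDemo) are invented for this sketch, and SmartTrack's predictive reasoning and conflicting-critical-section optimizations are not represented here.

    // Minimal sketch of an epoch-based write-write check in the spirit of
    // FastTrack. Names and structure are illustrative only.
    class VectorClock {
        final int[] c;
        VectorClock(int nThreads) { c = new int[nThreads]; }
        int get(int tid) { return c[tid]; }
    }

    class Epoch {                                  // "c@t": clock c of thread t
        final int tid, clock;
        Epoch(int tid, int clock) { this.tid = tid; this.clock = clock; }
        boolean happensBefore(VectorClock vc) { return clock <= vc.get(tid); }
    }

    class VarState {
        Epoch lastWrite = new Epoch(0, 0);         // epoch of the last write

        // O(1) check: report a race unless the last write happens before the
        // current write, then update the stored epoch in place.
        void checkWrite(int tid, VectorClock threadClock) {
            if (!lastWrite.happensBefore(threadClock))
                System.out.println("write-write race detected");
            lastWrite = new Epoch(tid, threadClock.get(tid));
        }
    }

    public class EpochDemo {
        public static void main(String[] args) {
            VectorClock t0 = new VectorClock(2);   // thread 0's clock: [3, 0]
            t0.c[0] = 3;
            VectorClock t1 = new VectorClock(2);   // thread 1's clock: [0, 5]
            t1.c[1] = 5;                           // no synchronization with thread 0

            VarState x = new VarState();
            x.checkWrite(0, t0);                   // thread 0 writes x
            x.checkWrite(1, t1);                   // unordered write: race reported
        }
    }

Tracking only the last-writer epoch rather than a full vector clock per variable is what keeps the common-case check constant time; SmartTrack's contribution is making predictive analyses compatible with this kind of optimization.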


Related research

Predictive data race detectors find data races that exist in executions other than the observed execution. Smaragdakis et al. introduced the causally-precedes (CP) relation and a polynomial-time analysis for sound (no false races) predictive data race detection. However, their analysis cannot scale beyond analyzing bounded windows of execution traces. This work introduces a novel dynamic analysis called Raptor that computes CP soundly and completely. Raptor is inherently an online analysis that analyzes and finds all CP-races of an execution trace in its entirety. An evaluation of a prototype implementation of Raptor shows that it scales to program executions that the prior CP analysis cannot handle, finding data races that the prior CP analysis cannot find.
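To make the idea of a predictable race concrete, here is a small, purely illustrative Java program (invented for this page, not an example from the paper; the names PredictableRace, data, and flag are assumptions). In an execution where the first thread's critical section completes before the second thread's, a happens-before detector orders the two accesses to data through the lock and stays silent, even though the two critical sections touch no common data; a sound predictive analysis such as a CP-based one can recognize that a valid reordering of the same trace exposes the race.

    // Illustrative predictable race; not taken from the paper.
    public class PredictableRace {
        static final Object m = new Object();
        static int data = 0;                 // shared field, not protected by m
        static boolean flag = false;         // only ever written under m

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> {
                data = 1;                    // unprotected write
                synchronized (m) { flag = true; }
            });
            Thread t2 = new Thread(() -> {
                synchronized (m) {
                    // no access that conflicts with t1's critical section
                }
                int r = data;                // unprotected read, races with data = 1
            });
            t1.start(); t2.start();
            t1.join();  t2.join();
        }
    }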
Implementing bug-free concurrent programs is a challenging task in modern software development. State-of-the-art static analyses find hundreds of concurrency bugs in production code, scaling to large codebases. Yet, fixing these bugs in constantly changing codebases represents a daunting effort for programmers, particularly because a fix in the concurrent code can introduce other bugs in a subtle way. In this work, we show how to harness compositional static analysis for concurrency bug detection to enable a new Automated Program Repair (APR) technique for data races in large concurrent Java codebases. The key innovation of our work is an algorithm that translates the procedure summaries inferred by the analysis tool for the purpose of bug reporting into small local patches that fix concurrency bugs (without introducing new ones). This synergy makes it possible to extend the virtues of compositional static concurrency analysis to APR, making our approach effective (it can detect and fix many more bugs than existing tools for data race repair), scalable (it takes seconds to analyse and suggest fixes for sizeable codebases), and usable (generally, it does not require annotations from the users and can perform continuous automated repair). Our study conducted on popular open-source projects has confirmed that our tool automatically produces concurrency fixes similar to those proposed by the developers in the past.
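As a generic illustration of the shape such a small local patch can take (this is not the tool's actual output; the class and members Counter, record, hits, and lock are invented here), a typical fix guards an unsynchronized update with the lock that already protects the rest of the class's state:

    // Generic before/after sketch of a local data-race fix; illustrative only.
    public class Counter {
        private final Object lock = new Object();
        private int hits = 0;

        // Buggy original: hits++ is an unsynchronized read-modify-write,
        // so concurrent callers race on `hits`.
        //   void record() { hits++; }

        // Patched: the increment is guarded by the same lock the class already
        // uses elsewhere, removing the race without changing behavior.
        void record() {
            synchronized (lock) { hits++; }
        }

        int value() {
            synchronized (lock) { return hits; }
        }
    }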
In this paper we present a Learning Model Predictive Controller (LMPC) for autonomous racing. We model the autonomous racing problem as a minimum-time iterative control task, where an iteration corresponds to a lap. In the proposed approach, the race time at each lap does not increase compared to the previous lap. The system trajectory and input sequence of each lap are stored and used to systematically update the controller for the next lap. The first contribution of the paper is to propose an LMPC strategy which reduces the computational burden associated with existing LMPC strategies. In particular, we show how to construct a safe set and an approximation to the value function using a subset of the stored data. The second contribution is to present a system identification strategy for the autonomous racing iterative control task. We use data from previous iterations and the vehicle's kinematics equations to build an affine time-varying prediction model. The effectiveness of the proposed strategy is demonstrated by experimental results on the Berkeley Autonomous Race Car (BARC) platform.
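In generic LMPC notation (shown here only as an illustrative sketch; symbols and details may differ from the paper's exact formulation), lap j solves, from the current state x_t, a finite-horizon problem whose terminal state must lie in a sampled safe set built from earlier laps and whose terminal cost approximates the value function learned from them:

    \min_{u_t,\dots,u_{t+N-1}} \; \sum_{k=t}^{t+N-1} h(x_k, u_k) \;+\; Q^{j-1}(x_{t+N})
    \quad \text{s.t.} \quad
    x_{k+1} = A_k x_k + B_k u_k + c_k, \qquad
    x_{t+N} \in \mathcal{SS}^{j-1}

Here a stage cost such as h(x, u) = 1 for every step before the finish line encodes minimum lap time, the sampled safe set SS^{j-1} collects states stored from previous laps, Q^{j-1} interpolates their recorded costs-to-go, and (A_k, B_k, c_k) is the affine time-varying prediction model identified from the stored data; building the safe set and the value-function approximation from only a subset of the stored data is what reduces the computational burden relative to earlier LMPC schemes.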
Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques because millions of new malware examples are injected into cyberspace on a daily basis. However, ML is vulnerable to attacks known as adversarial examples. In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties. This not only leads us to map attacks and defenses onto partial order structures, but also allows us to clearly describe the attack-defense arms race in the AMD context. We draw a number of insights, including: knowing the defender's feature set is critical to the success of transfer attacks; the effectiveness of practical evasion attacks largely depends on the attacker's freedom in conducting manipulations in the problem space; knowing the attacker's manipulation set is critical to the defender's success; and the effectiveness of adversarial training depends on the defender's capability in identifying the most powerful attack. We also discuss a number of future research directions.
Mutation analysis can provide valuable insights into both the System Under Test (SUT) and its test suite. However, it is not scalable due to the cost of building and testing a large number of mutants. Predictive Mutation Testing (PMT) has been proposed to reduce the cost of mutation testing, but it can only provide statistical inference about whether a mutant will be killed or not by the entire test suite. We propose Seshat, a Predictive Mutation Analysis (PMA) technique that can accurately predict the entire kill matrix, not just the mutation score of the given test suite. Seshat exploits the natural language channel in code and learns the relationship between the syntactic and semantic concepts of each test case and the mutants it can kill, from a given kill matrix. The learnt model can later be used to predict the kill matrices for subsequent versions of the program.
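For concreteness, a kill matrix simply records, for each test and mutant pair, whether running that test detects ("kills") that mutant. The toy sketch below is illustrative only and does not reflect Seshat's actual data structures or model; all names in it are invented.

    // Toy illustration of a kill matrix; not Seshat's representation.
    public class KillMatrixDemo {
        public static void main(String[] args) {
            String[] tests   = { "testAdd", "testOverflow" };
            String[] mutants = { "m1: + -> -", "m2: < -> <=" };

            // killed[i][j] is true when tests[i] kills mutants[j].
            // Full mutation analysis runs every pair; a predictive analysis
            // instead learns to fill in this matrix without running the tests.
            boolean[][] killed = {
                { true,  false },
                { true,  true  },
            };

            for (int i = 0; i < tests.length; i++)
                for (int j = 0; j < mutants.length; j++)
                    System.out.printf("%s vs %s -> %s%n",
                            tests[i], mutants[j], killed[i][j] ? "killed" : "survived");
        }
    }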
