
HIPPODROME: Data Race Repair using Static Analysis Summaries

Added by Abhishek Tiwari
Publication date: 2021
Language: English





Implementing bug-free concurrent programs is a challenging task in modern software development. State-of-the-art static analyses find hundreds of concurrency bugs in production code, scaling to large codebases. Yet, fixing these bugs in constantly changing codebases represents a daunting effort for programmers, particularly because a fix in the concurrent code can introduce other bugs in a subtle way. In this work, we show how to harness compositional static analysis for concurrency bug detection, to enable a new Automated Program Repair (APR) technique for data races in large concurrent Java codebases. The key innovation of our work is an algorithm that translates procedure summaries inferred by the analysis tool for the purpose of bug reporting, into small local patches that fix concurrency bugs (without introducing new ones). This synergy makes it possible to extend the virtues of compositional static concurrency analysis to APR, making our approach effective (it can detect and fix many more bugs than existing tools for data race repair), scalable (it takes seconds to analyse and suggest fixes for sizeable codebases), and usable (generally, it does not require annotations from the users and can perform continuous automated repair). Our study conducted on popular open-source projects has confirmed that our tool automatically produces concurrency fixes similar to those proposed by the developers in the past.
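As a rough illustration (not taken from the paper, and with made-up class and field names), the patches produced by such an approach are typically small and local: the unsynchronized accesses flagged by the race analysis get wrapped in a synchronized block or guarded by an existing lock, leaving the rest of the code untouched.

    // Hypothetical example: a data race on `count`, and the kind of
    // small, local fix an automated repair tool might propose.
    class Counter {
        private int count = 0;            // shared state, touched by many threads

        // BEFORE (racy): two threads calling increment() concurrently can
        // lose updates, because the read-modify-write is not atomic.
        //   void increment() { count++; }

        // AFTER: a minimal local patch guards the access with the object's
        // monitor, so conflicting accesses become mutually exclusive.
        synchronized void increment() {
            count++;
        }

        synchronized int get() {
            return count;
        }
    }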



Related research

Predictive data race detectors find data races that exist in executions other than the observed execution. Smaragdakis et al. introduced the causally-precedes (CP) relation and a polynomial-time analysis for sound (no false races) predictive data race detection. However, their analysis cannot scale beyond analyzing bounded windows of execution traces. This work introduces a novel dynamic analysis called Raptor that computes CP soundly and completely. Raptor is inherently an online analysis that analyzes and finds all CP-races of an execution trace in its entirety. An evaluation of a prototype implementation of Raptor shows that it scales to program executions that the prior CP analysis cannot handle, finding data races that the prior CP analysis cannot find.
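For intuition, here is a minimal Java sketch (not from the paper; names are made up) of the kind of race a predictive detector targets. Suppose the observed schedule happens to run t1's critical section entirely before t2's: happens-before then orders the two writes to `data` through the lock, so a purely happens-before-based detector stays silent, yet in another feasible schedule the writes are unordered and race, which CP-style analyses can predict from the single observed trace.

    class PredictiveRaceDemo {
        static final Object m = new Object();
        static int data = 0;

        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> {
                synchronized (m) { data = 1; }   // write under lock m
            });
            Thread t2 = new Thread(() -> {
                synchronized (m) { /* unrelated work, no shared access */ }
                data = 2;                        // write with no lock held
            });
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // If t1 happened to finish before t2 in the observed run, the
            // release/acquire of m orders the two writes, hiding the race
            // from happens-before detectors. CP does not order the empty
            // critical section against t1's, so the race is still reported.
        }
    }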
Dynamic programming languages, such as PHP, JavaScript, and Python, provide built-in data structures including associative arrays and objects with similar semantics: object properties can be created at run time and accessed via arbitrary expressions. While a high level of security and safety can be of particular importance for applications written in these languages (consider a web application storing sensitive data and providing its functionality worldwide), dynamic data structures pose significant challenges for data-flow analysis, making traditional static verification methods both unsound and imprecise. In this paper, we propose a sound and precise approach for value and points-to analysis of programs with associative-array-like data structures, upon which data-flow analyses can be built. We implemented our approach in the web-application domain, in an analyzer of PHP code.
We propose a new static approach to Role-Based Access Control (RBAC) policy enforcement. The static approach we advocate includes a new design methodology, for applications involving RBAC, which integrates the security requirements into the systems architecture. We apply this new approach to policies restricting calls to methods in Java applications. We present a language to express RBAC policies on calls to methods in Java, a set of design patterns which Java programs must adhere to for the policy to be enforced statically, and a description of the checks made by our static verifier for static enforcement.
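Purely as a hypothetical sketch of the general idea, and not the paper's actual policy language or design patterns: one way to make a role requirement visible to a static checker in Java is to demand an unforgeable role token as a parameter, so a call to a protected method cannot be written without evidence that the role was obtained.

    // Hypothetical capability-style sketch (all names invented).
    final class AdminToken {
        private AdminToken() {}                    // clients cannot forge tokens

        static AdminToken login(String user, char[] password) {
            if (!"admin".equals(user)) {           // placeholder authentication
                throw new SecurityException("role Admin not granted");
            }
            return new AdminToken();
        }
    }

    class PayrollService {
        // The policy "only Admins may call raiseSalary" is encoded in the
        // signature, so a static verifier can check every call site.
        void raiseSalary(AdminToken proof, String employee, int amount) {
            // reaching this point requires proof of the Admin role
        }
    }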
Yun Peng, Zongjie Li, Cuiyun Gao, 2021
Type inference for dynamic programming languages is an important yet challenging task. By leveraging the natural language information of existing human annotations, deep neural networks outperform other traditional techniques and become the state-of-the-art (SOTA) in this task. However, they are facing some new challenges, such as fixed type set, type drift, type correctness, and composite type prediction. To mitigate the challenges, in this paper, we propose a hybrid type inference framework named HiTyper, which integrates static inference into deep learning (DL) models for more accurate type prediction. Specifically, HiTyper creates a new syntax graph for each program, called type graph, illustrating the type flow among all variables in the program. Based on the type graph, HiTyper statically infers the types of the variables with appropriate static constraints. HiTyper then adopts a SOTA DL model to predict the types of other variables that cannot be inferred statically, during which process a type correction algorithm is employed to validate and correct the types recommended by the DL model. Extensive experiments show that HiTyper outperforms the SOTA DL approach by 12.7% in terms of top-1 F1-score. Moreover, HiTyper filters out 50.6% of incorrect candidate types recommended by the SOTA DL model, indicating that HiTyper could improve the correctness of predicted types. Case studies also demonstrate the capability of HiTyper in alleviating the fixed type set issue, and in handling type drift and complicated types such as composite data types.
We propose a method, based on program analysis and transformation, for eliminating timing side channels in software code that implements security-critical applications. Our method takes as input the original program together with a list of secret variables (e.g., cryptographic keys, security tokens, or passwords) and returns the transformed program as output. The transformed program is guaranteed to be functionally equivalent to the original program and free of both instruction- and cache-timing side channels. Specifically, we ensure that the number of CPU cycles taken to execute any path is independent of the secret data, and the cache behavior of memory accesses, in terms of hits and misses, is independent of the secret data. We have implemented our method in LLVM and validated its effectiveness on a large set of applications, which are cryptographic libraries with 19,708 lines of C/C++ code in total. Our experiments show the method is both scalable for real applications and effective in eliminating timing side channels.
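The method above operates on C/C++ through LLVM; purely to illustrate the underlying idea of removing secret-dependent control flow, here is a minimal Java sketch of a comparison routine rewritten so the work it does no longer depends on where the guess diverges from the secret.

    final class TimingDemo {
        // Leaky: returns as soon as a mismatch is found, so the running time
        // reveals how many leading bytes of the guess are correct.
        static boolean leakyEquals(byte[] secret, byte[] guess) {
            if (secret.length != guess.length) return false;
            for (int i = 0; i < secret.length; i++) {
                if (secret[i] != guess[i]) return false;   // secret-dependent branch
            }
            return true;
        }

        // Constant-time in the number of iterations and branches taken:
        // accumulate differences with bitwise ops, decide once at the end.
        static boolean constantTimeEquals(byte[] secret, byte[] guess) {
            if (secret.length != guess.length) return false;
            int diff = 0;
            for (int i = 0; i < secret.length; i++) {
                diff |= secret[i] ^ guess[i];
            }
            return diff == 0;
        }
    }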
