Automatic Detection and Resolution of Software Merge Conflicts: Are We There Yet?

Added by: Bowen Shen
Publication date: 2021
Language: English
Authors: Bowen Shen





Developers create software branches for tentative feature addition and bug fixing, and periodically merge branches to release software with new features or repair patches. When the program edits from different branches textually overlap (i.e., textual conflicts), or when the co-application of those edits leads to compilation or runtime errors (i.e., compiling or dynamic conflicts), it is challenging and time-consuming for developers to eliminate the merge conflicts. Prior studies examined the popularity of merge conflicts and how conflicts relate to code smells or the software development process; tools were built to find and resolve conflicts. However, some fundamental research questions are still not comprehensively explored, including (1) how conflicts are introduced, (2) how developers manually resolve conflicts, and (3) which conflicts current tools cannot handle. In this paper, we took a hybrid approach that combines automatic detection with manual inspection to reveal 204 merge conflicts and their resolutions in the version history of 15 open-source repositories. Our data analysis reveals three phenomena. First, compiling and dynamic conflicts are harder to detect, although current tools mainly focus on textual conflicts. Second, in the same merging context, developers usually resolved similar textual conflicts with similar strategies. Third, developers manually fixed most of the inspected compiling and dynamic conflicts by editing the merged version in the same way they had edited one of the branches. Our research reveals the challenges and opportunities for automatic detection and resolution of merge conflicts; it also sheds light on related areas like systematic program editing and change recommendation.
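The three conflict categories differ in how mechanically they can be found. Textual conflicts, the kind most current tools target, can be surfaced directly by the version-control system; the sketch below is a minimal illustration (not the tooling used in the paper) that replays a merge in a scratch checkout and lists the files Git leaves unmerged. The repository path and branch names are hypothetical inputs, and git is assumed to be on PATH.

```python
# Minimal sketch of textual-conflict detection (not the paper's tool):
# replay a merge without committing, then list the paths Git left unmerged.
import subprocess

def textual_conflicts(repo: str, ours: str, theirs: str) -> list:
    """Return files that conflict when `theirs` is merged into `ours`."""
    def git(*args):
        return subprocess.run(["git", "-C", repo, *args],
                              capture_output=True, text=True)

    git("checkout", "--detach", ours)              # work on a throwaway HEAD
    merge = git("merge", "--no-commit", "--no-ff", theirs)
    # Status "U" marks unmerged (textually conflicted) paths.
    unmerged = git("diff", "--name-only", "--diff-filter=U").stdout.split()
    git("merge", "--abort")                        # leave the clone clean
    return unmerged if merge.returncode != 0 else []
```

Compiling and dynamic conflicts carry no such marker: the merge applies cleanly and the breakage only surfaces when the merged program is built or run, which is one reason the paper finds them harder to detect.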



Related research

Automated detection of software vulnerabilities is a fundamental problem in software security. Existing program analysis techniques suffer from either high false positives or high false negatives. Recent progress in Deep Learning (DL) has resulted in a surge of interest in applying DL to automated vulnerability detection, and several recent studies have demonstrated promising results, achieving an accuracy of up to 95% at detecting vulnerabilities. In this paper, we ask: how well do the state-of-the-art DL-based techniques perform in a real-world vulnerability prediction scenario? To our surprise, we find that their performance drops by more than 50%. A systematic investigation of what causes this precipitous performance drop reveals that existing DL-based vulnerability prediction approaches suffer from challenges with the training data (e.g., data duplication, unrealistic distribution of vulnerable classes) and with the model choices (e.g., simple token-based models). As a result, these approaches often do not learn features related to the actual cause of the vulnerabilities; instead, they learn unrelated artifacts from the dataset (e.g., specific variable/function names). Leveraging these empirical findings, we demonstrate how a more principled approach to data collection and model design, based on realistic settings of vulnerability prediction, can lead to better solutions. The resulting tools perform significantly better than the studied baselines: up to a 33.57% boost in precision and a 128.38% boost in recall compared to the best-performing model in the literature. Overall, this paper elucidates potential issues in existing DL-based vulnerability prediction systems and draws a roadmap for future DL-based vulnerability prediction research. In that spirit, we make available all the artifacts supporting our results: https://git.io/Jf6IA.
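One of the training-data pitfalls named above, duplicated samples leaking across the train/test split, is straightforward to guard against. Below is a minimal sketch (not the paper's pipeline): it hashes each function after whitespace normalization and keeps only the first occurrence, so trivially reformatted clones cannot land on both sides of the split. The `samples` list of (code, label) pairs is a hypothetical input shape.

```python
# Minimal sketch: drop duplicate code samples before train/test splitting,
# one mitigation for the data-duplication issue described above.
import hashlib

def dedup(samples):
    seen, unique = set(), []
    for code, label in samples:
        # Normalize whitespace so reformatted clones hash identically.
        digest = hashlib.sha256(" ".join(code.split()).encode()).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append((code, label))
    return unique
```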
In cyberspace nowadays, there is a burst of information to which everyone has access. However, apart from the advantages the Internet offers, it also hides numerous dangers for both people and nations. Cyberspace has a dark side, including terrorism, bullying, and other types of violence. Cyberwarfare is a kind of virtual war that causes the same destruction a physical war would. In this article, we discuss what cyberterrorism is and how it can lead to cyberwarfare.
Embodied instruction following is a challenging problem that requires an agent to infer a sequence of primitive actions to achieve a goal environment state from complex language and visual inputs. Action Learning From Realistic Environments and Directives (ALFRED) is a recently proposed benchmark for this problem, consisting of step-by-step natural language instructions that achieve subgoals which compose into an ultimate high-level goal. Key challenges for this task include localizing target locations and navigating to them through visual inputs, and grounding language instructions in the visual appearance of objects. To address these challenges, in this study, we augment the agent's field of view during navigation subgoals with multiple viewing angles, and train the agent to predict its relative spatial relation to the target location at each timestep. We also improve language grounding by introducing a pre-trained object detection module into the model pipeline. Empirical studies show that our approach exceeds the baseline model's performance.
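As a concrete illustration of the auxiliary-supervision idea, the sketch below (not the paper's actual model; all layer names and sizes are assumptions) adds a second head that predicts a discretized spatial relation to the target alongside the action prediction, so navigation can be supervised at every timestep.

```python
# Minimal sketch of an auxiliary spatial-relation head (hypothetical sizes).
import torch.nn as nn

class AgentHeads(nn.Module):
    def __init__(self, hidden=512, n_actions=12, n_relations=9):
        super().__init__()
        self.action_head = nn.Linear(hidden, n_actions)
        # Auxiliary target: a coarse relation class, e.g. a 3x3 grid of
        # {left, ahead, right} x {near, mid, far} (hypothetical scheme).
        self.relation_head = nn.Linear(hidden, n_relations)

    def forward(self, state):          # state: (batch, hidden) agent features
        return self.action_head(state), self.relation_head(state)
```

Training would add a cross-entropy term on the relation logits to the usual action loss; the weighting between the two is a design choice not specified here.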
Decade-long timing observations of arrays of millisecond pulsars have placed highly constraining upper limits on the amplitude of the nanohertz gravitational-wave stochastic signal from the mergers of supermassive black-hole binaries ($\sim 10^{-15}$ strain at $f = 1/\mathrm{yr}$). These limits suggest that binary merger rates have been overestimated, or that environmental influences from nuclear gas or stars accelerate orbital decay, reducing the gravitational-wave signal at the lowest, most sensitive frequencies. This prompts the question of whether nanohertz gravitational waves are likely to be detected in the near future. In this letter, we answer this question quantitatively using simple statistical estimates, deriving the range of true signal amplitudes that are compatible with current upper limits, and computing expected detection probabilities as a function of observation time. We conclude that small arrays consisting of the pulsars with the least timing noise, which yield the tightest upper limits, have discouraging prospects of making a detection in the next two decades. By contrast, we find that large arrays are crucial to detection because the quadrupolar spatial correlations induced by gravitational waves can be well sampled by many pulsar pairs. Indeed, timing programs which monitor a large and expanding set of pulsars have an $\sim 80\%$ probability of detecting gravitational waves within the next ten years, under assumptions on merger rates and environmental influences ranging from optimistic to conservative. Even in the extreme case where $90\%$ of binaries stall before merger and environmental coupling effects diminish low-frequency gravitational-wave power, detection is delayed by at most a few years.
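The importance of pair count can be made explicit with a standard pulsar-timing-array scaling (a general weak-signal result, not a derivation from this letter): an array of $N$ pulsars supplies

$$M = \frac{N(N-1)}{2}$$

cross-correlation pairs, and the expected detection statistic grows roughly as $\langle \mathrm{S/N} \rangle \propto \sqrt{M}$ in the weak-signal regime, so adding pulsars to the array pays off faster than further reducing the timing noise of a few.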
Accurate molecular crystal structure prediction is a fundamental goal in academic and industrial condensed matter research, and polymorphism is arguably the biggest obstacle on the way. We tackle this challenge in the difficult case of the repeatedly studied, abundantly used amino acid glycine, which still hosts little-known phase transitions, and we illustrate the current state of the field through this example. We demonstrate that the combination of recent progress in structure search algorithms with the latest advances in the description of van der Waals interactions in Density Functional Theory, supported by data-mining analysis, enables a leap in predictive power: we resolve, without prior empirical input, all known phases of glycine, as well as the structure of the previously unresolved $\zeta$ phase, a decade after its experimental observation [Boldyreva et al., \textit{Z. Kristallogr.} \textbf{2005}, \textit{220}, 50-57]. The search for the well-established $\alpha$ phase instead reveals the remaining challenges in exploring a polymorphic landscape.