
The Shut the f**k up Phenomenon: Characterizing Incivility in Open Source Code Review Discussions

Published by: Isabella Ferreira
Publication date: 2021
Research field: Informatics Engineering
Language: English





Code review is an important quality assurance activity for software development. Code review discussions among developers and maintainers can be heated and sometimes involve personal attacks and unnecessary disrespectful comments, thus demonstrating incivility. Although incivility in public discussions has received increasing attention from researchers in different domains, knowledge about the characteristics, causes, and consequences of uncivil communication is still very limited in the context of software development, and more specifically, code review. To address this gap in the literature, we leverage the mature social construct of incivility as a lens to understand confrontational conflicts in open source code review discussions. To that end, we conducted a qualitative analysis of 1,545 emails from the Linux Kernel Mailing List (LKML) that were associated with rejected changes. We found that more than half (66.66%) of the non-technical emails included uncivil features. In particular, frustration, name calling, and impatience are the most frequent features in uncivil emails. We also found that there are civil alternatives for addressing arguments, while uncivil comments can potentially be made by anyone when discussing any topic. Finally, we identified various causes and consequences of such uncivil communication. Our work serves as the first study of the phenomenon of (in)civility in open source software development, paving the way for a new field of research on collaboration and communication in the context of software engineering activities.
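The study above coded the emails manually; no classifier is described in the abstract. Purely as an illustration of how the reported features (frustration, name calling, impatience) might be surfaced as candidates for human annotation, here is a minimal keyword-based sketch in Python. All function names and keyword lists are hypothetical and are not the authors' coding scheme.

```python
import re

# Hypothetical lexicons loosely inspired by the features the paper reports
# (frustration, name calling, impatience). Real coding requires human
# judgment; every keyword below is invented for illustration.
UNCIVIL_LEXICONS = {
    "frustration": [r"\bfed up\b", r"\bsick of\b", r"\bwast(e|ing) my time\b"],
    "name_calling": [r"\bidiot\b", r"\bmoron\b", r"\bclueless\b"],
    "impatience": [r"\bhow many times\b", r"\bhurry up\b", r"\bstill broken\b"],
}

def flag_candidate_features(email_body: str) -> dict:
    """Return which hypothetical uncivil features match the email text."""
    text = email_body.lower()
    return {
        feature: any(re.search(pattern, text) for pattern in patterns)
        for feature, patterns in UNCIVIL_LEXICONS.items()
    }

sample = "How many times do I have to repeat this? I'm sick of resending it."
print(flag_candidate_features(sample))
# -> {'frustration': True, 'name_calling': False, 'impatience': True}
```

Such a filter could only pre-rank candidate emails; the qualitative analysis described in the paper would still decide what actually counts as uncivil.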



Read also

Eye tracking tools are used in software engineering research to study various software development activities. However, a major limitation of these tools is their inability to track gaze data for activities that involve source code editing. We present a novel solution to support eye tracking experiments for tasks involving source code edits as an extension of the iTrace community infrastructure. We introduce the iTrace-Atom plugin and gazel -- a Python data processing pipeline that maps gaze information to changing source code elements and provides researchers with a way to query this dynamic data. iTrace-Atom is evaluated via a series of simulations and is over 99% accurate at high eye-tracking speeds of over 1,000 Hz. iTrace and gazel completely revolutionize the way eye tracking studies are conducted in realistic settings with the presence of scrolling, context switching, and now editing. This opens the door to supporting many day-to-day software engineering tasks such as bug fixing, adding new features, and refactoring.
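The abstract does not show gazel's actual API, but the core problem it describes, keeping gaze data aligned with source code that changes underneath it, can be illustrated with a small sketch. The data structures and the remapping function below are assumptions for illustration only, not part of iTrace or gazel.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Fixation:
    timestamp_ms: int
    line: int  # 1-based line number the gaze landed on

@dataclass
class Edit:
    at_line: int  # first line affected by the edit
    delta: int    # +n lines inserted, -n lines deleted

def remap_fixation(fix: Fixation, edits: List[Edit]) -> Optional[Fixation]:
    """Shift a fixation's line number across later edits; None if deleted."""
    line = fix.line
    for e in edits:
        if e.delta < 0 and e.at_line <= line < e.at_line - e.delta:
            return None  # the fixated line was deleted by this edit
        if line >= e.at_line:
            line += e.delta
    return Fixation(fix.timestamp_ms, line)

# A fixation on line 10, followed by 3 lines inserted at line 5:
print(remap_fixation(Fixation(timestamp_ms=100, line=10),
                     [Edit(at_line=5, delta=3)]))
# -> Fixation(timestamp_ms=100, line=13)
```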
The development of scientific software is, more than ever, critical to the practice of science, and this is accompanied by a trend towards more open and collaborative efforts. Unfortunately, there has been little investigation into who is driving the evolution of such scientific software or how the collaboration happens. In this paper, we address this problem. We present an extensive analysis of seven open-source scientific software projects in order to develop an empirically-informed model of the development process. This analysis was complemented by a survey of 72 scientific software developers. In the majority of the projects, we found senior research staff (e.g., professors) to be responsible for half or more of the commits (an average commit share of 72%) and heavily involved in architectural concerns (seniors were more likely to interact with files related to the build system, project meta-data, and developer documentation). Juniors (e.g., graduate students) also contribute substantially -- in one studied project, juniors made almost 100% of its commits. Still, graduate students had the longest contribution periods among juniors (with 1.72 years of commit activity, compared to 0.98 years for postdocs and 4 months for undergraduates). Moreover, we found that third-party contributors are scarce, contributing for just one day to the project. The results of this study aim to help scientists better understand their own projects, communities, and contributors' behavior, while paving the way for future software engineering research.
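As a worked illustration of the commit-share figures quoted above (e.g., an average senior share of 72%), the following minimal Python sketch aggregates commits by contributor seniority. The input format and seniority labels are assumptions; the paper's actual mining pipeline is not described in this abstract.

```python
from collections import Counter

# Hypothetical data: one (author, seniority) tuple per commit,
# e.g., mined from `git log` plus a manually built author roster.
commits = [
    ("alice", "senior"), ("alice", "senior"), ("bob", "junior"),
    ("alice", "senior"), ("carol", "junior"), ("dave", "third-party"),
]

def commit_share(commits):
    """Return each seniority group's share of all commits."""
    counts = Counter(level for _, level in commits)
    total = sum(counts.values())
    return {level: n / total for level, n in counts.items()}

print(commit_share(commits))
# -> {'senior': 0.5, 'junior': 0.333..., 'third-party': 0.166...}
```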
Usability is an increasing concern in open source software (OSS). Given the recent changes in the OSS landscape, it is imperative to examine the factors, practices, and challenges that OSS contributors currently value concerning usability. We accumulated this knowledge through a survey of a wide range of contributors to OSS applications. Through analyzing 84 survey responses, we found that many participants recognized the importance of usability. While most relied on issue tracking systems to collect user feedback, a few participants also adopted typical user-centered design methods. However, most participants demonstrated a system-centric rather than a user-centric view. Understanding the diverse needs of end-users and consolidating their varied feedback posed unique challenges for the OSS contributors when addressing usability in the most recent development context. Our work provides important insights for OSS practitioners and tool designers exploring ways to promote a user-centric mindset and improve usability practice in current OSS communities.
In software engineering, maintenance and evolution are among the most critical activities. However, to perform both with quality, minimizing impacts and risks, developers first need to analyze the code and identify where the main problems come from. In this paper, we introduce DR-Tools Suite, a set of lightweight open-source tools that analyze and calculate source code metrics, allowing developers to visualize the results in different formats and graphs. We also define a set of heuristics to help with code analysis. We conducted two case studies (one academic and one industrial) to collect feedback on the tools suite and on how we will evolve the tools, as well as insights for developing new tools that support developers in their daily work.
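The abstract does not specify which metrics DR-Tools Suite computes, so the following is only a hedged sketch of the kind of lightweight source code metrics such tools typically report: non-blank lines of code and a crude branching-based complexity proxy, computed over Python source via the standard `ast` module.

```python
import ast

def simple_metrics(source: str) -> dict:
    """Compute non-blank lines of code and a rough complexity proxy."""
    loc = sum(1 for line in source.splitlines() if line.strip())
    tree = ast.parse(source)
    # Proxy: 1 + number of branching constructs in the module. This is a
    # simplification of cyclomatic complexity, used here for illustration.
    branches = sum(
        isinstance(node, (ast.If, ast.For, ast.While, ast.Try, ast.BoolOp))
        for node in ast.walk(tree)
    )
    return {"loc": loc, "complexity_proxy": 1 + branches}

example = """
def f(x):
    if x > 0:
        return x
    return -x
"""
print(simple_metrics(example))  # -> {'loc': 4, 'complexity_proxy': 2}
```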
Code reviews are popular in both industrial and open source projects. The benefits of code reviews are widely recognized and include better code quality and a lower likelihood of introducing bugs. However, since code review is a manual activity, it comes at the cost of developers' time spent reviewing their teammates' code. Our goal is to make a first step towards partially automating the code review process, thus possibly reducing the manual costs associated with it. We focus on both the contributor and the reviewer sides of the process by training two different deep learning architectures. The first one learns code changes performed by developers during real code review activities, thus providing the contributor with a revised version of her code that implements code transformations usually recommended during code review, before the code is even submitted for review. The second one automatically provides the reviewer of submitted code with the revised code implementing her comments expressed in natural language. The empirical evaluation of the two models shows that, on the contributor side, the trained model succeeds in replicating the code transformations applied during code reviews in up to 16% of cases. On the reviewer side, the model can correctly implement a comment provided in natural language in up to 31% of cases. While these results are encouraging, more research is needed to make these models usable by developers.
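As a hedged sketch of the data setup such models would need (the paper's actual preprocessing, tokenization, and architectures are not given in this abstract), the following Python snippet assembles training pairs for the two tasks: code-before to code-after for the contributor-side model, and (code, comment) to revised code for the reviewer-side model. All field and function names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ReviewRound:
    code_before: str       # code as submitted for review
    reviewer_comment: str  # natural-language review comment ("" if none)
    code_after: str        # code after the revision was applied

def contributor_pairs(rounds: List[ReviewRound]) -> List[Tuple[str, str]]:
    """Task 1: learn code_before -> code_after (pre-submission revision)."""
    return [(r.code_before, r.code_after) for r in rounds]

def reviewer_pairs(rounds: List[ReviewRound]):
    """Task 2: learn (code_before, comment) -> code_after."""
    return [
        ((r.code_before, r.reviewer_comment), r.code_after)
        for r in rounds if r.reviewer_comment
    ]

rounds = [ReviewRound("int x = foo();", "use a descriptive name",
                      "int result = foo();")]
print(contributor_pairs(rounds))
print(reviewer_pairs(rounds))
```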