
Communication channels in safety analysis: An industrial exploratory case study

Posted by Daniel Graziotin
Publication date: 2018
Research field: Informatics Engineering
Language: English





Context: Safety analysis is a predominant activity in developing safety-critical systems. It is a highly cooperative task among multiple functional departments due to increasingly sophisticated safety-critical systems and close-knit development processes. Communication occurs pervasively. Motivation: Effective communication channels among multiple functional departments influence the quality of safety analysis as well as the delivery of a safe product. However, the use of communication channels during safety analysis is sometimes arbitrary and poses challenges. Objective: Investigate the existing communication channels, their usage frequencies, their purposes and their challenges during safety analysis in industry. Method: A multiple case study of experts (survey: 39, interview: 21) in safety-critical companies, including software developers, quality engineers and functional safety managers. Direct observations and documentation review were also conducted. Results: Popular communication channels during safety analysis include formal meetings, project coordination tools, documentation and telephone. Email, personal discussion, training, internal communication software and boards are also in use. Training involving safety analysis happens 1-4 times per year, while use of the other communication channels mentioned above ranges from 1-4 times per day to 1-4 times per month. We summarise 28 purposes for these communication channels. Communication happens mostly for the purposes of clarifying safety requirements, fixing temporary problems, conflicts and obstacles, and sharing safety knowledge. The top challenges are reported. Conclusion: To use communication channels effectively and avoid challenges during safety analysis, a clear purpose of communication should be established at the beginning. Deriving countermeasures for the top 10 challenges is a potential next step.


Read also

Maik Betka, Stefan Wagner, 2021
Mutation testing is used to evaluate the effectiveness of test suites. In recent years, a promising variation called extreme mutation testing emerged that is computationally less expensive. It identifies methods whose functionality can be entirely removed without the test suite noticing it, despite having coverage. These methods are called pseudo-tested. In this paper, we compare the execution and analysis times for traditional and extreme mutation testing and discuss what they mean in practice. We look at how extreme mutation testing impacts current software development practices and discuss open challenges that need to be addressed to foster industry adoption. For that, we conducted an industrial case study consisting of running traditional and extreme mutation testing in a large software project from the semiconductor industry that is covered by a test suite of more than 11,000 unit tests. In addition to that, we did a qualitative analysis of 25 pseudo-tested methods and interviewed two experienced developers to see how they write unit tests and gathered opinions on how useful the findings of extreme mutation testing are. Our results include execution times, scores, numbers of executed tests and mutators, reasons why methods are pseudo-tested, and an interview summary. We conclude that the shorter execution and analysis times are well noticeable in practice and show that extreme mutation testing supplements writing unit tests in conjunction with code coverage tools. We propose that pseudo-tested code should be highlighted in code coverage reports and that extreme mutation testing should be performed when writing unit tests rather than in a decoupled session. Future research should investigate how to perform extreme mutation testing while writing unit tests such that the results are available fast enough but still meaningful.
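To make the notion of a pseudo-tested method concrete, here is a minimal JUnit sketch; the class, method and test names are illustrative assumptions, not taken from the study. The test executes the method, so coverage tools report it as covered, yet its assertion is too weak to fail if the method's body were removed entirely, which is exactly what extreme mutation testing detects.

```java
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

// Hypothetical example of a pseudo-tested method.
class ShoppingCart {
    private double total = 100.0;

    // Pseudo-tested: if this body is emptied, the test below still passes.
    void applyDiscount(double percent) {
        total = total * (1.0 - percent / 100.0);
    }

    double getTotal() {
        return total;
    }
}

public class ShoppingCartTest {
    // This test covers applyDiscount (full line coverage of the method)
    // but asserts nothing about its effect, so extreme mutation testing
    // would flag applyDiscount as pseudo-tested.
    @Test
    public void testApplyDiscount() {
        ShoppingCart cart = new ShoppingCart();
        cart.applyDiscount(10.0);
        assertNotNull(cart); // weak assertion: removing the method body goes unnoticed
    }
}
```

Asserting on `cart.getTotal()` instead would make the test sensitive to the mutation and the method would no longer be reported as pseudo-tested.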
Context: Visual GUI testing (VGT) is referred to as the latest generation of GUI-based testing. It is a tool-driven technique, which uses image recognition for interacting with and asserting the behavior of the system under test. Motivated by the industrial need of a large Turkish software and systems company providing solutions in the defense and IT sectors, an action-research project was recently initiated to implement VGT in several teams and projects in the company. Objective: To address the above needs, we planned and carried out an empirical investigation with the goal of assessing VGT using two tools (Sikuli and JAutomate). The purpose was to determine a suitable approach and tool for VGT of a given project (software product) in the company and to increase the know-how in the company's test teams. Method: Using an action-research case-study design, we investigated the use of VGT in the studied organization. Specifically, using the two selected VGT tools, we conducted a quantitative and a qualitative evaluation of VGT. Results: By assessing the list of Challenges, Problems and Limitations (CPL), proposed in previous work, in the context of our empirical study, we found that test-tool- and SUT-related CPLs were quite comparable to a previous empirical study, e.g., the synchronization between the SUT and the test tools was not always robust and there were failures in the test tools' image recognition features. When assessing the types of test maintenance activities, when executing the automated test cases on ne
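As a rough illustration of what such an image-recognition-driven test looks like, the sketch below uses the SikuliX Java API (the open-source successor of the Sikuli tool evaluated in the study). The scenario, the reference image file names, and the class name are assumptions for illustration only, not artifacts from the study.

```java
import org.sikuli.script.Screen;
import org.sikuli.script.FindFailed;

// Minimal sketch of a visual GUI test in the SikuliX style:
// the test drives the SUT purely through on-screen image matching
// and asserts behavior by waiting for an expected image to appear.
public class LoginVisualTest {
    public static void main(String[] args) {
        Screen screen = new Screen();
        try {
            // Interact with the SUT by matching reference images on screen
            // (the .png files are hypothetical captures of the GUI).
            screen.type("username_field.png", "test-user");
            screen.type("password_field.png", "secret");
            screen.click("login_button.png");

            // Assert the expected behavior: the dashboard header should
            // appear within 10 seconds, otherwise FindFailed is thrown.
            screen.wait("dashboard_header.png", 10);
            System.out.println("Visual GUI test passed");
        } catch (FindFailed e) {
            // Image recognition failures of this kind are among the
            // robustness problems (CPLs) the study reports.
            System.err.println("Visual GUI test failed: " + e.getMessage());
        }
    }
}
```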
Software Product Lines (SPLs) are families of related software products developed from a common set of artifacts. Most existing analysis tools can be applied to a single product at a time, but not to an entire SPL. Some tools have been redesigned/re-implemented to support the kind of variability exhibited in SPLs, but this usually takes a lot of effort, and is error-prone. Declarative analyses written in languages like Datalog have been collectively lifted to SPLs in prior work, which makes the process of applying an existing declarative analysis to a product line more straightforward. In this paper, we take an existing declarative analysis (behaviour alteration) written in the Grok declarative language, port it to Datalog, and apply it to a set of automotive software product lines from General Motors. We discuss the design of the analysis pipeline used in this process, present its scalability results, and provide a means to visualize the analysis results for a subset of products filtered by feature expression. We also reflect on some of the lessons learned throughout this project.
Discussions is a new feature of GitHub for asking questions or discussing topics outside of specific Issues or Pull Requests. Before being available to all projects in December 2020, it had been tested on selected open source software projects. To understand how developers use this novel feature, how they perceive it, and how it impacts the development processes, we conducted a mixed-methods study based on early adopters of GitHub discussions from January until July 2020. We found that: (1) errors, unexpected behavior, and code reviews are prevalent discussion categories; (2) there is a positive relationship between project member involvement and discussion frequency; (3) developers consider GitHub Discussions useful but face the problem of topic duplication between Discussions and Issues; (4) Discussions play a crucial role in advancing the development of projects; and (5) positive sentiment in Discussions is more frequent than in Stack Overflow posts. Our findings are a first step towards data-informed guidance for using GitHub Discussions, opening up avenues for future work on this novel communication channel.
In the domain of software engineering, our efforts as researchers to advise industry on which software practices might be applied most effectively are limited by our lack of evidence based information about the relationships between context and practice efficacy. In order to accumulate such evidence, a model for context is required. We are in the exploratory stage of evolving a model for context for situated software practices. In this paper, we overview the evolution of our proposed model. Our analysis has exposed a lack of clarity in the meanings of terms reported in the literature. Our base model dimensions are People, Place, Product and Process. Our contributions are a deepening of our understanding of how to scope contextual factors when considering software initiatives and the proposal of an initial theoretical construct for context. Study limitations relate to a possible subjectivity in the analysis and a restricted evaluation base. In the next stage in the research, we will collaborate with academics and practitioners to formally refine the model.