
Quality Control, Testing and Deployment Results in NIF ICCS

Published by: John P. Woodruff
Publication date: 2001
Research field: Informatics engineering
Paper language: English





The strategy used to develop the NIF Integrated Computer Control System (ICCS) calls for incremental cycles of construction and formal test to deliver a total of 1 million lines of code. Each incremental release takes four to six months to implement specific functionality and culminates when offline tests conducted in the ICCS Integration and Test Facility verify functional, performance, and interface requirements. Tests are then repeated on-line to confirm integrated operation in dedicated laser laboratories or ultimately in the NIF. Test incidents, along with other change requests, are recorded and tracked to closure by the software change control board (SCCB). Annual independent audits advise management on software process improvements. Extensive experience has been gained by integrating controls in the prototype laser preamplifier laboratory. The control system installed in the preamplifier lab contains five of the ten planned supervisory subsystems and seven of sixteen planned front-end processors (FEPs). Beam alignment, timing, diagnosis and laser pulse amplification up to 20 joules were tested through an automated series of shots. Other laboratories have provided integrated testing of six additional FEPs. Process measurements including earned-value, product size, and defect densities provide software project controls and generate confidence that the control system will be successfully deployed.
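For readers unfamiliar with the project controls mentioned above, the sketch below computes standard earned-value indices and a defect density. It is a minimal illustration only: all figures and function names are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of the earned-value and defect-density project
# controls mentioned in the abstract; all numbers are illustrative.

def earned_value_metrics(pv: float, ev: float, ac: float) -> dict:
    """Standard earned-value indices.

    pv: planned value (budgeted cost of work scheduled)
    ev: earned value (budgeted cost of work performed)
    ac: actual cost (actual cost of work performed)
    """
    return {
        "schedule_variance": ev - pv,
        "cost_variance": ev - ac,
        "spi": ev / pv,  # schedule performance index (>1 is ahead of plan)
        "cpi": ev / ac,  # cost performance index (>1 is under budget)
    }

def defect_density(defects: int, ksloc: float) -> float:
    """Defects per thousand source lines of code."""
    return defects / ksloc

# Illustrative numbers only (not from the paper):
print(earned_value_metrics(pv=1_000_000, ev=950_000, ac=900_000))
print(defect_density(defects=120, ksloc=250.0))
```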




Read also

We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerisation technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualisation at the operating-system level, which presents many advantages in comparison to the more traditional hardware virtualisation that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users. We report on our experiences from two projects -- a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context -- and include an account of how we solved problems through interaction with Docker's very active open-source development community, which is currently the key to the most effective use of this rapidly-changing technology.
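As a concrete illustration of Docker's container workflow (the paper does not prescribe specific tooling, so the docker-py SDK, image name, and command below are assumptions), here is a minimal sketch of running a containerised task:

```python
# Minimal sketch of running a containerised job with the docker-py SDK.
# Requires a local Docker daemon and `pip install docker`; the image and
# command below are hypothetical placeholders.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# Pull the image if absent, run it, and return its output on exit.
output = client.containers.run(
    image="python:3.11-slim",  # base image from Docker Hub
    command=["python", "-c", "print('hello from a container')"],
    remove=True,               # clean up the container afterwards
)
print(output.decode())
```

Because the container carries its own userland, the same invocation behaves identically on a developer laptop and a production host, which is the portability benefit the abstract describes.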
Continuous Deployment (CD) has emerged as a new practice in the software industry to continuously and automatically deploy software changes into production. Continuous Deployment Pipeline (CDP) supports CD practice by transferring the changes from the repository to production. Since most of the CDP components run in an environment that has several interfaces to the Internet, these components are vulnerable to various kinds of malicious attacks. This paper reports our work aimed at designing secure CDP by utilizing security tactics. We have demonstrated the effectiveness of five security tactics in designing a secure pipeline by conducting an experiment on two CDPs - one incorporates security tactics while the other does not. Both CDPs have been analyzed qualitatively and quantitatively. We used assurance cases with goal-structured notations for qualitative analysis. For quantitative analysis, we used penetration tools. Our findings indicate that the applied tactics improve the security of the major components (i.e., repository, continuous integration server, main server) of a CDP by controlling access to the components and establishing secure connections.
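A minimal sketch of the two tactics the findings emphasise, access control and secure connections, applied to a hypothetical pipeline component; the endpoint URL and environment variable below are placeholders, not from the paper:

```python
# Hypothetical sketch of two CDP security tactics: a verified TLS
# connection plus token-based access control. URL and token are placeholders.
import os
import ssl
import urllib.request

CI_SERVER = "https://ci.example.com/api/jobs"  # placeholder endpoint
TOKEN = os.environ["CI_API_TOKEN"]             # never hard-code secrets

# create_default_context() verifies the server certificate and hostname,
# implementing the "secure connection" tactic.
context = ssl.create_default_context()

request = urllib.request.Request(
    CI_SERVER,
    headers={"Authorization": f"Bearer {TOKEN}"},  # access-control tactic
)
with urllib.request.urlopen(request, context=context) as response:
    print(response.status, response.read().decode())
```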
Recently, many software organizations have been adopting Continuous Delivery and Continuous Deployment (CD) practices to develop and deliver quality software more frequently and reliably. Whilst an increasing amount of the literature covers different aspects of CD, little is known about the role of software architecture in CD and how an application should be (re-) architected to enable and support CD. We have conducted a mixed-methods empirical study that collected data through in-depth, semi-structured interviews with 21 industrial practitioners from 19 organizations, and a survey of 91 professional software practitioners. Based on a systematic and rigorous analysis of the gathered qualitative and quantitative data, we present a conceptual framework to support the process of (re-) architecting for CD. We provide evidence-based insights about practicing CD within monolithic systems and characterize the principle of small and independent deployment units as an alternative to the monoliths. Our framework supplements the architecting process in a CD context through introducing the quality attributes (e.g., resilience) that require more attention and demonstrating the strategies (e.g., prioritizing operations concerns) to design operations-friendly architectures. We discuss the key insights (e.g., monoliths and CD are not intrinsically oxymoronic) gained from our study and draw implications for research and practice.
This paper proposes configuration testing--evaluating configuration values (to be deployed) by exercising the code that uses the values and assessing the corresponding program behavior. We advocate that configuration values should be systematically tested like software code and that configuration testing should be a key reliability engineering practice for preventing misconfigurations from production deployment. The essential advantage of configuration testing is to put the configuration values (to be deployed) in the context of the target software program under test. In this way, the dynamic effects of configuration values and the impact of configuration changes can be observed during testing. Configuration testing overcomes the fundamental limitations of de facto approaches to combatting misconfigurations, namely configuration validation and software testing--the former is disconnected from code logic and semantics, while the latter can hardly cover all possible configuration values and their combinations. Our preliminary results show the effectiveness of configuration testing in capturing real-world misconfigurations. We present the principles of writing new configuration tests and the promises of retrofitting existing software tests to be configuration tests. We discuss new adequacy and quality metrics for configuration testing. We also explore regression testing techniques to enable incremental configuration testing during continuous integration and deployment in modern software systems.
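To make the idea concrete, here is a minimal configuration test in the spirit described, written as a pytest-style sketch; `load_config` and `ConnectionPool` are hypothetical stand-ins rather than names from the paper. The test loads the value to be deployed and exercises the code that consumes it, asserting on behaviour rather than merely validating the value's format:

```python
# Hypothetical configuration test: instead of only validating the value,
# exercise the code path that consumes it and assert on program behavior.
# `load_config` and `ConnectionPool` are illustrative stand-ins.

def load_config(path: str) -> dict:
    """Stand-in for the application's real configuration loader."""
    return {"max_connections": 8, "timeout_secs": 30}

class ConnectionPool:
    """Stand-in for the component that consumes the configuration."""
    def __init__(self, max_connections: int):
        if max_connections <= 0:
            raise ValueError("pool would be unusable")
        self.slots = [None] * max_connections

def test_max_connections_is_usable_in_context():
    cfg = load_config("deploy/prod.yaml")          # value to be deployed
    pool = ConnectionPool(cfg["max_connections"])  # exercise consuming code
    # Behavioral assertion: the configured pool actually provides capacity.
    assert len(pool.slots) == cfg["max_connections"] > 0

if __name__ == "__main__":
    test_max_connections_is_usable_in_context()
    print("configuration test passed")
```

A plain validator would only check that `max_connections` is a positive integer; the test above additionally observes what the program does with it, which is the distinction the paper draws.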
It is well known that the software process in place impacts the quality of the resulting product. However, the specific way in which this effect occurs is still mostly unknown and reported through anecdotes. To gather a better understanding of such a relationship, a very large survey has been conducted during the last year and has been completed by more than 100 software developers and engineers from 21 countries. We have used the percentage of satisfied customers estimated by the software developers and engineers as the main dependent variable. The results evidence some interesting patterns: the quality attribute with which customers are most satisfied appears to be functionality, architectural styles may not have a significant influence on quality, agile methodologies might result in happier customers, and larger companies and shorter projects seem to produce better products.
