
A use case driven approach for system level testing

Published by Zahid Hussain Qaisar
Publication date: 2012
Research field: Informatics Engineering
Paper language: English





Use case scenarios are created during the analysis phase to specify software system requirements, and they can also be used to create system level test cases. Deriving system tests from use cases has several benefits, including test design at early stages of the software development life cycle, which reduces the overall development cost of the system. Current approaches to use-case-based system testing involve functional details and do not include guards as passing criteria; they rely on artifacts such as class diagrams, which are difficult to produce at this very early stage. This motivates specification-based testing that does not involve functional details. In this paper, we propose a technique for system testing derived directly from the specification, without functional details. We apply initial and post conditions as guards at each level of the use cases, which enables the generation of formalized test cases and makes it possible to generate test cases for each flow of the system. Use case scenarios are used to generate system level test cases, while the system sequence diagram bridges the gap between the test objectives and the test cases derived from the specification of the system. Since a state chart derived from the combination of sequence diagrams can model the entire behavior of the system, the generated test cases can be executed against the state chart to capture the behavior of the system as its state changes. Together, these steps enable us to systematically refine the specification and achieve the goals of system testing at early development stages.
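To make the guard-based generation concrete, the following is a minimal sketch (in Python, not taken from the paper) of how a use case flow annotated with pre- and postconditions can be walked to produce one formalized test case per flow. All class and function names, and the toy "withdraw cash" flow, are illustrative assumptions rather than artifacts of the proposed technique.

```python
# Minimal sketch (not the paper's tooling): use case flows with pre/postcondition
# guards, and one generated test case per flow. Names are illustrative only.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Step:
    """One step of a use case flow, guarded by a pre- and a postcondition."""
    action: str
    precondition: Callable[[Dict], bool]
    postcondition: Callable[[Dict], bool]
    effect: Callable[[Dict], Dict]          # state transition performed by the step


@dataclass
class UseCaseFlow:
    name: str                               # e.g. "main flow", "alternate flow 1"
    steps: List[Step] = field(default_factory=list)


def generate_test_case(flow: UseCaseFlow, initial_state: Dict) -> List[str]:
    """Walk a flow, checking guards before and after every step.

    Returns the ordered list of actions (the test case) if every guard holds,
    otherwise raises an error pinpointing the violated guard.
    """
    state, actions = dict(initial_state), []
    for step in flow.steps:
        if not step.precondition(state):
            raise AssertionError(f"{flow.name}: precondition failed before '{step.action}'")
        state = step.effect(state)
        if not step.postcondition(state):
            raise AssertionError(f"{flow.name}: postcondition failed after '{step.action}'")
        actions.append(step.action)
    return actions


# Toy example: a simplified "withdraw cash" use case, main flow only.
withdraw = UseCaseFlow("main flow", [
    Step("insert card", lambda s: not s["card_in"],   lambda s: s["card_in"],
         lambda s: {**s, "card_in": True}),
    Step("withdraw 50", lambda s: s["balance"] >= 50, lambda s: s["balance"] >= 0,
         lambda s: {**s, "balance": s["balance"] - 50}),
])

print(generate_test_case(withdraw, {"card_in": False, "balance": 100}))
# ['insert card', 'withdraw 50']
```

An alternate flow would be modelled as a second `UseCaseFlow` with its own guards and would yield its own test case, mirroring the goal of generating test cases for each flow of the system.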


Read also

We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerisation technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualisation at the operating system level, which presents many advantages in comparison to the more traditional hardware virtualisation that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users. We report on our experiences from two projects -- a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context -- and include an account of how we solved problems through interaction with Docker's very active open source development community, which is currently the key to the most effective use of this rapidly-changing technology.
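As a rough illustration of the kind of workflow described above, the Docker SDK for Python can start a containerised tool from a script and capture its output; the image and command below are placeholders, not the astronomy services discussed in the abstract.

```python
# Hedged illustration (not from the paper): run a throwaway container with the
# Docker SDK for Python and capture its stdout.
import docker

client = docker.from_env()                       # connect to the local Docker daemon

logs = client.containers.run(
    "python:3.11-slim",                          # placeholder image
    ["python", "-c", "print('hello from a container')"],
    remove=True,                                 # remove the container when it exits
)
print(logs.decode().strip())
```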
Player experience (PX) evaluation has become a field of interest in the game industry. Several manual PX techniques have been introduced to help developers understand and evaluate the experience of players in computer games. However, automated testing of player experience still needs to be addressed. An automated player experience testing framework would allow designers to evaluate PX requirements in the early development stages without requiring human players to participate. In this paper, we propose an automated player experience testing approach built on a formal model of event-based emotions. In particular, we discuss an event-based transition system that formalizes relevant emotions using the Ortony, Clore, and Collins (OCC) theory of emotions. A working prototype of the model is integrated on top of Aplib, a tactical agent programming library, to create intelligent PX test agents capable of appraising emotions in a 3D game case study. The results are shown graphically, e.g. as heat maps. Emotion visualization by the test agents would ultimately help game designers create content that evokes a certain experience in players.
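As an informal illustration of an event-based emotion model (a standalone Python toy, not the Aplib integration described above), the sketch below lets game events raise or decay emotion intensities in an OCC-style appraisal loop; the event names, emotions, and update rules are invented for the example.

```python
# Toy event-based emotion appraisal: game events update emotion intensities
# tracked by a test agent; the resulting trace is the data a heat map would show.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class EmotionState:
    """Current intensity of each modelled emotion, clamped to [0, 1]."""
    intensities: Dict[str, float] = field(
        default_factory=lambda: {"joy": 0.0, "distress": 0.0, "hope": 0.0, "fear": 0.0}
    )

    def appraise(self, event: str) -> None:
        """Appraisal rules: each event type nudges the relevant emotions."""
        rules = {
            "goal_reached":   {"joy": +0.6, "hope": -0.2, "fear": -0.3},
            "took_damage":    {"distress": +0.5, "fear": +0.4},
            "saw_treasure":   {"hope": +0.5},
            "monster_nearby": {"fear": +0.6, "hope": -0.1},
        }
        for emotion, delta in rules.get(event, {}).items():
            self.intensities[emotion] = min(1.0, max(0.0, self.intensities[emotion] + delta))

    def decay(self, rate: float = 0.1) -> None:
        """Emotions fade a little between events."""
        for emotion in self.intensities:
            self.intensities[emotion] = max(0.0, self.intensities[emotion] - rate)


def trace_emotions(events: List[str]) -> List[Dict[str, float]]:
    """Replay a game-event trace, recording the emotion profile after each event."""
    state, profile = EmotionState(), []
    for event in events:
        state.decay()
        state.appraise(event)
        profile.append(dict(state.intensities))
    return profile


for snapshot in trace_emotions(["saw_treasure", "monster_nearby", "took_damage", "goal_reached"]):
    print(snapshot)
```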
Machine translation has wide applications in daily life. In mission-critical applications such as translating official documents, incorrect translation can have unpleasant or sometimes catastrophic consequences. This motivates recent research on testing methodologies for machine translation systems. Existing methodologies mostly rely on metamorphic relations designed at the textual level (e.g., Levenshtein distance) or the syntactic level (e.g., the distance between grammar structures) to determine the correctness of translation results. However, these metamorphic relations do not consider whether the original and translated sentences have the same meaning (i.e., semantic similarity). Therefore, in this paper, we propose SemMT, an automatic testing approach for machine translation systems based on semantic similarity checking. SemMT applies round-trip translation and measures the semantic similarity between the original and translated sentences. Our insight is that the semantics expressed by the logic and numeric constraints in sentences can be captured using regular expressions (or deterministic finite automata), for which efficient equivalence/similarity checking algorithms are available. Leveraging this insight, we propose three semantic similarity metrics and implement them in SemMT. The experimental results reveal that SemMT achieves higher effectiveness than state-of-the-art works, with increases of 21% and 23% in accuracy and F-score, respectively. We also explore potential improvements that can be achieved when proper combinations of metrics are adopted. Finally, we discuss a solution to locate the suspicious trip in round-trip translation, which may shed light on further exploration.
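The round-trip idea can be sketched as follows (an illustrative Python outline, not SemMT's implementation): translate a sentence to a target language and back, extract its numeric constraints with a regular expression, and flag the sentence when those constraints are not preserved. The injected `translate` callable, the Jaccard-style metric, and the threshold are assumptions made for the sketch.

```python
# Hedged sketch of round-trip translation testing with constraint comparison.
import re
from typing import Callable, Set


def numeric_constraints(sentence: str) -> Set[str]:
    """Extract number-like tokens as a crude stand-in for regex/DFA-captured
    numeric constraints."""
    return set(re.findall(r"\d+(?:\.\d+)?", sentence))


def similarity(a: str, b: str) -> float:
    """Jaccard similarity over extracted constraints (illustrative metric only)."""
    ca, cb = numeric_constraints(a), numeric_constraints(b)
    if not ca and not cb:
        return 1.0
    return len(ca & cb) / len(ca | cb)


def round_trip_test(sentence: str,
                    translate: Callable[[str, str, str], str],
                    threshold: float = 0.8) -> bool:
    """Return True when the round-tripped sentence still preserves the
    numeric constraints of the original; False flags it as suspicious."""
    forward = translate(sentence, "en", "zh")
    back = translate(forward, "zh", "en")
    return similarity(sentence, back) >= threshold


# Demo with an identity "translator"; a real test would plug in the MT system under test.
identity = lambda text, src, dst: text
print(round_trip_test("Refund 250 dollars within 30 days.", identity))  # True
```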
Software process improvement (SPI) is a means to an end, not an end in itself (e.g., a goal is to achieve shorter time to market, not just compliance with a process standard). Therefore, SPI initiatives ought to be streamlined to meet the desired values for an organization. Through a literature review, seven secondary studies aggregating maturity models and assessment frameworks were identified. Furthermore, we identified six proposals for building a new maturity model. We analyzed the existing maturity models for (a) their purpose, structure, and guidelines, and (b) the degree to which they explicitly consider values and benefits. Based on this analysis, and utilizing the guidelines from the proposals for building maturity models, we introduce an approach for developing a value-driven approach to SPI. The proposal leverages benefits-dependency networks. We argue that our approach enables the following key benefits: (a) as a value-driven approach, it streamlines value delivery and helps avoid unnecessary process interventions; (b) as a knowledge repository, it helps codify lessons learned, i.e., whether adopted practices lead to value realization; and (c) as an internal process maturity assessment tool, it tracks the progress of process realization, which is necessary to monitor progress towards the intended values.
Yaohui Chen, Peng Li, Jun Xu (2019)
Hybrid testing combines fuzz testing and concolic execution. It leverages fuzz testing to test easy-to-reach code regions and uses concolic execution to explore code blocks guarded by complex branch conditions. However, its code-coverage-centric design is inefficient in vulnerability detection. First, it blindly selects seeds for concolic execution and aims to explore new code continuously. However, as statistics show, a large portion of the explored code is often bug-free. Therefore, giving equal attention to every part of the code during hybrid testing is a non-optimal strategy; it slows down the detection of real vulnerabilities by over 43%. Second, classic hybrid testing quickly moves on after reaching a chunk of code, rather than examining the hidden defects inside. It may frequently miss subtle vulnerabilities even though it has already explored the vulnerable code paths. We propose SAVIOR, a new hybrid testing framework pioneering a bug-driven principle. Unlike existing hybrid testing tools, SAVIOR prioritizes the concolic execution of the seeds that are likely to uncover more vulnerabilities. Moreover, SAVIOR verifies all vulnerable program locations along the executing program path. By modeling faulty situations using SMT constraints, SAVIOR reasons about the feasibility of vulnerabilities and generates concrete test cases as proofs. Our evaluation shows that the bug-driven approach outperforms mainstream automated testing techniques, including state-of-the-art hybrid testing systems driven by code coverage. On average, SAVIOR detects vulnerabilities 43.4% faster than DRILLER and 44.3% faster than QSYM, leading to the discovery of 88 and 76 more unique bugs, respectively. According to the evaluation on 11 well-fuzzed benchmark programs, within the first 24 hours, SAVIOR triggers 481 UBSAN violations, among which 243 are real bugs.
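The bug-driven principle can be illustrated with a small prioritisation sketch (invented for this summary, not SAVIOR's actual scheduler): seeds whose execution paths touch more suspected-vulnerable locations, such as UBSAN-instrumented sites, are handed to concolic execution first.

```python
# Hedged sketch of bug-driven seed prioritisation. All names and the scoring
# rule are illustrative; a coverage-centric scheduler would treat seeds equally.
from dataclasses import dataclass
from typing import List, Set


@dataclass
class Seed:
    data: bytes          # the fuzzer-generated input
    path: Set[str]       # basic blocks covered when the target runs this input


def bug_potential(seed: Seed, vulnerable_sites: Set[str]) -> int:
    """Score a seed by the number of suspected-vulnerable blocks on its path."""
    return len(seed.path & vulnerable_sites)


def prioritise(seeds: List[Seed], vulnerable_sites: Set[str]) -> List[Seed]:
    """Order seeds for concolic execution, most bug-promising first."""
    return sorted(seeds, key=lambda s: bug_potential(s, vulnerable_sites), reverse=True)


# Toy example: seed B reaches two suspected-vulnerable blocks, so it is scheduled first.
vulnerable = {"bb3", "bb7"}
corpus = [Seed(b"A", {"bb1", "bb2"}), Seed(b"B", {"bb3", "bb7"}), Seed(b"C", {"bb3"})]
print([s.data for s in prioritise(corpus, vulnerable)])   # [b'B', b'C', b'A']
```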