SBVR vs OCL: A Comparative Analysis of Standards

Published by: Dr. Imran Sarwar Bajwa
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





In software modelling, designers produce UML visual models together with software constraints. Similarly, in business modelling, designers model business processes using business constraints (business rules). Constraints are key components in the skeleton of business or software models. A designer writes constraints to semantically complement business models or UML models, and finally implements those constraints in business processes or source code. Business constraints/rules can be written in SBVR (Semantics of Business Vocabulary and Rules), while OCL (Object Constraint Language) is the best-known medium for writing software constraints. SBVR and OCL are two significant OMG standards. The two differ fundamentally in purpose: SBVR is typically used in business domains, whereas OCL is employed to complement software models. However, we have identified a few similarities between the standards that are interesting to study. In this paper, we perform a comparative analysis of both standards, as we are looking for a mechanism for the automatic transformation of SBVR to OCL. The major emphasis of the study is to highlight the principal features of SBVR and OCL: their similarities, their differences, and the key parameters on which the two standards can work together.
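
To make the kind of SBVR-to-OCL transformation the abstract describes concrete, here is a minimal Python sketch of a pattern-based translator for a single cardinality rule. The structured-English pattern, the model names (Order, items), and the mapping rule are illustrative assumptions for this one case, not the transformation rules defined in the paper.

    import re

    # One fixed structured-English SBVR pattern:
    # "It is obligatory that each <class> has at most <n> <role>".
    SBVR_PATTERN = re.compile(
        r"It is obligatory that each (\w+) has at most (\d+) (\w+)"
    )

    def sbvr_to_ocl(rule: str) -> str:
        """Translate one SBVR cardinality rule into an OCL invariant string."""
        match = SBVR_PATTERN.match(rule)
        if match is None:
            raise ValueError(f"unsupported SBVR pattern: {rule!r}")
        cls, bound, role = match.groups()
        # The SBVR obligation on a cardinality becomes an OCL size constraint.
        return f"context {cls.capitalize()} inv: self.{role}->size() <= {bound}"

    print(sbvr_to_ocl("It is obligatory that each order has at most 5 items"))
    # -> context Order inv: self.items->size() <= 5

A real transformation would need a full SBVR parser and a UML model to resolve class and association names against; the sketch only shows why the two notations line up well enough for rule-based mapping.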




Read also

Context: Given the acknowledged need to understand the people processes enacted during software development, software repositories and mailing lists have become a focus for many studies. However, researchers have tended to use mostly mathematical and frequency-based techniques to examine the software artifacts contained within them. Objective: There is growing recognition that these approaches uncover only a partial picture of what happens during software projects, and deeper contextual approaches may provide further understanding of the intricate nature of software team dynamics. We demonstrate the relevance and utility of such approaches in this study. Method: We use psycholinguistics and directed content analysis (CA) to study the way project tasks drive teams' attitudes and knowledge sharing. We compare the outcomes of these two approaches and offer methodological advice for researchers using similar forms of repository data. Results: Our analysis reveals significant differences in the way teams work, given their portfolio of tasks and the distribution of roles. Conclusion: We overcome the limitations associated with employing purely quantitative approaches, while avoiding the time-intensive and potentially invasive nature of the field work required in full case studies.
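
As a rough illustration of what a directed content analysis pass over repository messages involves, the Python sketch below codes a message against seeded categories. The categories and indicator word lists are invented for the example, not the coding scheme used in the study.

    from collections import Counter

    # Hypothetical coding scheme: each category is seeded with indicator
    # words. Categories and word lists are illustrative assumptions.
    CODING_SCHEME = {
        "knowledge_sharing": {"explain", "document", "share", "wiki"},
        "task_coordination": {"assign", "deadline", "merge", "review"},
        "affect": {"thanks", "great", "sorry", "frustrating"},
    }

    def code_message(message: str) -> Counter:
        """Count how many words of a message fall into each category."""
        words = [w.strip(".,!?").lower() for w in message.split()]
        counts = Counter()
        for category, indicators in CODING_SCHEME.items():
            counts[category] = sum(1 for w in words if w in indicators)
        return counts

    print(code_message("Thanks for the review, I will document the merge steps."))
    # Counter({'task_coordination': 2, 'knowledge_sharing': 1, 'affect': 1})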
Background: Meeting the growing industry demand for Data Science requires cross-disciplinary teams that can translate machine learning research into production-ready code. Software engineering teams value adherence to coding standards as an indication of code readability, maintainability, and developer expertise. However, there are no large-scale empirical studies of coding standards focused specifically on Data Science projects. Aims: This study investigates the extent to which Data Science projects follow code standards. In particular, which standards are followed, which are ignored, and how does this differ from traditional software projects? Method: We compare a corpus of 1048 Open-Source Data Science projects to a reference group of 1099 non-Data Science projects with a similar level of quality and maturity. Results: Data Science projects suffer from a significantly higher rate of functions that use an excessive number of parameters and local variables. Data Science projects also follow different variable naming conventions to non-Data Science projects. Conclusions: The differences indicate that Data Science codebases are distinct from traditional software codebases and do not follow traditional software engineering conventions. Our conjecture is that this may be because traditional software engineering conventions are inappropriate in the context of Data Science projects.
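
For a flavour of the kind of check behind the reported parameter-count finding, here is a minimal Python sketch that flags functions declaring too many parameters. The threshold of 5 mirrors common linter defaults and is an assumption, not the study's configuration or tooling.

    import ast

    # Flag functions that declare more than MAX_ARGS parameters.
    MAX_ARGS = 5

    def functions_with_too_many_args(source: str) -> list[str]:
        """Return names of functions defined with more than MAX_ARGS parameters."""
        offenders = []
        for node in ast.walk(ast.parse(source)):
            if isinstance(node, ast.FunctionDef):
                n_args = len(node.args.args) + len(node.args.kwonlyargs)
                if n_args > MAX_ARGS:
                    offenders.append(node.name)
        return offenders

    code = "def train(data, lr, epochs, batch_size, seed, verbose): pass"
    print(functions_with_too_many_args(code))  # ['train']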
Empirical Standards are natural-language models of a scientific community's expectations for a specific kind of study (e.g. a questionnaire survey). The ACM SIGSOFT Paper and Peer Review Quality Initiative generated empirical standards for research methods commonly used in software engineering. These living documents, which should be continuously revised to reflect evolving consensus around research best practices, will improve research quality and make peer review more effective, reliable, transparent and fair.
B. Kamala, 2019
Process mining is an emerging research trend of the last decade which focuses on analyzing processes using event logs and data. The growing integration of information systems for the operation of business processes provides the basis for innovative data analysis approaches. Process mining has a strong relationship with data mining, so it forms a bond between the business intelligence approach and business process management. It focuses on end-to-end processes and is possible because of the growing availability of event data and of new process discovery and conformance checking techniques. Process mining aims to discover, monitor and improve real processes by extracting knowledge from the event logs readily available in today's information systems. The discovered process models can be used for a variety of analysis purposes. Many companies have adopted Process-Aware Information Systems (PAIS) to support their business processes in some form. These systems typically log events related to the actual business process executions. Proper analysis of PAIS execution logs can yield important knowledge and help organizations improve the quality of their services. This paper reviews and compares various process mining algorithms based on their input parameters, the techniques used, and the output they generate.
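
To illustrate the first step shared by many process discovery algorithms, the Python sketch below builds a directly-follows graph from an event log. The toy log is invented for illustration; real logs come from information systems (e.g. XES files) and carry timestamps and further attributes.

    from collections import defaultdict

    # Each trace is the ordered list of activities observed for one case.
    event_log = [
        ["register", "check", "approve", "notify"],
        ["register", "check", "reject", "notify"],
        ["register", "check", "approve", "notify"],
    ]

    def directly_follows(log):
        """Count how often activity a is immediately followed by activity b."""
        dfg = defaultdict(int)
        for trace in log:
            for a, b in zip(trace, trace[1:]):
                dfg[(a, b)] += 1
        return dict(dfg)

    for (a, b), n in directly_follows(event_log).items():
        print(f"{a} -> {b}: {n}")
    # register -> check: 3, check -> approve: 2, approve -> notify: 2, ...

Discovery algorithms differ mainly in how they turn such relations into a process model (e.g. a Petri net) and in how they handle noise and concurrency, which is what the surveyed comparisons are about.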
In the last decade, companies have adopted DevOps as a fast path to delivering software products according to customer expectations, with well-aligned teams and in continuous cycles. As a basic practice, DevOps relies on pipelines that resemble factory swimlanes. The more automation in the pipeline, the shorter the lead time is supposed to be. However, applying DevOps is challenging, particularly for industrial control systems (ICS) that support critical infrastructures and must comply with rigorous requirements from security regulations and standards. Current research on security-compliant DevOps presents open gaps for this particular domain and, in general, for the systematic application of security standards. In this paper, we present a systematic approach to integrate standard-based security activities into DevOps pipelines and highlight their automation potential. Our intention is to share our experiences and help practitioners overcome the trade-off between adding security activities to the development process and keeping a short lead time. We conducted an evaluation of our approach at a large industrial company, considering the IEC 62443-4-1 security standard that regulates ICS. The results strengthen our confidence in the usefulness of our approach and artefacts, and in their ability to support practitioners in achieving security compliance while preserving agility, including short lead times.
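
As a toy sketch of the general idea of annotating pipeline stages with standard-based security activities and their automation potential, consider the Python snippet below. The stage names, activities, and flags are invented placeholders, not the approach or artefacts evaluated in the paper.

    # Each pipeline stage carries security activities tagged with whether
    # they can be automated or remain a manual gate (illustrative only).
    PIPELINE = [
        ("build",  [("static code analysis", True)]),
        ("test",   [("security requirements testing", True),
                    ("threat model review", False)]),
        ("deploy", [("hardening verification", True)]),
    ]

    for stage, activities in PIPELINE:
        for activity, automatable in activities:
            marker = "automatable" if automatable else "manual gate"
            print(f"{stage}: {activity} ({marker})")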