
BAGH -- Comparative study

Publication date: 2019
Language: English
Authors: B. Kamala





Process mining is a research trend that has emerged over the last decade, focusing on analyzing processes using event log data. The rising integration of information systems into the operation of business processes provides the basis for innovative data analysis approaches. Process mining is closely related to data mining, and it thereby bridges business intelligence approaches and business process management. It focuses on end-to-end processes and is made possible by the growing availability of event data and by new process discovery and conformance checking techniques. Process mining aims to discover, monitor and improve real processes by extracting knowledge from the event logs readily available in today's information systems. The discovered process models can be used for a variety of analysis purposes. Many companies have adopted Process-Aware Information Systems (PAIS) to support their business processes in some form. These systems typically log events related to the actual execution of business processes. Proper analysis of PAIS execution logs can yield important knowledge and help organizations improve the quality of their services. This paper reviews and compares various process mining algorithms based on their input parameters, the techniques they use and the output they generate.
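To make the survey's subject concrete, below is a minimal illustrative sketch (in Python) of a building block common to classic process discovery algorithms (the Alpha algorithm, for instance, is built on it): counting the directly-follows relation in an event log. The event log here is a made-up example, not data from the paper.

# Sketch of one process-discovery building block: deriving a
# directly-follows graph from an event log. The log below is a
# hypothetical example, not data from the paper.
from collections import defaultdict

# Each trace is the ordered list of activities for one case (process instance).
event_log = [
    ["register", "check", "approve", "notify"],
    ["register", "check", "reject", "notify"],
    ["register", "check", "approve", "notify"],
]

def directly_follows(log):
    """Count how often activity a is immediately followed by activity b."""
    counts = defaultdict(int)
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            counts[(a, b)] += 1
    return counts

for (a, b), n in sorted(directly_follows(event_log).items()):
    print(f"{a} -> {b}: {n}")

Discovery algorithms then turn such relations into a process model; conformance checking compares that model back against the observed traces.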




Related research

The simulation of tactile sensation using haptic devices is increasingly investigated in conjunction with simulation and training. In this paper we explore the most popular haptic frameworks and APIs. We provide a comprehensive review and comparison of their features and capabilities, from the perspective of developing a haptic simulator for medical training purposes. To compare the studied frameworks and APIs, we identified and applied a set of 11 criteria and obtained a classification of the platforms from the perspective of our project. According to this classification, we used the best-ranked platform to develop a visual-haptic prototype for liver diagnostics.
Software engineers spend a substantial amount of time using Web search to accomplish software engineering tasks. Such search tasks include finding code snippets, finding API documentation, seeking help with debugging, etc. While debugging a bug or crash, one of the common practices of software engineers is to search the internet for information about the associated error or exception traces. In this paper, we analyze query logs from a leading commercial general-purpose search engine (GPSE) such as Google, Yahoo! or Bing to carry out a large-scale study of software exceptions. To the best of our knowledge, this is the first large-scale study to analyze how Web search is used to find information about exceptions. We analyzed about 1 million exception-related search queries from a random sample of 5 billion web search queries. To extract exceptions from unstructured query text, we built a novel, high-performance machine learning model with an F1-score of 0.82. Using this model, we extracted exceptions from raw queries and performed popularity, effort, success, query-characteristic and web-domain analyses. We also performed programming-language-specific analysis to give a better view of exception search behavior. These techniques can help improve existing methods, documentation and tools for exception analysis and prediction. Further, similar techniques can be applied to APIs, frameworks, etc.
Software vulnerabilities are usually caused by design flaws or implementation errors, which can be exploited to damage the security of a system. At present, the most commonly used method for detecting software vulnerabilities is static analysis. Most of the related techniques work based on rules or code similarity (at the source code level) and rely on manually defined vulnerability features. However, such rules and vulnerability features are difficult to define and design accurately, so static analysis faces many challenges in practical applications. To alleviate this problem, some researchers have proposed using neural networks, with their capacity for automatic feature extraction, to make detection more intelligent. However, there are many types of neural networks, and different data preprocessing methods have a significant impact on model performance. Choosing a proper neural network and data preprocessing method for a given problem is a great challenge for engineers and researchers. To address this, we conducted extensive experiments testing the performance of the two most typical neural networks (i.e., Bi-LSTM and RVFL) with the two most classical data preprocessing methods (i.e., the vector representation and the program symbolization methods) on software vulnerability detection problems, and obtained a series of interesting conclusions that can provide valuable guidelines for researchers and engineers. Specifically, we found that 1) RVFL always trains faster than Bi-LSTM, but Bi-LSTM achieves higher prediction accuracy; 2) using doc2vec for vector representation gives the model faster training and better generalization than using word2vec; and 3) multi-level symbolization helps improve the precision of neural network models.
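As a concrete illustration of the vector-representation step compared in that study, here is a minimal sketch using the gensim library's Doc2Vec. The tokenized, symbolized code fragments are hypothetical placeholders, and the hyperparameters are arbitrary; this shows only the general preprocessing shape, not the study's setup.

# Sketch of doc2vec vector representation for vulnerability detection.
# The "code gadgets" below are hypothetical program slices after
# symbolization (variable/function names abstracted to VARn/FUNn).
# Assumes the gensim library is installed.
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

gadgets = [
    ["VAR1", "=", "strcpy", "(", "VAR2", ",", "VAR3", ")"],
    ["FUN1", "(", "VAR1", ",", "sizeof", "(", "VAR1", ")", ")"],
]
docs = [TaggedDocument(words=g, tags=[i]) for i, g in enumerate(gadgets)]

model = Doc2Vec(docs, vector_size=32, min_count=1, epochs=40)

# Each gadget becomes one fixed-length vector for a downstream classifier
# such as a Bi-LSTM or RVFL network.
vec = model.infer_vector(gadgets[0])
print(vec.shape)  # (32,)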
Context: Given the acknowledged need to understand the people processes enacted during software development, software repositories and mailing lists have become a focus for many studies. However, researchers have tended to use mostly mathematical and frequency-based techniques to examine the software artifacts contained within them. Objective: There is growing recognition that these approaches uncover only a partial picture of what happens during software projects, and deeper contextual approaches may provide further understanding of the intricate nature of software team dynamics. We demonstrate the relevance and utility of such approaches in this study. Method: We use psycholinguistics and directed content analysis (CA) to study the way project tasks drive teams' attitudes and knowledge sharing. We compare the outcomes of these two approaches and offer methodological advice for researchers using similar forms of repository data. Results: Our analysis reveals significant differences in the way teams work, given their portfolio of tasks and the distribution of roles. Conclusion: We overcome the limitations associated with employing purely quantitative approaches, while avoiding the time-intensive and potentially invasive nature of the fieldwork required in full case studies.
Being lightweight and cost-effective, IR-based approaches to bug localization have shown promise in finding software bugs. However, the accuracy of these approaches depends heavily on the bug reports they use. A significant number of bug reports contain only plain natural language text. According to existing studies, IR-based approaches cannot perform well when they use these bug reports as search queries. On the other hand, recent evidence suggests that even these natural-language-only reports contain enough good keywords to localize the bugs successfully. These findings suggest that natural-language-only bug reports might be a sufficient source of good query keywords, but they also cast serious doubt on the query selection practices in IR-based bug localization. In this article, we attempt to clear the air by conducting an in-depth empirical study that critically examines the state-of-the-art query selection practices in IR-based bug localization. In particular, we use a dataset of 2,320 bug reports, employ ten existing approaches from the literature, exploit a Genetic Algorithm-based approach to construct optimal or near-optimal search queries from these bug reports, and then answer three research questions. We confirm that the state-of-the-art query construction approaches are indeed not sufficient for constructing appropriate queries (for bug localization) from certain natural-language-only bug reports, even though such reports contain the keywords for such queries. We also demonstrate that optimal and non-optimal queries chosen from bug report texts differ significantly in several keyword characteristics, which leads us to actionable insights. Furthermore, we demonstrate a 27%--34% improvement in the performance of non-optimal queries through the application of these actionable insights.
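To illustrate the general shape of such Genetic Algorithm-based query construction, here is a minimal sketch: candidate queries are bitmasks over a report's keywords, evolved by selection, crossover and mutation. The keyword list and the score() function are hypothetical stand-ins; in the study, fitness is derived from actual retrieval results (how highly the buggy file ranks), not from a fixed keyword set.

# Illustrative GA for selecting a keyword subset as a search query.
import random

keywords = ["null", "pointer", "crash", "render", "toolbar", "click"]

def score(query):
    # Hypothetical stand-in fitness: in practice, run the query against
    # the code base and reward queries that rank the buggy file highly.
    return len(set(query) & {"null", "pointer", "crash"}) - 0.1 * len(query)

def fitness(mask):
    return score([k for k, bit in zip(keywords, mask) if bit])

def evolve(pop_size=20, generations=50, mutation_rate=0.1):
    pop = [[random.randint(0, 1) for _ in keywords] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # Tournament selection over three random candidates.
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, len(keywords))  # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation with small probability per position.
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            nxt.append(child)
        pop = nxt
    best = max(pop, key=fitness)
    return [k for k, bit in zip(keywords, best) if bit]

print(evolve())  # e.g., ['null', 'pointer', 'crash']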