
OrgMining 2.0: A Novel Framework for Organizational Model Mining from Event Logs

Added by Jing Yang
Publication date: 2020
Language: English





Providing appropriate structures around human resources can streamline operations and thus facilitate the competitiveness of an organization. To achieve this goal, modern organizations need to acquire an accurate and timely understanding of human resource grouping while faced with an ever-changing environment. Process mining offers a promising way to address this need by utilizing event log data stored in information systems. By extracting knowledge about the actual behavior of resources participating in business processes from event logs, organizational models can be constructed, which facilitate the analysis of the de facto grouping of human resources relevant to process execution. Nevertheless, open research gaps remain to be addressed when applying state-of-the-art process mining techniques to analyze resource grouping. For one, the discovery of organizational models has only limited connections with the context of process execution. For another, a rigorous solution that evaluates organizational models against event log data is yet to be proposed. In this paper, we aim to tackle these research challenges by developing a novel framework built upon a richer definition of organizational models coupling resource grouping with process execution knowledge. By introducing notions of conformance checking for organizational models, the framework allows effective evaluation of organizational models, and therefore provides a foundation for analyzing and improving resource grouping based on event logs. We demonstrate the feasibility of this framework by proposing an approach for organizational model discovery built on it, and we conduct experiments on real-life event logs to discover and evaluate organizational models.
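To make the general idea concrete, the following minimal Python sketch discovers resource groups from a toy event log and evaluates a candidate organizational model against that log in the spirit of conformance checking. It is not the authors' algorithm: the toy log, the profile-based grouping, the candidate model, and the fitness measure are all illustrative assumptions.

```python
from collections import defaultdict

# Toy event log: (case id, activity, resource) triples.
log = [
    ("c1", "register", "Ann"),  ("c1", "check", "Bob"),   ("c1", "pay", "Eve"),
    ("c2", "register", "Ann"),  ("c2", "check", "Carol"), ("c2", "pay", "Eve"),
    ("c3", "register", "Dave"), ("c3", "check", "Eve"),   ("c3", "pay", "Eve"),
]

# Discovery: profile each resource by the activities it executed and group
# resources with identical profiles (a naive stand-in for the clustering
# used in organizational model discovery).
profiles = defaultdict(set)
for _, activity, resource in log:
    profiles[resource].add(activity)

groups = defaultdict(set)
for resource, activities in profiles.items():
    groups[frozenset(activities)].add(resource)
print("discovered groups:", dict(groups))

# Conformance: evaluate a candidate organizational model (e.g. the documented
# org chart) against the log.  Fitness = share of events whose resource
# belongs to a group that the model links to the event's activity.
candidate_model = {
    frozenset({"Ann", "Dave"}): {"register"},
    frozenset({"Bob", "Carol"}): {"check"},
    frozenset({"Eve"}): {"pay"},   # Eve's ad-hoc "check" event will not fit
}

def fitness(model, log):
    fitting = sum(
        1 for _, activity, resource in log
        if any(resource in members and activity in linked
               for members, linked in model.items())
    )
    return fitting / len(log)

print("fitness:", round(fitness(candidate_model, log), 2))   # -> 0.89
```

A fitness below 1 points to events executed by resources outside the group the model associates with that activity, which is the kind of deviation the proposed conformance notions are meant to surface.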




related research

Interactive tools make data analysis more efficient and more accessible to end-users by hiding the underlying query complexity and exposing interactive widgets for the parts of the query that matter to the analysis. However, creating custom-tailored (i.e., precise) interfaces is very costly, and automated approaches are desirable. We propose a syntactic approach that uses queries from an analysis to generate a tailored interface. We model interface widgets as functions I(q) -> q that modify the current analysis query q, and interfaces as the set of queries that their widgets can express. Our system, Precision Interfaces, analyzes structural changes between input queries from an analysis, and generates an output interface with widgets to express those changes. Our experiments on the Sloan Digital Sky Survey query log suggest that Precision Interfaces can generate useful interfaces for simple unanticipated tasks, and our optimizations can generate interfaces from logs of up to 10,000 queries in <10s.
Interactive tools make data analysis both more efficient and more accessible to a broad population. Simple interfaces such as Google Finance as well as complex visual exploration interfaces such as Tableau are effective because they are tailored to the desired user tasks. Yet, designing interactive interfaces requires technical expertise and domain knowledge. Experts are scarce and expensive, and therefore it is currently infeasible to provide tailored (or precise) interfaces for every user and every task. We envision a data-driven approach to generate tailored interactive interfaces. We observe that interactive interfaces are designed to express sets of programs; thus, samples of programs, which are increasingly collected by data systems, may help us build interactive interfaces. Based on this idea, Precision Interfaces is a language-agnostic system that examines an input query log, identifies how the queries structurally change, and generates interactive web interfaces to express these changes. The focus of this paper is on applying this idea to logs of structured queries. Our experiments show that Precision Interfaces can support multiple query languages (SQL and SPARQL), derive Tableau's salient interaction components from OLAP queries, analyze <75k queries in <12 minutes, and generate interaction designs that improve upon existing interfaces and are comparable to human-crafted interfaces.
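As a rough illustration of the widget-as-query-transformation idea, the sketch below diffs consecutive queries at the token level and maps the changing literal to an interactive widget. It is not the Precision Interfaces system: the toy query log, the token-level diff, and the slider/dropdown heuristic are all assumptions made for the example.

```python
import re
from difflib import SequenceMatcher

# A small query log where only one literal varies across queries.
query_log = [
    "SELECT name FROM stars WHERE magnitude < 10",
    "SELECT name FROM stars WHERE magnitude < 12",
    "SELECT name FROM stars WHERE magnitude < 15",
]

def tokens(q):
    return re.findall(r"\w+|[^\w\s]", q)

# Diff consecutive queries and collect the values seen at each changing slot.
changes = {}
for prev, curr in zip(query_log, query_log[1:]):
    a, b = tokens(prev), tokens(curr)
    for op, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if op == "replace" and i2 - i1 == 1 and j2 - j1 == 1:
            changes.setdefault(i1, set()).update({a[i1], b[j1]})

# Map each changing slot to a widget: slider for numeric values, dropdown otherwise.
template = tokens(query_log[0])
widgets = []
for pos, values in changes.items():
    kind = "slider" if all(v.isdigit() for v in values) else "dropdown"
    widgets.append({"widget": kind,
                    "binds_to_token": pos,
                    "context": " ".join(template[max(0, pos - 3):pos + 1]),
                    "values": sorted(values)})

print(widgets)
# -> [{'widget': 'slider', 'binds_to_token': 7,
#      'context': 'WHERE magnitude < 10', 'values': ['10', '12', '15']}]
```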
The insights revealed by process mining rely heavily on the quality of event logs. Activities extracted from healthcare information systems, where records are often free text, may carry inconsistent labels. Such inconsistency leads to redundant activity labels: labels that differ in syntax but share the same behaviour. Identifying these labels through data-driven process discovery is difficult and relies heavily on resource-intensive human review. Existing work achieves low accuracy when redundant activity labels occur with low frequency or when event logs contain numerical data values as attributes, yet both phenomena are commonly observed in healthcare information systems. In this paper, we propose an approach to detect redundant activity labels using control-flow relations and numerical data values from event logs. Natural Language Processing is also integrated into our method to assess semantic similarity between labels, providing users with additional insights. We have evaluated our approach on synthetic logs generated from the real-life Sepsis log and in a case study using the MIMIC-III data set. The results demonstrate that our approach can successfully detect redundant activity labels. This approach can add value to the preprocessing step, generating more representative event logs for process mining tasks in the healthcare domain.
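A minimal sketch of the underlying idea, on a toy log and with simple stand-ins (directly-follows context overlap for the control-flow signal and a character-level string ratio in place of the NLP-based semantic similarity; the thresholds are arbitrary), might look as follows.

```python
from collections import defaultdict
from difflib import SequenceMatcher
from itertools import combinations

# Toy traces: "Blood test" and "Blood sample test" behave identically.
traces = [
    ["Admission", "Blood test", "Diagnosis", "Discharge"],
    ["Admission", "Blood sample test", "Diagnosis", "Discharge"],
    ["Admission", "Blood test", "Diagnosis", "Surgery", "Discharge"],
]

# Control-flow context: labels seen directly before and after each label.
before, after = defaultdict(set), defaultdict(set)
for trace in traces:
    for prev, curr in zip(trace, trace[1:]):
        after[prev].add(curr)
        before[curr].add(prev)

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 1.0

def context_similarity(x, y):
    return 0.5 * (jaccard(before[x], before[y]) + jaccard(after[x], after[y]))

def label_similarity(x, y):
    # Cheap textual stand-in for the NLP-based semantic similarity.
    return SequenceMatcher(None, x.lower(), y.lower()).ratio()

labels = sorted({a for t in traces for a in t})
candidates = [
    (x, y) for x, y in combinations(labels, 2)
    if context_similarity(x, y) >= 0.8 and label_similarity(x, y) >= 0.6
]
print(candidates)   # -> [('Blood sample test', 'Blood test')]
```

Flagged pairs would still go to a human reviewer; the point is to shrink the review set rather than to merge labels automatically.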
Sudeepa Roy, Babak Salimi (2017)
The study of causality or causal inference (how much a given treatment causally affects a given outcome in a population) goes well beyond correlation or association analysis of variables, and is critical to making sound data-driven decisions and policies in a multitude of applications. The gold standard in causal inference is performing controlled experiments, which often is not possible due to logistical or ethical reasons. As an alternative, inferring causality on observational data based on the Neyman-Rubin potential outcome model has been extensively used in statistics, economics, and social sciences over several decades. In this paper, we present a formal framework for sound causal analysis on observational datasets that are given as multiple relations and where the population under study is obtained by joining these base relations. We study a crucial condition for inferring causality from observational data, called the strong ignorability assumption (the treatment and outcome variables should be independent in the joined relation given the observed covariates), using known conditional independences that hold in the base relations. We also discuss how the structure of the conditional independences in base relations given as graphical models helps infer new conditional independences in the joined relation. The proposed framework combines concepts from databases, statistics, and graphical models, and aims to initiate new research directions spanning these fields to facilitate powerful data-driven decisions in today's big data world.
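As a toy illustration of analysis over a joined relation, the sketch below computes a covariate-adjusted treatment-effect estimate on the join of two base relations. It is not the paper's framework: the tables, the covariate, and the assumption that strong ignorability holds given that covariate are all made up for the example.

```python
import pandas as pd

# Two base relations; the population under study is their join.
patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "hospital":   ["A", "A", "A", "B", "B", "B"],
    "treated":    [1, 0, 1, 0, 1, 0],
    "recovered":  [1, 0, 1, 1, 1, 0],
})
hospitals = pd.DataFrame({
    "hospital": ["A", "B"],
    "teaching": [1, 0],   # observed covariate contributed by the second relation
})
joined = patients.merge(hospitals, on="hospital")

# Assuming strong ignorability given `teaching`, stratify on the covariate and
# take a weighted average of the per-stratum treated/untreated differences.
ate = 0.0
for _, stratum in joined.groupby("teaching"):
    diff = (stratum.loc[stratum.treated == 1, "recovered"].mean()
            - stratum.loc[stratum.treated == 0, "recovered"].mean())
    ate += diff * len(stratum) / len(joined)
print("adjusted ATE:", ate)   # -> 0.75
```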
The applicability of process mining techniques hinges on the availability of event logs capturing the execution of a business process. In some use cases, particularly those involving customer-facing processes, these event logs may contain private information. Data protection regulations restrict the use of such event logs for analysis purposes. One way of circumventing these restrictions is to anonymize the event log to the extent that no individual can be singled out using the anonymized log. This paper addresses the problem of anonymizing an event log in order to guarantee that, upon disclosure of the anonymized log, the probability that an attacker may single out any individual represented in the original log does not increase by more than a threshold. The paper proposes a differentially private disclosure mechanism, which oversamples the cases in the log and adds noise to the timestamps to the extent required to achieve the above privacy guarantee. The paper reports on an empirical evaluation of the proposed approach using 14 real-life event logs in terms of data utility loss and computational efficiency.
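The sketch below illustrates the general shape of such a mechanism: it duplicates cases and perturbs event timestamps with Laplace noise. It is not the paper's calibrated algorithm; the oversampling factor and the noise scale are arbitrary here, whereas a real mechanism must derive them from the target differential-privacy guarantee.

```python
import math
import random
from datetime import datetime, timedelta

random.seed(7)

# Toy log: case id -> list of (activity, timestamp) events.
log = {
    "c1": [("register", datetime(2024, 1, 1, 9, 0)),
           ("approve",  datetime(2024, 1, 1, 11, 30))],
    "c2": [("register", datetime(2024, 1, 2, 8, 15)),
           ("approve",  datetime(2024, 1, 2, 10, 0))],
}

def laplace(scale):
    # Laplace(0, scale) sample via inverse-CDF sampling.
    u = random.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def anonymise(log, oversample=2, noise_scale_minutes=30.0):
    """Oversample each case and perturb event timestamps with Laplace noise."""
    private_log, counter = {}, 0
    for events in log.values():
        for _ in range(oversample):
            counter += 1
            private_log[f"case_{counter}"] = [
                (act, ts + timedelta(minutes=laplace(noise_scale_minutes)))
                for act, ts in events
            ]
    return private_log

for case, events in anonymise(log).items():
    print(case, [(a, t.strftime("%Y-%m-%d %H:%M")) for a, t in events])
```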
