
Towards Multi-perspective Conformance Checking with Fuzzy Sets

Posted by: Sicui Zhang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English
Author: Sicui Zhang





Conformance checking techniques are widely adopted to pinpoint possible discrepancies between process models and the actual execution of the process. However, state-of-the-art approaches adopt a crisp evaluation of deviations, so that small violations are considered at the same level as significant ones. This affects the quality of the provided diagnostics, especially when some tolerance exists with respect to reasonably small violations, and it hampers the flexibility of the process. In this work, we propose a novel approach that makes it possible to represent actors' tolerance with respect to violations and to account for the severity of deviations when assessing the compliance of executions. We argue that, besides improving the quality of the provided diagnostics, allowing some tolerance in the assessment of deviations also enhances the flexibility of conformance checking techniques and, indirectly, paves the way for improving the resilience of the overall process management system.
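As an illustration of the core idea, the sketch below replaces the crisp 0/1 deviation cost of classical alignment-based conformance checking with a fuzzy severity function. The function shape and the `tolerance` and `hard_limit` parameters are assumptions chosen for this example, not the authors' actual model.

```python
# A minimal illustrative sketch (not the paper's implementation): a fuzzy
# "severity" membership function replaces the crisp 0/1 violation cost used
# in classical alignment-based conformance checking.

def fuzzy_severity(deviation: float, tolerance: float, hard_limit: float) -> float:
    """Map the magnitude of a deviation to a severity in [0, 1].

    Deviations within `tolerance` are fully acceptable (cost 0); deviations
    beyond `hard_limit` are fully unacceptable (cost 1); in between,
    severity grows linearly.
    """
    if deviation <= tolerance:
        return 0.0
    if deviation >= hard_limit:
        return 1.0
    return (deviation - tolerance) / (hard_limit - tolerance)


# A crisp check would flag both traces below identically; the fuzzy cost
# distinguishes a negligible overrun from a severe one.
small = fuzzy_severity(deviation=1.5, tolerance=1.0, hard_limit=10.0)
large = fuzzy_severity(deviation=9.0, tolerance=1.0, hard_limit=10.0)
print(small, large)  # ~0.056 vs ~0.889
```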




Read also

This paper presents a command-line tool, called Entropia, that implements a family of conformance checking measures for process mining founded on the notion of entropy from information theory. The measures allow quantifying classical non-deterministic and stochastic precision and recall quality criteria for process models automatically discovered from traces executed by IT systems and recorded in their event logs. A process model has good precision with respect to the log it was discovered from if it does not encode many traces that are not part of the log, and good recall if it encodes most of the traces from the log. By definition, the measures possess useful properties and can often be computed quickly.
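Entropia's measures operate on (possibly infinite) trace languages via entropy; as a toy illustration of the precision/recall intuition stated above, the following set-based sketch is a deliberate simplification over finite trace sets, not Entropia's actual computation.

```python
# Simplified, set-based reading of the precision/recall definitions above:
# precision penalizes modelled behaviour never observed in the log; recall
# penalizes logged behaviour the model cannot replay.

def precision(model_traces: set, log_traces: set) -> float:
    """Fraction of model behaviour that is also observed in the log."""
    return len(model_traces & log_traces) / len(model_traces)

def recall(model_traces: set, log_traces: set) -> float:
    """Fraction of logged behaviour that the model can replay."""
    return len(model_traces & log_traces) / len(log_traces)

log = {("a", "b", "c"), ("a", "c", "b")}
model = {("a", "b", "c"), ("a", "c", "b"), ("a", "b", "b")}
print(precision(model, log))  # 0.667: one modelled trace never observed
print(recall(model, log))     # 1.0: every logged trace fits the model
```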
Motivated by the application problem of sensor fusion, the author introduced the concept of a graded set. It is reasoned that in a classification problem arising in an information system (represented by an information table), a novel set called a granular set naturally arises. It is realized that in any hierarchical classification problem, a granular set naturally arises. Also, when the target set of objects forms a graded set, the lower and upper approximations of target sets form a graded set. This generalizes the concept of a rough set. It is hoped that a detailed theory of granular/graded sets finds several applications.
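For context, the sketch below shows the classical rough-set construct that the abstract generalizes: the lower and upper approximations of a target set with respect to a partition of objects into indiscernibility classes (granules). The encoding as Python sets is an illustrative choice, not taken from the paper.

```python
# Classical rough-set approximations: a target set X is bracketed by a
# lower approximation (granules fully inside X) and an upper approximation
# (granules that merely intersect X).

def approximations(granules, target):
    lower, upper = set(), set()
    for g in granules:
        g = set(g)
        if g <= target:   # granule entirely contained in the target
            lower |= g
        if g & target:    # granule overlaps the target
            upper |= g
    return lower, upper

granules = [{1, 2}, {3, 4}, {5}]
target = {1, 2, 3}
lower, upper = approximations(granules, target)
print(lower)  # {1, 2}       -- certainly in the target
print(upper)  # {1, 2, 3, 4} -- possibly in the target
```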
Nicolas Bus (2019)
Manually checking models for compliance against building regulation is a time-consuming task for architects and construction engineers. There is thus a need for algorithms that process information from construction projects and report non-compliant elements. Still, automated code-compliance checking raises several obstacles. Building regulations are usually published as human-readable texts, and their content is often ambiguous or incomplete. Also, the vocabulary used for expressing such regulations is very different from the vocabularies used to express Building Information Models (BIM). Furthermore, the high level of detail associated with BIM-contained geometries induces complex calculations. Finally, the level of complexity of the IFC standard also hinders the automation of IFC processing tasks. An approach based on model charts, formal rules, and pre-processors allows translating construction regulations into semantic queries. We further demonstrate the usefulness of this approach through several use cases. We argue our approach is a step forward in bridging the gap between regulation texts and automated checking algorithms. Finally, with the recent building ontology BOT recommended by the W3C Linked Building Data Community Group, we identify perspectives for standardizing and extending our approach.
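As a loose illustration of turning a regulation clause into a machine-checkable rule, the sketch below encodes a hypothetical clear-width requirement for doors. The clause text, property names, and threshold are invented for the example; the paper's actual pipeline works through semantic queries over IFC/BOT-based models rather than hard-coded Python rules.

```python
# Hypothetical clause: "Escape-route doors shall provide a clear width of
# at least 900 mm." The rule is expressed as a check over element
# properties extracted from a building model.

from dataclasses import dataclass

@dataclass
class Door:
    element_id: str
    clear_width_mm: float

MIN_CLEAR_WIDTH_MM = 900.0  # threshold invented for this example

def non_compliant_doors(doors):
    """Report every door violating the (example) clear-width clause."""
    return [d for d in doors if d.clear_width_mm < MIN_CLEAR_WIDTH_MM]

doors = [Door("D-01", 950.0), Door("D-02", 850.0)]
for d in non_compliant_doors(doors):
    print(f"{d.element_id}: clear width {d.clear_width_mm} mm "
          f"< {MIN_CLEAR_WIDTH_MM} mm")
```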
Yuxiang Sun, Bo Yuan, Yufan Xue (2021)
Researchers are increasingly focusing on intelligent games as a hot research area. This article proposes an algorithm that combines multi-attribute management and reinforcement learning methods and applies them to wargaming; it solves the problem of an agent's low rate of winning against specific rules and its inability to converge quickly during intelligent wargame training. The paper studies a multi-attribute decision-making and reinforcement learning algorithm in a wargame simulation environment and obtains data on red versus blue conflict. The weight of each attribute is calculated using intuitionistic fuzzy number weight calculations, and the threat posed by each of the opponent's chess pieces is then determined. Using the red side's reinforcement learning reward function, the actor-critic (AC) framework is trained on that reward function, yielding an algorithm that combines multi-attribute decision-making with reinforcement learning. A simulation experiment confirms that the combined algorithm presented in this paper is significantly more intelligent than a pure reinforcement learning algorithm. By resolving the shortcomings of the agent's neural network, coupled with sparse rewards in large-map combat games, this robust algorithm effectively reduces the difficulty of convergence. It is also the first time in this field that an algorithm design for intelligent wargaming combines multi-attribute decision-making with reinforcement learning, an attempt at interdisciplinary cross-innovation spanning intelligent wargame design and the improvement of reinforcement learning algorithms.
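One plausible reading of the attribute-weighting step is sketched below: intuitionistic fuzzy numbers (pairs (mu, nu) with mu + nu <= 1) are reduced to normalized weights via a simple score function, and a weighted sum then scores the threat of an opposing piece. The exact formulas the paper uses are not given in the abstract, so this is an assumed stand-in, with invented attribute names and values.

```python
# Hedged sketch: deriving attribute weights from intuitionistic fuzzy
# numbers (IFNs) and scoring an opposing piece's threat. An IFN is a pair
# (mu, nu) of membership and non-membership degrees with mu + nu <= 1.

def ifn_weights(ifns):
    """Turn per-attribute IFNs (mu, nu) into normalized weights."""
    # Score function s = mu - nu, shifted to [0, 2] so all terms are positive.
    scores = [mu - nu + 1.0 for mu, nu in ifns]
    total = sum(scores)
    return [s / total for s in scores]

def threat_score(attribute_values, weights):
    """Weighted sum of normalized attribute values for one opposing piece."""
    return sum(v * w for v, w in zip(attribute_values, weights))

# Example attributes (invented): firepower, range, proximity, each in [0, 1].
ifns = [(0.8, 0.1), (0.6, 0.3), (0.5, 0.2)]
weights = ifn_weights(ifns)
print(threat_score([0.9, 0.4, 0.7], weights))  # could feed the RL reward
```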
To inhibit the spread of rumorous information and its severe consequences, traditional fact checking aims at retrieving relevant evidence to verify the veracity of a given claim. Fact checking methods typically use knowledge graphs (KGs) as external repositories and develop reasoning mechanisms to retrieve evidence for verifying a triple claim. However, existing methods focus only on verifying a single claim. As real-world rumorous information is more complex and a textual statement is often composed of multiple clauses (i.e., represented as multiple claims instead of a single one), multi-claim fact checking is not only necessary but also more important for practical applications. Although previous methods for verifying a single triple can be applied repeatedly to verify multiple triples one by one, they ignore the contextual information implied in a multi-claim statement and cannot learn the rich semantic information in the statement as a whole. In this paper, we propose an end-to-end knowledge-enhanced learning and verification method for multi-claim fact checking. Our method consists of two modules: KG-based learning enhancement and multi-claim semantic composition. To fully utilize the contextual information, the KG-based learning enhancement module learns dynamic context-specific representations by selectively aggregating relevant attributes of entities. To capture the compositional semantics of multiple triples, the multi-claim semantic composition module constructs a graph structure to model claim-level interactions and integrates global and salient local semantics with multi-head attention. Experimental results on a real-world dataset and two benchmark datasets show the effectiveness of our method for multi-claim fact checking over KGs.
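The multi-claim semantic composition module is described as modelling claim-level interactions and integrating semantics with multi-head attention; the PyTorch sketch below is an assumed, minimal rendering of that idea, not the paper's model. The architecture, dimensions, and the `ClaimComposer` name are invented for illustration.

```python
# Assumed sketch: composing per-triple claim embeddings with multi-head
# self-attention (claim-level interactions), then pooling into a single
# statement representation for a supported/refuted decision.

import torch
import torch.nn as nn

class ClaimComposer(nn.Module):
    def __init__(self, dim: int = 128, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classify = nn.Linear(dim, 2)  # supported / refuted

    def forward(self, claims: torch.Tensor) -> torch.Tensor:
        # claims: [batch, num_claims, dim], one embedding per triple claim
        mixed, _ = self.attn(claims, claims, claims)  # claim interactions
        statement = mixed.mean(dim=1)                 # global composition
        return self.classify(statement)

model = ClaimComposer()
claims = torch.randn(2, 3, 128)  # 2 statements, 3 claims each
print(model(claims).shape)       # torch.Size([2, 2])
```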
