
A new generalization of the proportional conflict redistribution rule stable in terms of decision

Posted by Arnaud Martin
Publication date: 2008
Research field: Informatics Engineering
Paper language: English
Author: Arnaud Martin





In this chapter, we present and discuss a new generalized proportional conflict redistribution rule. The Dezert-Smarandache extension of Dempster-Shafer theory has revived the study of combination rules, especially for the management of conflict. Many combination rules have been proposed in the last few years. We study several of these combination rules and compare them in terms of decision, on a didactic example and on generated data. Indeed, in real applications, we need a reliable decision, and it is the final result that matters. This chapter shows that a fine proportional conflict redistribution rule should be preferred for combination in belief function theory.
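For readers unfamiliar with proportional conflict redistribution, the following minimal Python sketch illustrates the classical two-source PCR5 rule restricted to singleton focal elements. The function name, frame labels and mass values are illustrative only; the generalized rule discussed in this chapter extends the idea to more than two sources and to composite focal elements.

def pcr5_two_sources(m1, m2):
    """Combine two singleton-only mass functions with the PCR5 rule."""
    hypotheses = set(m1) | set(m2)
    # Conjunctive (consensus) part: product of the masses assigned to the same hypothesis.
    combined = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hypotheses}
    # Each partial conflict m1(x) * m2(y), with x != y, is redistributed to x and y
    # proportionally to the masses that produced it.
    for x in hypotheses:
        for y in hypotheses:
            if x == y:
                continue
            conflict = m1.get(x, 0.0) * m2.get(y, 0.0)
            denom = m1.get(x, 0.0) + m2.get(y, 0.0)
            if denom > 0.0:
                combined[x] += conflict * m1.get(x, 0.0) / denom
                combined[y] += conflict * m2.get(y, 0.0) / denom
    return combined

m1 = {"A": 0.6, "B": 0.4}
m2 = {"A": 0.3, "B": 0.7}
print(pcr5_two_sources(m1, m2))  # approximately {'A': 0.425, 'B': 0.575}; masses still sum to 1

Unlike Dempster's rule, no mass is removed by a global normalization: each partial conflict is reinjected locally on the hypotheses that generated it, which is what makes the resulting decision more stable.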




Read also

Arnaud Martin (2008)
In this chapter, we present two applications of information fusion in order to evaluate the generalized proportional conflict redistribution rule presented in the chapter [Martin06a]. Most of the time, combination rules are evaluated only on simple examples. We study here different combination rules and compare them in terms of decision on real data. Indeed, in real applications, we need a reliable decision, and it is the final result that matters. Two applications are presented here: a fusion of human expert opinions on the kind of underwater sediments depicted on sonar images, and a classifier fusion for radar target recognition.
We present in this paper some examples of how to compute the PCR5 fusion rule by hand for three sources, so that the reader can better understand its mechanism. We also take into account the importance of sources, which is different from the classical discounting of sources.
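To give a flavour of such a hand computation, here is a simplified, illustrative two-source case (not an example taken from the paper): with the frame {A, B}, m1(A) = 0.5, m1(B) = 0.5, m2(A) = 0.8, m2(B) = 0.2, the conjunctive rule gives m(A) = 0.4, m(B) = 0.1, and two partial conflicts m1(A)m2(B) = 0.1 and m1(B)m2(A) = 0.4. PCR5 redistributes each partial conflict proportionally to the masses that produced it, so m_PCR5(A) = 0.4 + 0.1*(0.5/0.7) + 0.4*(0.8/1.3) ≈ 0.72 and m_PCR5(B) = 0.1 + 0.1*(0.2/0.7) + 0.4*(0.5/1.3) ≈ 0.28, which still sum to 1. The three-source case worked out in the paper follows the same principle, with partial conflicts involving three masses.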
Qian Hu, Keyun Qin (2021)
This paper mainly studies rule acquisition and attribute reduction for formal decision contexts based on two new kinds of decision rules, namely I-decision rules and II-decision rules. The premises of these rules are object-oriented concepts, and the conclusions are formal concepts and property-oriented concepts, respectively. Rule acquisition algorithms for I-decision rules and II-decision rules are presented. A comparative analysis of these algorithms with existing algorithms shows that the algorithms presented in this study perform well. Attribute reduction approaches that preserve I-decision rules and II-decision rules are presented using discernibility matrices.
Considering the high heterogeneity of the ontologies published on the web, ontology matching is a crucial issue whose aim is to establish links between an entity of a source ontology and one or several entities of a target ontology. Perfectible similarity measures, considered as sources of information, are combined to establish these links. The theory of belief functions is a powerful mathematical tool for combining such uncertain information. In this paper, we introduce a decision process based on a distance measure to identify the best possible matching entities for a given source entity.
The recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes from the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. Therefore, we need explanations that reveal the reasons why a predictor takes a certain decision. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm. Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance's features that lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy of mimicking the black box.
