
Toward a combination rule to deal with partial conflict and specificity in belief functions theory

Added by: Arnaud Martin
Publication date: 2008
Language: English
Authors: Arnaud Martin





We present and discuss a mixed conjunctive and disjunctive rule, a generalization of conflict repartition rules, and a combination of these two rules. In belief function theory, one of the major problems is the repartition of the conflict, highlighted by Zadeh's famous example. To date, many combination rules have been proposed to solve this problem. Moreover, it can be important to consider the specificity of the experts' responses. In recent years, several unification rules have been proposed. We have shown in our previous works the interest of the proportional conflict redistribution rule. We propose here a mixed combination rule following the proportional conflict redistribution rule modified by a discounting procedure. This rule generalizes many combination rules.
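
To see why the repartition of the conflict matters, here is a minimal Python sketch (not from the paper) of Zadeh's example combined with Dempster's rule: the two experts almost totally disagree, and normalizing away the conflicting mass forces total certainty onto the hypothesis that both experts considered the least plausible.

```python
from itertools import product

# Zadeh's example over the frame {A, B, C}: expert 1 strongly supports A,
# expert 2 strongly supports B, and both give only a small mass to C.
m1 = {frozenset("A"): 0.9, frozenset("C"): 0.1}
m2 = {frozenset("B"): 0.9, frozenset("C"): 0.1}

def dempster(m1, m2):
    """Conjunctive combination followed by Dempster's normalization."""
    combined = {}
    conflict = 0.0
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mx * my
        else:
            conflict += mx * my  # mass sent to the empty set
    # Normalization spreads the conflicting mass over the surviving focal elements
    return {f: v / (1.0 - conflict) for f, v in combined.items()}, conflict

m12, k = dempster(m1, m2)
print(k)    # 0.99 -> almost total conflict
print(m12)  # {frozenset({'C'}): 1.0} -> counterintuitive certainty on C
```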



Related research

Arnaud Martin (2008)
In this chapter, we present and discuss a new generalized proportional conflict redistribution rule. The Dezert-Smarandache extension of the Dempster-Shafer theory has relaunched the studies on combination rules, especially for the management of conflict. Many combination rules have been proposed in the last few years. We study here different combination rules and compare them in terms of decision on a didactic example and on generated data. Indeed, in real applications, we need a reliable decision, and it is the final result that matters. This chapter shows that a fine proportional conflict redistribution rule must be preferred for combination in belief function theory.
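
As an illustration of the proportional conflict redistribution idea discussed in the abstract above, the following sketch implements the basic two-source PCR5 rule in Python (a simplified variant, not the generalized rule of the chapter): each piece of partial conflict m1(X)m2(Y), with X and Y disjoint, is given back to X and Y proportionally to the masses that produced it.

```python
from itertools import product

def pcr5(m1, m2):
    """Two-source PCR5: conjunctive combination, with each partial conflict
    m1(X)*m2(Y) (X and Y disjoint) redistributed to X and Y proportionally
    to m1(X) and m2(Y)."""
    combined = {}
    for (x, mx), (y, my) in product(m1.items(), m2.items()):
        inter = x & y
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mx * my
        else:
            # proportional redistribution of the partial conflict mx * my
            combined[x] = combined.get(x, 0.0) + mx * mx * my / (mx + my)
            combined[y] = combined.get(y, 0.0) + my * my * mx / (mx + my)
    return combined

# On Zadeh's example the mass is no longer forced onto C:
m1 = {frozenset("A"): 0.9, frozenset("C"): 0.1}
m2 = {frozenset("B"): 0.9, frozenset("C"): 0.1}
print(pcr5(m1, m2))  # roughly A: 0.486, B: 0.486, C: 0.028
```
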
Much of human dialogue occurs in semi-cooperative settings, where agents with different goals attempt to agree on common decisions. Negotiations require complex communication and reasoning skills, but success is easy to measure, making this an interesting task for AI. We gather a large dataset of human-human negotiations on a multi-issue bargaining task, where agents who cannot observe each other's reward functions must reach an agreement (or a deal) via natural language dialogue. For the first time, we show it is possible to train end-to-end models for negotiation, which must learn both linguistic and reasoning skills with no annotated dialogue states. We also introduce dialogue rollouts, in which the model plans ahead by simulating possible complete continuations of the conversation, and find that this technique dramatically improves performance. Our code and dataset are publicly available (https://github.com/facebookresearch/end-to-end-negotiator).
Dorra Attiaoui (2015)
Defining and modeling the relation of inclusion between continuous belief functions may be considered an important operation in order to study their behavior. Within this paper we propose and present two forms of inclusion: the strict and the partial one. In order to develop this relation, we study the case of consonant belief functions. To do so, we simulate normal distributions allowing us to model and analyze these relations. Based on that, we determine the parameters influencing and characterizing the two forms of inclusion.
The theory of belief functions manages uncertainty and also proposes a set of combination rules to aggregate the opinions of several sources. Some combination rules mix evidential information from independent sources; other rules are suited to combine evidential information held by dependent sources. In this paper we have two main contributions: first, we suggest a method to quantify the sources' degree of independence, which may guide the choice of the most appropriate set of combination rules; second, we propose a new combination rule that takes into account the sources' degree of independence. The proposed method is illustrated on generated mass functions.
Current technology for autonomous cars primarily focuses on getting the passenger from point A to B. Nevertheless, it has been shown that passengers are afraid of taking a ride in self-driving cars. One way to alleviate this problem is by allowing the passenger to give natural language commands to the car. However, the car can misunderstand the issued command or the visual surroundings, which could lead to uncertain situations. It is desirable that the self-driving car detects these situations and interacts with the passenger to solve them. This paper proposes a model that detects uncertain situations when a command is given and finds the visual objects causing them. Optionally, a question generated by the system describing the uncertain objects is included. We argue that if the car could explain the objects in a human-like way, passengers could gain more confidence in the car's abilities. Thus, we investigate how to (1) detect uncertain situations and their underlying causes, and (2) generate clarifying questions for the passenger. When evaluating on the Talk2Car dataset, we show that the proposed model, \acrfull{pipeline}, improves by \gls{m:ambiguous-absolute-increase} in terms of $IoU_{.5}$ compared to not using \gls{pipeline}. Furthermore, we designed a referring expression generator (REG), \acrfull{reg_model}, tailored to a self-driving car setting, which yields a relative improvement of \gls{m:meteor-relative} METEOR and \gls{m:rouge-relative} ROUGE-L compared with state-of-the-art REG models, and is three times faster.
