
Updating Probabilities

Published by: Peter D Grunwald
Publication date: 2014
Research field: Informatics engineering
Language: English


As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a "naive space", which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR (coarsening at random) in the statistical literature characterizes when "naive" conditioning in a naive space works. We show that the CAR condition holds rather infrequently. We then consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, but show that there are no such conditions for MRE. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
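A minimal simulation makes the abstract's Monty Hall point concrete (an illustrative sketch, not code from the paper): "naive" conditioning, which simply rules out the opened door and renormalizes the uniform prior, predicts a 1/2 chance of winning by staying, while the actual protocol, in which the host always opens a goat door other than the pick, gives switching a 2/3 win rate.

```python
import random

def monty_hall(trials=100_000, seed=0):
    """Simulate the full protocol: the host always opens a goat door
    other than the contestant's pick."""
    rng = random.Random(seed)
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = rng.randrange(3)       # door hiding the car
        pick = 0                     # contestant always picks door 0
        goat_doors = [d for d in (1, 2) if d != car]
        opened = rng.choice(goat_doors)
        other = 3 - pick - opened    # the remaining closed door
        stay_wins += (car == pick)
        switch_wins += (car == other)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
# "Naive" conditioning ignores the host's protocol: it just eliminates
# the opened door and renormalizes the uniform prior, predicting 1/2.
naive = (1 / 3) / (2 / 3)
print(naive, stay, switch)  # 0.5 vs. roughly 0.33 and 0.67
```

The gap between `naive` and the simulated frequencies is exactly the failure of naive conditioning that the CAR condition is meant to characterize.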




Read also

We consider how an agent should update her uncertainty when it is represented by a set P of probability distributions and the agent observes that a random variable X takes on value x, given that the agent makes decisions using the minimax criterion, perhaps the best-studied and most commonly-used criterion in the literature. We adopt a game-theoretic framework, where the agent plays against a bookie, who chooses some distribution from P. We consider two reasonable games that differ in what the bookie knows when he makes his choice. Anomalies that have been observed before, like time inconsistency, can be understood as arising because different games are being played, against bookies with different information. We characterize the important special cases in which the optimal decision rules according to the minimax criterion amount to either conditioning or simply ignoring the information. Finally, we consider the relationship between conditioning and calibration when uncertainty is described by sets of probabilities.
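The minimax criterion over a set P of distributions can be sketched in a few lines (an illustrative toy with invented distributions and loss matrix, not the paper's game-theoretic setup): the agent picks the action whose worst-case expected loss, taken over all distributions the bookie might choose from P, is smallest.

```python
def minimax_action(dists, losses):
    """Return the index of the action minimizing worst-case expected
    loss over a set of candidate distributions.

    dists:  list of probability vectors over states (the set P)
    losses: losses[action][state]
    """
    def worst_case(a):
        return max(sum(p[s] * losses[a][s] for s in range(len(p)))
                   for p in dists)
    return min(range(len(losses)), key=worst_case)

# Two candidate distributions over two states (hypothetical numbers).
P = [[0.9, 0.1], [0.2, 0.8]]
L = [[0.0, 1.0],   # action 0: costly if state 1
     [1.0, 0.0],   # action 1: costly if state 0
     [0.4, 0.4]]   # action 2: hedged
print(minimax_action(P, L))  # -> 2, the hedged action
```

Actions 0 and 1 each have worst-case loss of at least 0.8, so the hedged action (worst case 0.4) wins; whether updating P by conditioning before applying this rule is optimal is exactly the question the abstract's two games address.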
In this work we describe preferential Description Logics of typicality, a nonmonotonic extension of standard Description Logics by means of a typicality operator T allowing one to extend a knowledge base with inclusions of the form T(C) v D, whose intuitive meaning is that normally/typically Cs are also Ds. This extension is based on a minimal model semantics corresponding to a notion of rational closure, built upon preferential models. We recall the basic concepts underlying preferential Description Logics. We also present two extensions of the preferential semantics: on the one hand, we consider probabilistic extensions, based on a distributed semantics that is suitable for tackling the problem of commonsense concept combination; on the other hand, we consider other strengthenings of the rational closure semantics and construction to avoid the so-called blocking of property inheritance problem.
Alf C. Zimmer (2013)
Theoretically as well as experimentally it is investigated how people represent their knowledge in order to make decisions or to share their knowledge with others. Experiment 1 probes the ways in which people gather information about the frequencies of events and how the requested response mode, that is, numerical vs. verbal estimates, interferes with this knowledge. The least interference occurs if the subjects are allowed to give verbal responses. From this it is concluded that processing knowledge about uncertainty categorically, that is, by means of verbal expressions, imposes less mental workload on the decision maker than numerical processing. Possibility theory is used as a framework for modeling the individual usage of verbal categories for grades of uncertainty. The elastic constraints on the verbal expressions for every single subject are determined in Experiment 2 by means of sequential calibration. In further experiments it is shown that the superiority of the verbal processing of knowledge about uncertainty quite generally reduces persistent biases reported in the literature: conservatism (Experiment 3) and negligence of regression (Experiment 4). The reanalysis of Hormann's data reveals that in verbal judgments people exhibit sensitivity to base rates and are not prone to the conjunction fallacy. In a final experiment (5) about predictions in a real-life situation it turns out that in a numerical forecasting task subjects restricted themselves to those parts of their knowledge which are numerical. On the other hand, subjects in a verbal forecasting task accessed verbally as well as numerically stated knowledge. Forecasting is structurally related to the estimation of probabilities for rare events insofar as supporting and contradicting arguments have to be evaluated and the choice of the final judgment has to be justified according to the evidence brought forward.
In order to assist people in such choice situations, a formal model for the interactive checking of arguments has been developed. The model transforms the natural-language quantifiers used in the arguments into fuzzy numbers and evaluates the given train of arguments by means of fuzzy numerical operations. Ambiguities in the meanings of quantifiers are resolved interactively.
We start with the distinction of outcome- and belief-based Bayesian models of the sequential update of agents' beliefs and subjective reliability of sources (trust). We then focus on discussing the influential Bayesian model of belief-based trust update by Eric Olsson, which models dichotomic events and explicitly represents anti-reliability. After sketching some disastrous recent results for this, perhaps the most promising, model of belief update, we show new simulation results for the temporal dynamics of learning belief with and without trust update and with and without communication. The results seem to shed at least a somewhat more positive light on the communicating-and-trust-updating agents. This may be a light at the end of the tunnel for belief-based models of trust updating, but the interpretation of the clear findings is much less clear.
We propose a nonmonotonic Description Logic of typicality able to account for the phenomenon of concept combination of prototypical concepts. The proposed logic relies on the logic of typicality ALC TR, whose semantics is based on the notion of rational closure, as well as on the distributed semantics of probabilistic Description Logics, and is equipped with a cognitive heuristic used by humans for concept composition. We first extend the logic of typicality ALC TR by typicality inclusions whose intuitive meaning is that, with probability p, typical Cs are Ds. As in the distributed semantics, we define different scenarios containing only some typicality inclusions, each one having a suitable probability. We then focus on those scenarios whose probabilities belong to a given and fixed range, and we exploit such scenarios in order to ascribe typical properties to a concept C obtained as the combination of two prototypical concepts. We also show that reasoning in the proposed Description Logic is EXPTIME-complete, as it is for the underlying ALC.
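The scenario construction of the distributed semantics described above can be sketched as follows (an illustrative toy; the inclusion names and probabilities are invented): each scenario keeps a subset of the typicality inclusions, and its probability is the product of p over kept inclusions and (1 - p) over dropped ones, so the scenarios form a probability distribution that can then be filtered by a fixed range.

```python
from itertools import combinations

def scenarios(inclusions):
    """Enumerate scenarios a la the distributed semantics.

    inclusions: dict mapping a typicality inclusion (as a string)
                to its probability p.
    Returns a list of (kept_inclusions, probability) pairs.
    """
    items = list(inclusions.items())
    result = []
    for r in range(len(items) + 1):
        for kept in combinations(items, r):
            kept_names = {name for name, _ in kept}
            prob = 1.0
            for name, p in items:
                prob *= p if name in kept_names else 1 - p
            result.append((kept_names, prob))
    return result

# Hypothetical knowledge base with two typicality inclusions.
kb = {"T(Bird) v Fly": 0.9, "T(Penguin) v NotFly": 0.8}
for kept, prob in scenarios(kb):
    print(sorted(kept), round(prob, 2))

# Restricting attention to scenarios in a fixed probability range:
in_range = [s for s in scenarios(kb) if 0.1 <= s[1] <= 0.5]
```

Because every inclusion is independently kept or dropped, the scenario probabilities always sum to 1, mirroring how the distributed semantics induces a distribution over worlds.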
