
Exploiting Functional Dependencies in Qualitative Probabilistic Reasoning

Added by Michael P. Wellman
Publication date: 2013
Language: English





Functional dependencies restrict the potential interactions among variables connected in a probabilistic network. This restriction can be exploited in qualitative probabilistic reasoning by introducing deterministic variables and modifying the inference rules to produce stronger conclusions in the presence of functional relations. I describe how to accomplish these modifications in qualitative probabilistic networks by exhibiting the update procedures for graphical transformations involving probabilistic and deterministic variables and combinations. A simple example demonstrates that the augmented scheme can reduce qualitative ambiguity that would arise without the special treatment of functional dependency. Analysis of qualitative synergy reveals that new higher-order relations are required to reason effectively about synergistic interactions among deterministic variables.
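To make the qualitative machinery concrete, here is a minimal Python sketch of the standard QPN sign algebra (signs +, -, 0, ?) on which the paper's inference rules build. It is an illustration only, not the paper's implementation; it shows how combining a positive and a negative parallel influence yields the ambiguous sign ?, exactly the kind of ambiguity the functional-dependency treatment is designed to reduce.

# Standard QPN sign algebra: signs '+', '-', '0', '?'.
# sign_product chains influences along a path; sign_sum combines
# parallel paths. Illustrative sketch, not the paper's code.

def sign_product(a, b):
    # Chaining: '0' annihilates, '?' dominates otherwise,
    # and like signs compose to '+'.
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return '+' if a == b else '-'

def sign_sum(a, b):
    # Parallel combination: '0' is the identity, equal signs agree,
    # and '+' with '-' is ambiguous.
    if a == '0':
        return b
    if b == '0':
        return a
    if a == b:
        return a
    return '?'

# Two paths from an action A to a value node V: one positive, one negative.
print(sign_sum('+', '-'))      # '?': ambiguous without more structure
# Chaining a positive and a negative influence along a single path:
print(sign_product('+', '-'))  # '-'

With only signs available, the two opposing paths combine to ?; the paper's contribution is that when the interaction is a functional dependency, the augmented update rules can often replace such a ? with a definite sign.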



Related research

Michael P. Wellman (2013)
Bayesian networks provide a probabilistic semantics for qualitative assertions about likelihood. A qualitative reasoner based on an algebra over these assertions can derive further conclusions about the influence of actions. While the conclusions are much weaker than those computed from complete probability distributions, they are still valuable for suggesting potential actions, eliminating obviously inferior plans, identifying important tradeoffs, and explaining probabilistic models.
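As a small illustration of deriving the influence of an action, the sketch below (illustrative only, reusing the same sign product as in the earlier sketch) chains the signs along a single path from an action to an outcome; a resulting '-' is enough to flag the plan as qualitatively inferior when the goal is to raise the outcome.

from functools import reduce

# Same QPN sign product as above: chains qualitative influences
# along a directed path.
def sign_product(a, b):
    if a == '0' or b == '0':
        return '0'
    if a == '?' or b == '?':
        return '?'
    return '+' if a == b else '-'

# The action raises X, X lowers Y, and Y raises the outcome of interest.
path = ['+', '-', '+']
print(reduce(sign_product, path))  # '-': the action lowers the outcome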
Xinde Li (2007)
This paper deals with enriched qualitative belief functions for reasoning under uncertainty and for combining information expressed in natural language through linguistic labels. In this work, two possible enrichments (quantitative and/or qualitative) of linguistic labels are considered, and operators (addition, multiplication, division, etc.) for dealing with them are proposed and explained. We denote them $qe$-operators, $qe$ standing for qualitative-enriched operators. These operators can be seen as a direct extension of the classical qualitative operators ($q$-operators) proposed recently in the Dezert-Smarandache Theory of plausible and paradoxist reasoning (DSmT); the $q$-operators are also justified in detail in this paper. The quantitative enrichment of a linguistic label is a numerical supporting degree in $[0,\infty)$, while the qualitative enrichment takes its values in a finite ordered set of linguistic values. Quantitative enrichment is less precise than qualitative enrichment, but it is expected to be closer to what human experts can easily provide when expressing linguistic labels with supporting degrees. Two simple examples are given to show how the fusion of qualitative-enriched belief assignments can be done.
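The paper defines the qe-operators precisely; as a rough, hypothetical sketch of the underlying idea, the Python fragment below pairs a label from an ordered set L0 < ... < Ln with a numeric supporting degree and uses a saturating label addition. Both the EnrichedLabel class and its combination rules are illustrative assumptions, not the paper's definitions.

# Hypothetical sketch of a quantitatively enriched linguistic label:
# an index into an ordered label set L0 < L1 < ... < Ln, plus a numeric
# supporting degree in [0, inf). The saturating addition below is an
# assumption for illustration, not the paper's qe-operator definition.

N = 5  # labels L0 .. L5, with L5 the maximal label

class EnrichedLabel:
    def __init__(self, index, degree):
        self.index = index    # position in the ordered label set
        self.degree = degree  # quantitative supporting degree

    def __add__(self, other):
        # Label addition saturates at the top of the ordered set.
        return EnrichedLabel(min(self.index + other.index, N),
                             self.degree + other.degree)

    def __repr__(self):
        return f"L{self.index}(deg={self.degree})"

print(EnrichedLabel(2, 0.4) + EnrichedLabel(4, 0.8))  # L5(deg=1.2), saturated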
We extend the theory of d-separation to cases in which data instances are not independent and identically distributed. We show that applying the rules of d-separation directly to the structure of probabilistic models of relational data inaccurately infers conditional independence. We introduce relational d-separation, a theory for deriving conditional independence facts from relational models. We provide a new representation, the abstract ground graph, that enables a sound, complete, and computationally efficient method for answering d-separation queries about relational models, and we present empirical results that demonstrate effectiveness.
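For contrast with the relational setting, the i.i.d. d-separation query that the paper generalizes can be answered by the classic moralized-ancestral-graph reduction; the sketch below (using networkx, and illustrative rather than the paper's abstract-ground-graph method) demonstrates it on a collider.

import networkx as nx
from itertools import combinations

def d_separated(G, x, y, z):
    # Classic reduction for i.i.d. data: restrict to ancestors of
    # {x, y} plus z, moralize (marry co-parents, drop directions),
    # delete z, and test whether x and y are disconnected.
    keep = set(z) | {x, y}
    for v in list(keep):
        keep |= nx.ancestors(G, v)
    H = G.subgraph(keep).copy()
    M = nx.Graph(H.edges())              # drop edge directions
    M.add_nodes_from(H.nodes())
    for v in H.nodes():                  # marry co-parents
        for p, q in combinations(H.predecessors(v), 2):
            M.add_edge(p, q)
    M.remove_nodes_from(z)
    return not (x in M and y in M and nx.has_path(M, x, y))

# Collider A -> C <- B: A and B are d-separated marginally,
# but conditioning on C connects them.
G = nx.DiGraph([('A', 'C'), ('B', 'C')])
print(d_separated(G, 'A', 'B', set()))  # True
print(d_separated(G, 'A', 'B', {'C'}))  # False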
Markov Logic Networks (MLNs), which elegantly combine logic rules and probabilistic graphical models, can be used to address many knowledge graph problems. However, inference in MLN is computationally intensive, making the industrial-scale application of MLN very difficult. In recent years, graph neural networks (GNNs) have emerged as efficient and effective tools for large-scale graph problems. Nevertheless, GNNs do not explicitly incorporate prior logic rules into the models, and may require many labeled examples for a target task. In this paper, we explore the combination of MLNs and GNNs, and use graph neural networks for variational inference in MLN. We propose a GNN variant, named ExpressGNN, which strikes a nice balance between the representation power and the simplicity of the model. Our extensive experiments on several benchmark datasets demonstrate that ExpressGNN leads to effective and efficient probabilistic logic reasoning.
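The source of the computational burden is the MLN joint, which scores a possible world x as p(x) proportional to exp(sum_i w_i * n_i(x)), where n_i(x) counts the satisfied groundings of rule i; exact inference must sum this score over all worlds. The sketch below computes the unnormalized score for one rule on a toy world; the rule, constants, and weight are invented for illustration.

from itertools import product
import math

# Toy world for the classic hypothetical rule:
#   Friends(a, b) AND Smokes(a) => Smokes(b), with weight w.
people = ['anna', 'bob', 'carl']
friends = {('anna', 'bob'), ('bob', 'carl')}
smokes = {'anna'}
w = 1.5

def satisfied_groundings():
    # Count groundings (a, b) for which the implication holds.
    n = 0
    for a, b in product(people, repeat=2):
        antecedent = (a, b) in friends and a in smokes
        n += (not antecedent) or (b in smokes)
    return n

# Unnormalized MLN score exp(w * n(x)) of this single world.
print(math.exp(w * satisfied_groundings()))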
In this thesis, we introduce a novel formal framework to represent and reason about qualitative direction and distance relations between extended objects using Answer Set Programming (ASP). We take Cardinal Directional Calculus (CDC) as a starting point and extend CDC with new sorts of constraints involving defaults, preferences, and negation; we call this extended version nCDC. We then further extend nCDC with a qualitative distance relation and name this extension nCDC+. For CDC, nCDC, and nCDC+, we introduce an ASP-based general framework to solve consistency checking problems, address composition and inversion of qualitative spatial relations, infer unknown or missing relations between objects, and find a suitable configuration of objects that fulfills a given inquiry.
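As a minimal illustration of the atoms such a framework reasons over, the sketch below computes a basic CDC relation between two axis-aligned bounding boxes: the set of the reference object's nine tiles (B, N, NE, E, SE, S, SW, W, NW) that the target overlaps. This is an illustrative grounding of the calculus, not the thesis's ASP encoding.

# The reference box partitions the plane into nine tiles: B (the box
# itself) plus N, NE, E, SE, S, SW, W, NW. The basic CDC relation is
# the set of tiles the target box overlaps.

def overlaps(lo1, hi1, lo2, hi2):
    return lo1 < hi2 and lo2 < hi1

def cdc_relation(target, reference):
    tx1, ty1, tx2, ty2 = target
    rx1, ry1, rx2, ry2 = reference
    cols = {'W': (float('-inf'), rx1), 'mid': (rx1, rx2),
            'E': (rx2, float('inf'))}
    rows = {'S': (float('-inf'), ry1), 'mid': (ry1, ry2),
            'N': (ry2, float('inf'))}
    names = {('mid', 'mid'): 'B', ('mid', 'N'): 'N', ('E', 'N'): 'NE',
             ('E', 'mid'): 'E', ('E', 'S'): 'SE', ('mid', 'S'): 'S',
             ('W', 'S'): 'SW', ('W', 'mid'): 'W', ('W', 'N'): 'NW'}
    rel = set()
    for c, (cx1, cx2) in cols.items():
        for r, (cy1, cy2) in rows.items():
            if overlaps(tx1, tx2, cx1, cx2) and overlaps(ty1, ty2, cy1, cy2):
                rel.add(names[(c, r)])
    return rel

# Target straddling the reference's north edge: relation {'B', 'N'}.
print(cdc_relation((1, 2, 3, 5), (0, 0, 4, 4)))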
