
Domain-Liftability of Relational Marginal Polytopes

Posted by: Ondrej Kuzelka
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





We study computational aspects of relational marginal polytopes, the statistical relational learning counterparts of the marginal polytopes well known from probabilistic graphical models. Given a first-order logic formula, its relational marginal statistic is the fraction of the formula's groundings that are true in a given possible world. For a list of first-order logic formulas, the relational marginal polytope is the set of all realizable vectors of expected values of the corresponding relational marginal statistics. In this paper, we study the following two problems: (i) Do domain-liftability results for the partition functions of Markov logic networks (MLNs) carry over to the problem of relational marginal polytope construction? (ii) Is the relational marginal polytope containment problem hard under plausible complexity-theoretic assumptions? Our positive results have consequences for lifted weight learning of MLNs. In particular, we show that weight learning of MLNs is domain-liftable whenever the computation of the partition function of the respective MLN is domain-liftable (a result that had not been rigorously proven before).
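As a concrete illustration of the statistic defined above, the following minimal Python sketch enumerates all groundings of a formula over a small finite domain and returns the fraction that hold in a fixed possible world. The domain, the relations "smokes" and "friends", and the formula are illustrative stand-ins, not taken from the paper; brute-force enumeration over groundings is exactly what lifted inference is designed to avoid.

    from itertools import product

    # A toy possible world over the domain {0, 1, 2}.  The unary relation
    # "smokes" and the binary relation "friends" are illustrative only.
    domain = [0, 1, 2]
    smokes = {0, 2}
    friends = {(0, 1), (1, 2), (2, 2)}

    def relational_marginal_statistic(formula, arity, domain):
        # Fraction of groundings of `formula` that are true in the world;
        # `formula` maps a tuple of domain elements to a Boolean and
        # `arity` is its number of free variables.
        groundings = list(product(domain, repeat=arity))
        satisfied = sum(1 for g in groundings if formula(g))
        return satisfied / len(groundings)

    # Formula with free variables (x, y):  friends(x, y) -> smokes(y)
    def phi(g):
        x, y = g
        return (x, y) not in friends or y in smokes

    print(relational_marginal_statistic(phi, 2, domain))  # 8/9, about 0.889

The relational marginal polytope for a list of formulas then collects, over all distributions on possible worlds, the expected values of these per-formula statistics.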




Read also

Explainability is essential for autonomous vehicles and other robotic systems that interact with humans and other objects during operation. Humans need to understand and anticipate the actions taken by the machines for trustful and safe cooperation. In this work, we aim to enable the explainability of an autonomous driving system at the design stage by incorporating expert domain knowledge into the model. We propose Grounded Relational Inference (GRI), which models an interactive system's underlying dynamics by inferring an interaction graph representing the agents' relations. We ensure an interpretable interaction graph by grounding the relational latent space in semantic behaviors defined with expert domain knowledge. We demonstrate that GRI can model interactive traffic scenarios in both simulated and real-world settings, and generate interpretable graphs explaining the vehicles' behavior through their interactions.
Thomas Kahle (2008)
A neighborliness property of marginal polytopes of hierarchical models, depending on the cardinality of the smallest non-face of the underlying simplicial complex, is shown. The case of binary variables is studied explicitly, and the general case is then reduced to the binary case. A Markov basis is given for binary hierarchical models whose simplicial complex is the complement of an interval.
In this paper, we explore a connection between binary hierarchical models, their marginal polytopes, and codeword polytopes, the convex hulls of linear codes. The class of linear codes that are realizable by hierarchical models is determined. We classify all full-dimensional polytopes with the property that their vertices form a linear code and give an algorithm that determines them.
The existence of the maximum likelihood estimate in hierarchical loglinear models is crucial to the reliability of inference for this model. Determining whether the estimate exists is equivalent to determining whether the sufficient statistics vector $t$ belongs to the boundary of the marginal polytope of the model. The dimension of the smallest face $F_t$ containing $t$ determines the dimension of the reduced model which should be considered for correct inference. For higher-dimensional problems, it is not possible to compute $F_t$ exactly. Massam and Wang (2015) found an outer approximation to $F_t$ using a collection of sub-models of the original model. This paper refines the methodology to find an outer approximation and devises a new methodology to find an inner approximation. The inner approximation is given not in terms of a face of the marginal polytope, but in terms of a subset of the vertices of $F_t$. Knowing $F_t$ exactly indicates which cell probabilities have maximum likelihood estimates equal to $0$. When $F_t$ cannot be obtained exactly, we can use, first, the outer approximation $F_2$ to reduce the dimension of the problem and, then, the inner approximation $F_1$ to obtain correct estimates of cell probabilities corresponding to elements of $F_1$ and improve the estimates of the remaining probabilities corresponding to elements in $F_2 \setminus F_1$. Using both real-world and simulated data, we illustrate our results and show that our methodology scales to high dimensions.
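The boundary question at the core of this methodology can be made concrete on a small example: a point $t$ lies in the relative interior of the marginal polytope exactly when it is a strictly positive convex combination of the polytope's vertices, which a linear program can check whenever an explicit vertex list is available. Below is a minimal Python sketch of that test, assuming SciPy is installed; enumerating vertices is feasible only in low dimension, which is precisely the limitation motivating the approximations above, so this is a toy stand-in rather than the authors' algorithm.

    import numpy as np
    from scipy.optimize import linprog

    def face_test(vertices, t, tol=1e-9):
        # t is in the relative interior of conv(vertices) iff it is a
        # strictly positive convex combination of the vertices; the LP
        # maximizes the smallest combination coefficient eps.
        V = np.asarray(vertices, dtype=float).T          # columns = vertices
        d, n = V.shape
        c = np.zeros(n + 1)
        c[-1] = -1.0                                     # maximize eps
        A_eq = np.hstack([np.vstack([V, np.ones((1, n))]),
                          np.zeros((d + 1, 1))])         # V @ lam = t, sum(lam) = 1
        b_eq = np.append(t, 1.0)
        A_ub = np.hstack([-np.eye(n), np.ones((n, 1))])  # eps <= lam_i for all i
        b_ub = np.zeros(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, 1)] * n + [(0, 1)])
        if not res.success:
            return "outside"
        return "interior" if res.x[-1] > tol else "boundary"

    # Unit square: (0.5, 0.5) is interior, (1, 0.5) lies on an edge.
    square = [(0, 0), (1, 0), (1, 1), (0, 1)]
    print(face_test(square, [0.5, 0.5]))   # interior
    print(face_test(square, [1.0, 0.5]))   # boundary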
Our goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available: examples are labeled with the correct execution result, but not the program itself. Consequently, we must search the space of programs for those that output the correct result, while not being misled by spurious programs: incorrect programs that coincidentally output the correct result. We connect two common learning paradigms, reinforcement learning (RL) and maximum marginal likelihood (MML), and then present a new learning algorithm that combines the strengths of both. The new algorithm guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL, and by updating parameters such that probability is spread more evenly across consistent programs. We apply our learning algorithm to a new neural semantic parser and show significant gains over existing state-of-the-art results on a recent context-dependent semantic parsing task.
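The MML ingredient of this update, renormalizing the model's probability mass over only the consistent programs and flattening it so that no single program dominates, can be sketched in a few lines of Python. The function name and the flattening knob beta below are our own illustrative choices, not parameters named in the paper.

    import numpy as np

    def consistent_program_weights(log_probs, consistent, beta=1.0):
        # Per-program weights for a gradient update toward consistent
        # programs.  Plain MML weights each consistent program by its
        # renormalized model probability; beta < 1 flattens the
        # distribution, spreading probability more evenly.
        log_probs = np.asarray(log_probs, dtype=float)
        mask = np.asarray(consistent, dtype=bool)
        scores = np.where(mask, beta * log_probs, -np.inf)
        scores -= scores.max()              # stabilize the softmax
        w = np.exp(scores)                  # inconsistent programs get 0
        return w / w.sum()

    # Three candidate programs found by search; the second one does not
    # output the correct result and therefore receives zero weight.
    print(consistent_program_weights([2.0, 1.0, 0.5], [True, False, True]))
    print(consistent_program_weights([2.0, 1.0, 0.5], [True, False, True],
                                     beta=0.5))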
