
Extracting Optimal Explanations for Ensemble Trees via Logical Reasoning

Added by Gelin Zhang
Publication date: 2021
Research language: English





Ensemble trees are popular machine learning models that often yield high prediction performance when analysing structured data. Although individual small decision trees are deemed explainable by nature, an ensemble of large trees is often difficult to understand. In this work, we propose an approach called optimised explanation (OptExplain) that faithfully extracts global explanations of ensemble trees using a combination of logical reasoning, sampling and optimisation. Building on top of this, we propose a method called the profile of equivalent classes (ProClass), which uses MAX-SAT to simplify the explanation even further. Our experimental study on several datasets shows that our approach can provide high-quality explanations for large ensemble tree models, and that it outperforms recent top-performing methods.
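The paper's full pipeline is not reproduced here, but its starting point is easy to make concrete: every root-to-leaf path in a decision tree is a conjunction of feature tests, so a forest can be read off as a set of logical rules. Below is a minimal Python sketch of that reading using scikit-learn's tree internals; the dataset, hyperparameters, and rule format are illustrative stand-ins, and the logical reasoning, sampling, and optimisation that OptExplain layers on top are omitted.

```python
# Read an ensemble of trees as a set of logical rules: each root-to-leaf
# path is a conjunction of threshold tests implying a predicted class.
# This only illustrates the raw rule set that explanation methods start
# from; it is not the OptExplain algorithm itself.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=3, max_depth=3,
                                random_state=0).fit(X, y)

def paths_as_rules(tree):
    """Yield (list of literals, predicted class) for each leaf."""
    t = tree.tree_
    def walk(node, literals):
        if t.children_left[node] == t.children_right[node]:  # leaf node
            yield literals, int(t.value[node].argmax())
        else:
            f, thr = t.feature[node], t.threshold[node]
            yield from walk(t.children_left[node],
                            literals + [f"x{f} <= {thr:.2f}"])
            yield from walk(t.children_right[node],
                            literals + [f"x{f} > {thr:.2f}"])
    yield from walk(0, [])

for i, est in enumerate(forest.estimators_):
    for literals, cls in paths_as_rules(est):
        print(f"tree {i}: IF {' AND '.join(literals)} THEN class {cls}")
```

In this picture, a simplification step like ProClass would hand such rule sets to a MAX-SAT solver to be compressed further; the paper's exact encoding is not shown here.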



Related Research

We introduce an extension of Hoare logic for call-by-value higher-order functions with ML-like local reference generation. Local references may be generated dynamically and exported outside their scope, may store higher-order functions, and may be used to construct complex mutable data structures. This primitive is captured logically using a predicate asserting reachability of a reference name from a possibly higher-order datum, together with quantifiers over hidden references. We explore the logic's descriptive and reasoning power with non-trivial programming examples combining higher-order procedures and dynamically generated local state. Axioms for reachability and local invariants play a central role in reasoning about the examples.
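The paper concerns the logic rather than any particular language, but the idiom it targets can be shown with a toy analogue. In the Python sketch below (illustrative, not from the paper), a mutable cell is generated locally, stays hidden from callers, and escapes its defining scope through the higher-order values that capture it; reachability is exactly the question of which values can still touch that cell.

```python
# Toy analogue of ML-style local reference generation: the cell is a
# hidden local reference, and the returned closures export access to it
# outside the scope where it was created.
def make_counter():
    cell = [0]              # local reference, invisible to callers
    def incr():             # both closures can reach the hidden cell
        cell[0] += 1
    def read():
        return cell[0]
    return incr, read       # the reference escapes via higher-order values

incr, read = make_counter()
incr()
incr()
assert read() == 2          # shared hidden state, reachable only through
                            # the two exported closures
```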
Graded modal type systems and coeffects are becoming a standard formalism for context-dependent computations in which code usage plays a central role. The theory of program equivalence for modal and coeffectful languages, however, is considerably underdeveloped compared to the denotational and operational semantics of such languages. This raises the question of how much of the theory of ordinary program equivalence can be recovered in a modal scenario. In this work, we show that coinductive equivalences can be extended to a modal setting, and we do so by generalising Abramsky's applicative bisimilarity to coeffectful behaviours. To achieve this goal, we develop a general theory of ternary program relations based on the novel notion of a comonadic lax extension, on top of which we define a modal extension of Abramsky's applicative bisimilarity (which we dub modal applicative bisimilarity). We prove such a relation to be a congruence, thereby obtaining a compositional technique for reasoning about modal and coeffectful behaviours. But this is not the end of the story: we also establish a correspondence between modal program relations and program distances. This correspondence shows that modal applicative bisimilarity and a suitably extended applicative bisimilarity distance coincide, revealing that modal program equivalences and program distances are two sides of the same coin.
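The paper's calculus and its bisimilarity construction are beyond a short sketch, but the notion of "code usage" that grades track has a very simple instance: bounding how many times a variable may be consumed. The toy checker below is an illustrative stand-in (its expression encoding and grade policy are assumptions, not the paper's system): it counts free-variable occurrences in a small lambda-term representation and compares the counts against declared usage grades.

```python
# Count how many times each free variable is used in a tiny expression
# language, then check the counts against declared grades (upper bounds).
from collections import Counter

def uses(expr):
    """expr is ('var', name) | ('app', fun, arg) | ('lam', name, body)."""
    tag = expr[0]
    if tag == 'var':
        return Counter([expr[1]])
    if tag == 'app':
        return uses(expr[1]) + uses(expr[2])
    if tag == 'lam':
        counts = uses(expr[2])
        counts.pop(expr[1], None)   # bound variable is no longer free
        return counts

# \f. f (f x): 'f' is bound, 'x' is free and used once.
term = ('lam', 'f', ('app', ('var', 'f'),
                     ('app', ('var', 'f'), ('var', 'x'))))
declared = {'x': 1}                 # grade: x may be used at most once
actual = uses(term)
ok = (set(actual) <= set(declared)
      and all(actual[v] <= g for v, g in declared.items()))
print(actual, "well-graded" if ok else "grade violation")
```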
Since the proof of the four color theorem in 1976, computer-generated proofs have become a reality in mathematics and computer science. During the last decade, verified proof assistants have increasingly been used to formally check the validity of such proofs. In this paper, we describe a formalized theory of size-optimal sorting networks. From this formalization we extract a certified checker that successfully verifies computer-generated proofs of optimality on up to 8 inputs. The checker relies on an untrusted oracle to shortcut the search for witnesses on more than 1.6 million NP-complete subproblems.
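The formalisation itself lives inside a proof assistant, but the property being certified is concrete. By the classical zero-one principle, a comparator network on n wires sorts every input iff it sorts all 2^n bit vectors, so checking a candidate network is a finite (if exponential) enumeration. The sketch below is plain Python, not the extracted certified checker, and the example network is the standard size-optimal one for 4 inputs.

```python
# Zero-one principle: a comparator network sorts all inputs iff it sorts
# every 0/1 vector. Enumerate all 2^n bit vectors and run the network.
from itertools import product

def sorts_all(network, n):
    """network: list of comparators (i, j) with i < j, applied in order."""
    for bits in product((0, 1), repeat=n):
        v = list(bits)
        for i, j in network:
            if v[i] > v[j]:
                v[i], v[j] = v[j], v[i]
        if any(v[k] > v[k + 1] for k in range(n - 1)):
            return False
    return True

# The size-optimal 4-input network uses 5 comparators:
net4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]
print(sorts_all(net4, 4))   # True
```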
Logical reasoning, which is closely related to human cognition, is of vital importance in humans' understanding of texts. Recent years have witnessed increasing attention to machines' logical reasoning abilities. However, previous studies commonly apply ad-hoc methods to model pre-defined relation patterns, such as linking named entities, which considers only global knowledge components related to commonsense, without local perception of complete facts or events. Such a methodology is insufficient to deal with complicated logical structures. Therefore, we argue that the natural logic units are the backbone constituents of the sentence, such as subject-verb-object facts, covering both the global and local knowledge pieces that are necessary as the basis for logical reasoning. Instead of building ad-hoc graphs, we propose a more general and convenient fact-driven approach that constructs a supergraph on top of our newly defined fact units, and enhances the supergraph with further explicit guidance from local question and option interactions. Experiments on two challenging logical reasoning benchmark datasets, ReClor and LogiQA, show that our proposed model, Focal Reasoner, outperforms the baseline models dramatically. It can also be smoothly applied to other downstream tasks such as MuTual, a dialogue reasoning dataset, achieving competitive results.
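The supergraph idea can be made concrete with a small sketch. Assuming subject-verb-object triples have already been extracted (the hand-written triples and the shared-argument linking rule below are simplifying assumptions, not the paper's exact construction), facts become nodes and any two facts sharing an argument are connected:

```python
# Build a toy fact supergraph: nodes are SVO fact units, and an edge
# links two facts that share a subject or object, so chains of facts
# become paths a reasoner can traverse.
facts = [
    ("the committee", "approved", "the proposal"),
    ("the proposal", "reduces", "funding"),
    ("funding", "supports", "the library"),
]

edges = set()
for a, (sa, _, oa) in enumerate(facts):
    for b, (sb, _, ob) in enumerate(facts):
        if a < b and ({sa, oa} & {sb, ob}):
            edges.add((a, b))

print(edges)   # {(0, 1), (1, 2)}: a two-hop chain across the facts
```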
Many commonsense reasoning NLP tasks involve choosing between one or more possible answers to a question or prompt based on knowledge that is often implicit. Large pretrained language models (PLMs) can achieve near-human performance on such tasks, while providing little human-interpretable evidence of the underlying reasoning they use. In this work, we show how to use these same models to generate such evidence: inspired by the contrastive nature of human explanations, we use PLMs to complete explanation prompts which contrast alternatives according to the key attribute(s) required to justify the correct answer (for example, peanuts are usually salty while raisins are sweet). Conditioning model decisions on these explanations improves performance on two commonsense reasoning benchmarks, as compared to previous non-contrastive alternatives. These explanations are also judged by humans to be more relevant for solving the task, and they facilitate a novel method for evaluating explanation faithfulness.
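The prompt-completion idea can be sketched quickly. Everything concrete below — the question, the options, the template, and the use of GPT-2 through Hugging Face's text-generation pipeline — is an illustrative stand-in for the paper's actual templates and models, not a reproduction of them.

```python
# Ask a PLM to complete a contrastive explanation template of the form
# "<option A> are usually ... while <option B> are ...", which a scorer
# could then condition on when choosing between the options.
from transformers import pipeline

question = "Which snack would satisfy a craving for something sweet?"
option_a, option_b = "peanuts", "raisins"

prompt = (f"Question: {question}\n"
          f"Explanation: {option_a} are usually")

generator = pipeline("text-generation", model="gpt2")
out = generator(prompt, max_new_tokens=15, num_return_sequences=1)
print(out[0]["generated_text"])
# Hoped-for shape: "... salty while raisins are sweet", after which the
# answer is scored conditioned on the completed contrast.
```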
