
A Formal Framework to Characterize Interpretability of Procedures

Posted by Amit Dhurandhar
Publication date: 2017
Research field: Informatics engineering
Paper language: English





We provide a novel notion of what it means to be interpretable, looking past the usual association with human understanding. Our key insight is that interpretability is not an absolute concept; we therefore define it relative to a target model, which may or may not be a human. We define a framework that allows interpretable procedures to be compared by linking interpretability to important practical aspects such as accuracy and robustness. We characterize many current state-of-the-art interpretable methods in our framework, demonstrating its general applicability.
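The abstract stays at a high level, so the following minimal Python sketch only illustrates the general idea of reading interpretability relative to a target model and comparing procedures through the improvement they yield and its robustness across datasets. All names, types, and the improvement measure are hypothetical assumptions for illustration, not the paper's formal definitions.

```python
# Hypothetical sketch: a procedure P is treated as more interpretable with respect
# to a target model T the more that conveying P's information to T improves T's
# performance (here, accuracy). The improvement measure and all names are
# illustrative assumptions, not the paper's exact formalism.

from typing import Callable, Sequence, Tuple

Dataset = Sequence[Tuple[list, int]]           # (features, label) pairs
Model = Callable[[list], int]                  # target model: features -> prediction
Procedure = Callable[[Model, Dataset], Model]  # conveys information to the target model


def accuracy(model: Model, data: Dataset) -> float:
    """Fraction of examples the model labels correctly."""
    return sum(model(x) == y for x, y in data) / len(data)


def relative_improvement(procedure: Procedure, target: Model, data: Dataset) -> float:
    """How much the target model improves after receiving the procedure's information.

    Values > 1 mean the procedure helped the target model; this plays the role of
    an interpretability score relative to that target model.
    """
    improved = procedure(target, data)
    return accuracy(improved, data) / max(accuracy(target, data), 1e-12)


def compare_procedures(p1: Procedure, p2: Procedure, target: Model,
                       datasets: Sequence[Dataset]) -> str:
    """Compare two procedures by their worst-case improvement across datasets,
    a crude stand-in for linking interpretability to robustness."""
    worst1 = min(relative_improvement(p1, target, d) for d in datasets)
    worst2 = min(relative_improvement(p2, target, d) for d in datasets)
    return "p1" if worst1 >= worst2 else "p2"
```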


Read also

Managing inputs that are novel, unknown, or out-of-distribution is critical as an agent moves from the lab to the open world. Novelty-related problems include being tolerant to novel perturbations of the normal input, detecting when the input includes novel items, and adapting to novel inputs. While significant research has been undertaken in these areas, a noticeable gap exists in the lack of a formalized definition of novelty that transcends problem domains. As a team of researchers spanning multiple research groups and different domains, we have seen, firsthand, the difficulties that arise from ill-specified novelty problems, as well as inconsistent definitions and terminology. Therefore, we present the first unified framework for formal theories of novelty and use the framework to formally define a family of novelty types. Our framework can be applied across a wide range of domains, from symbolic AI to reinforcement learning, and beyond to open world image recognition. Thus, it can be used to help kick-start new research efforts and accelerate ongoing work on these important novelty-related problems. This extended version of our AAAI 2021 paper includes more details and examples in multiple domains.
Martin Biehl, 2017
This thesis contributes to the formalisation of the notion of an agent within the class of finite multivariate Markov chains. Agents are seen as entities that act, perceive, and are goal-directed. We present a new measure that can be used to identify entities (called $\iota$-entities), some general requirements for entities in multivariate Markov chains, as well as formal definitions of actions and perceptions suitable for such entities. The intuition behind $\iota$-entities is that entities are spatiotemporal patterns for which every part makes every other part more probable. The measure, complete local integration (CLI), is formally investigated in general Bayesian networks. It is based on the specific local integration (SLI), which is measured with respect to a partition. CLI is the minimum value of SLI over all partitions. We prove that $\iota$-entities are blocks in specific partitions of the global trajectory. These partitions are the finest partitions that achieve a given SLI value. We also establish the transformation behaviour of SLI under permutations of nodes in the network. We go on to present three conditions on general definitions of entities. These are not fulfilled by sets of random variables, i.e., the perception-action loop, which is often used to model agents, is too restrictive. We propose that any general entity definition should in effect specify a subset (called an entity-set) of the set of all spatiotemporal patterns of a given multivariate Markov chain. The set of $\iota$-entities is such a set. Importantly, the perception-action loop also induces an entity-set. We then propose formal definitions of actions and perceptions for arbitrary entity-sets. These specialise to standard notions in the case of the perception-action loop entity-set. Finally, we look at some very simple examples.
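The abstract describes CLI only as the minimum of SLI over partitions. The sketch below therefore assumes a simple form of SLI, the log of the joint probability divided by the product of the block marginals, which is positive exactly when the parts make each other more probable; this assumed formula and all function names are illustrative, not taken from the thesis.

```python
# Minimal sketch of complete local integration (CLI) as the minimum of specific
# local integration (SLI) over partitions, as described in the abstract.
# Assumption: SLI of a joint outcome x w.r.t. a partition pi is
# log( p(x) / prod_{block b in pi} p(x_b) ).

import math
from typing import Dict, List, Tuple

JointDist = Dict[Tuple[int, ...], float]   # maps joint outcomes to probabilities


def marginal(p: JointDist, block: Tuple[int, ...], values: Tuple[int, ...]) -> float:
    """Marginal probability that the variables in `block` take `values`."""
    return sum(prob for outcome, prob in p.items()
               if all(outcome[i] == v for i, v in zip(block, values)))


def partitions(items: List[int]):
    """Yield all set partitions of `items` as lists of index tuples."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        # put `first` into each existing block, or into a new block of its own
        for i in range(len(part)):
            yield part[:i] + [tuple(sorted((first,) + part[i]))] + part[i + 1:]
        yield [(first,)] + part


def sli(p: JointDist, x: Tuple[int, ...], part: List[Tuple[int, ...]]) -> float:
    """Specific local integration of outcome x with respect to one partition."""
    denom = 1.0
    for block in part:
        denom *= marginal(p, block, tuple(x[i] for i in block))
    return math.log(p[x] / denom)


def cli(p: JointDist, x: Tuple[int, ...]) -> float:
    """Complete local integration: minimum SLI over all partitions with >= 2 blocks."""
    n = len(x)
    return min(sli(p, x, part) for part in partitions(list(range(n))) if len(part) >= 2)
```

For a toy two-bit joint distribution p = {(0,0): 0.5, (1,1): 0.5}, the only two-block partition gives cli(p, (1,1)) = log 2 > 0 under this assumed formula, matching the intuition that each part makes the other more probable.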
Because word semantics can substantially change across communities and contexts, capturing domain-specific word semantics is an important challenge. Here, we propose SEMAXIS, a simple yet powerful framework to characterize word semantics using many semantic axes in word-vector spaces beyond sentiment. We demonstrate that SEMAXIS can capture nuanced semantic representations in multiple online communities. We also show that, when the sentiment axis is examined, SEMAXIS outperforms the state-of-the-art approaches in building domain-specific sentiment lexicons.
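The abstract does not spell out how a semantic axis is built; the sketch below assumes the common construction in which an axis is the difference between two pole word vectors and a word is scored by its cosine similarity to that axis. The pole words, toy embeddings, and function names are hypothetical.

```python
# Hypothetical sketch of scoring words along a semantic axis in a word-vector space.
# Assumption: axis = difference of two pole vectors; word score = cosine similarity
# between the word vector and the axis.

import numpy as np


def semantic_axis(pos_pole: np.ndarray, neg_pole: np.ndarray) -> np.ndarray:
    """Build an axis pointing from the negative pole toward the positive pole."""
    return pos_pole - neg_pole


def axis_score(word_vec: np.ndarray, axis: np.ndarray) -> float:
    """Cosine similarity between a word vector and the axis; higher means the word
    sits closer to the positive pole."""
    return float(np.dot(word_vec, axis) /
                 (np.linalg.norm(word_vec) * np.linalg.norm(axis)))


# Toy 3-d embeddings for an illustrative "good" vs "bad" sentiment axis.
embeddings = {
    "good": np.array([0.9, 0.1, 0.0]),
    "bad": np.array([-0.8, 0.2, 0.1]),
    "soft": np.array([0.3, 0.7, 0.2]),
}
axis = semantic_axis(embeddings["good"], embeddings["bad"])
print(axis_score(embeddings["soft"], axis))
```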
Guided troubleshooting is an inherent task in the domain of technical support services. When a customer experiences an issue with the functioning of a technical service or a product, an expert user helps guide the customer through a set of steps comprising a troubleshooting procedure. The objective is to identify the source of the problem through a set of diagnostic steps and observations, and arrive at a resolution. Procedures containing these sets of diagnostic steps and observations in response to different problems are common artifacts in the body of technical support documentation. The ability to use machine learning and linguistics to understand and leverage these procedures for applications like intelligent chatbots or robotic process automation is crucial. Existing research on question answering or intelligent chatbots does not look within procedures or understand them in depth. In this paper, we outline a system for mining procedures from technical support documents. We create models for solving important subproblems like extraction of procedures, identifying decision points within procedures, identifying blocks of instructions corresponding to these decision points, and mapping instructions within a decision block. We also release a dataset containing our manual annotations on publicly available support documents, to promote further research on the problem.
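As a rough illustration of the subproblems listed above (extracted procedures, decision points, and the blocks of instructions attached to each decision outcome), here is a hypothetical data-structure sketch; the class and field names are illustrative and not the paper's schema.

```python
# Hypothetical representation of a mined troubleshooting procedure, mirroring the
# subproblems named in the abstract. All names are illustrative assumptions.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Instruction:
    text: str                      # a single diagnostic step or observation


@dataclass
class DecisionPoint:
    question: str                  # e.g. "Does the device power on?"
    branches: Dict[str, List[Instruction]] = field(default_factory=dict)
    # maps a decision outcome ("yes"/"no") to the block of instructions to follow


@dataclass
class Procedure:
    title: str
    steps: List[Instruction] = field(default_factory=list)
    decisions: List[DecisionPoint] = field(default_factory=list)
```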
The explorative mind-map is a dynamic framework that emerges automatically from the input it receives. It is unlike a verificative modeling system, where existing (human) thoughts are placed and connected together. In this regard, explorative mind-maps change their size continuously, remaining adaptive through the connectionist cells inside; they process data input incrementally and offer many ways to interact with the user through an appropriate communication interface. With respect to a cognitively motivated situation like a conversation between partners, mind-maps become interesting as they are able to process stimulating signals whenever they occur. If these signals are close to the system's own understanding of the world, the conversational partner automatically becomes more trusted than if the signals match that knowledge scheme poorly or not at all. In this (position) paper, we therefore motivate explorative mind-maps as a cognitive engine and propose them as a decision-support engine to foster trust.
