Recently, researchers in answer set programming and constraint programming have invested significant effort in developing hybrid languages and solving algorithms that combine the strengths of these traditionally separate fields. These efforts resulted in a new research area: constraint answer set programming (CASP). CASP languages and systems have proved largely successful at providing efficient solutions to problems involving hybrid reasoning tasks, such as scheduling problems with elements of planning. Yet, the development of CASP systems is difficult, requiring non-trivial expertise in multiple areas. This suggests a need for a study identifying general development principles of hybrid systems. Once these principles and their implications are well understood, the development of hybrid languages and systems may become a routine, well-understood process. As a step in this direction, in this paper we conduct a case study aimed at evaluating various integration schemas of CASP methods.
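The abstract names integration schemas without describing them. As a hedged illustration of the loosest such coupling (often called black-box integration, where an answer set enumerator and a constraint checker communicate only through complete candidate models), here is a toy Python sketch; the machine-scheduling atoms and numbers are hypothetical, and the two functions merely stand in for real ASP and CP solvers rather than reproducing any system evaluated in the paper.

```python
from itertools import product

# Toy stand-in for the ASP side: enumerate candidate models over Boolean atoms.
ATOMS = ("use_machine_a", "use_machine_b")

def candidate_models():
    """Hypothetical stand-in for an ASP solver enumerating answer sets."""
    for values in product((False, True), repeat=len(ATOMS)):
        yield dict(zip(ATOMS, values))

def constraints_hold(model):
    """Hypothetical stand-in for a CP solver checking numeric constraints."""
    # Each selected machine adds processing time; the total must fit a deadline
    # of 60 time units -- a numeric condition that is awkward to express in a
    # purely Boolean encoding.
    total_time = 30 * model["use_machine_a"] + 50 * model["use_machine_b"]
    some_machine = model["use_machine_a"] or model["use_machine_b"]
    return some_machine and total_time <= 60

# Black-box coupling: the enumerator proposes, the checker disposes. Skipping a
# rejected candidate here corresponds, in a real system, to asking the ASP
# solver for its next answer set.
for model in candidate_models():
    if constraints_hold(model):
        print("solution:", model)
```

Tighter schemas would instead let the constraint checker prune partial assignments inside the solver's search, which is one source of the engineering expertise the abstract mentions.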
Black box systems for automated decision making, often based on machine learning over (big) data, map a user's features into a class or a score without exposing the reasons why. This is problematic not only for the lack of transparency, but also for possible biases inherited by the algorithms from human prejudices and collection artifacts hidden in the training data, which may lead to unfair or wrong decisions.
There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on…
Recently, Bonet and Geffner have shown that first-order representations for planning domains can be learned from the structure of the state space without any prior knowledge about the action schemas or domain predicates. For this, the learning problem…
Recent years have witnessed the rise of accurate but obscure decision systems that hide the logic of their internal decision processes from users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts.
Analyzing large-scale, multi-experiment studies requires scientists to test each experimental outcome for statistical significance and then assess the results as a whole. We present Black Box FDR (BB-FDR), an empirical-Bayes method for analyzing multi-experiment studies when many covariates are gathered per experiment.
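Since the abstract stops before the method details, the sketch below shows only the classical Benjamini-Hochberg step-up procedure for false discovery rate control, the standard baseline for this multiple-testing setting; it is not the BB-FDR algorithm, which further learns per-experiment priors from covariates using a black-box model. The simulated p-values are hypothetical.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Boolean mask of discoveries under BH FDR control at level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                      # indices of p-values, ascending
    thresholds = np.arange(1, m + 1) / m * alpha
    below = p[order] <= thresholds             # step-up test: p_(k) <= k*alpha/m
    discoveries = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()         # largest k passing the threshold
        discoveries[order[: k + 1]] = True     # reject the k+1 smallest p-values
    return discoveries

# Hypothetical study: 900 null outcomes (uniform p-values) and 100 non-null
# outcomes whose p-values concentrate near zero.
rng = np.random.default_rng(0)
p = np.concatenate([rng.uniform(size=900), rng.beta(0.1, 5.0, size=100)])
mask = benjamini_hochberg(p, alpha=0.05)
print(f"{mask.sum()} discoveries out of {p.size} outcomes")
```

Where BH applies one global threshold to all outcomes, the empirical-Bayes approach described above adapts the decision to each experiment's covariates, which is what allows it to find more discoveries at the same error level.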