The formalization of action and obligation using logical languages is a topic of increasing relevance in the field of ethics for AI. Having an expressive syntactic and semantic framework to reason about agents' decisions in moral situations allows for unequivocal representations of the components of behavior that are relevant when assigning blame (or praise) for outcomes to said agents. Two very important such components are belief and belief-based action. In this work we present a logic of doxastic oughts, obtained by extending epistemic deontic stit theory with beliefs. On the one hand, the semantics for formulas involving belief operators is based on probability measures. On the other, the semantics for doxastic oughts relies on a notion of optimality, where the underlying choice rule is maximization of expected utility. We introduce an axiom system for the resulting logic and establish its soundness, completeness, and decidability. These results are significant for the line of research that aims to use proof systems of epistemic, doxastic, and deontic logics to help test the ethical behavior of AI through theorem proving and model checking.
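As a schematic illustration of the underlying choice rule (notation ours, not necessarily the paper's; on a finite model, with $\mu$ the agent's subjective probability measure over histories and $u$ a utility function): an action $K$ available to agent $\alpha$ at moment $m$ is optimal when it maximizes expected utility,
\[
\mathrm{EU}(K) \;=\; \sum_{h \in K} \mu(h)\, u(h),
\qquad
\mathrm{Optimal}^{m}_{\alpha} \;=\; \bigl\{\, K \in \mathit{Choice}^{m}_{\alpha} \;\bigm|\; \mathrm{EU}(K) \geq \mathrm{EU}(K') \text{ for all } K' \in \mathit{Choice}^{m}_{\alpha} \,\bigr\},
\]
and a doxastic ought to see to it that $\varphi$ then holds when every expected-utility-maximal action guarantees $\varphi$.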
We consider the pressing question of how to model, verify, and ensure that autonomous systems meet certain obligations (like the obligation to respect traffic laws) and refrain from impermissible behavior (like recklessly changing lanes). Temporal logics are heavily used in autonomous system design; however, as we illustrate here, temporal (alethic) logics alone are inappropriate for reasoning about the obligations of autonomous systems. This paper proposes the use of Dominance Act Utilitarianism (DAU), a deontic logic of agency, to encode and reason about the obligations of autonomous systems. We use DAU to analyze Intel's Responsibility-Sensitive Safety (RSS) proposal as a real-world case study. We demonstrate that DAU can express well-posed RSS rules, formally derive undesirable consequences of these rules, illustrate how DAU could help design systems with specific obligations, and show how to model-check DAU obligations.
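To make the contrast concrete, here is a schematic example in Horty-style stit notation (the $\mathit{SafeDistance}$ proposition is our illustrative stand-in for an RSS rule, not a formula from the paper). A temporal safety specification such as $\mathsf{G}\,\mathit{SafeDistance}$ only asserts that a safe distance is in fact always kept along the current execution, whereas the DAU obligation
\[
\odot[\alpha\ \mathit{cstit}{:}\ \mathit{SafeDistance}]
\]
asserts that every dominance-optimal action available to agent $\alpha$ guarantees $\mathit{SafeDistance}$; that is, it speaks about what the agent ought to see to, not merely about what happens.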
In this paper, we explore whether, and how, free choice permission (FCP) can be accepted when we consider deontic conflicts between certain types of permissions and obligations. As is well known, FCP can license, under some minimal conditions, the derivation of an indefinite number of permissions. We discuss this and other drawbacks and present six Hilbert-style classical deontic systems admitting a guarded version of FCP. The systems we present are not too weak from the inferential viewpoint as far as permission is concerned, and they do not commit us to weakening any specific logic for obligations.
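The drawback just mentioned can be seen in the following well-known derivation, where $P$ is the permission operator, FCP is the schema $P(\varphi \vee \psi) \rightarrow (P\varphi \wedge P\psi)$, and $\psi$ is arbitrary:
\[
\begin{array}{lll}
1. & P\varphi & \text{premise}\\
2. & \varphi \leftrightarrow \bigl((\varphi \wedge \psi) \vee (\varphi \wedge \neg\psi)\bigr) & \text{classical tautology}\\
3. & P\bigl((\varphi \wedge \psi) \vee (\varphi \wedge \neg\psi)\bigr) & \text{1, 2, closure of } P \text{ under equivalence}\\
4. & P(\varphi \wedge \psi) & \text{3, FCP}
\end{array}
\]
Since $\psi$ is arbitrary, a single permission such as "you may smoke" licenses, for instance, "you may smoke and litter".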
We propose two alternatives to Xu's axiomatization of the Chellas STIT. The first also provides an alternative axiomatization of the deliberative STIT. The second starts from the idea that the historic necessity operator can be defined as an abbreviation of operators of agency, and can thus be eliminated from the logic of the Chellas STIT. The second axiomatization also allows us to establish that the problem of deciding the satisfiability of a STIT formula without temporal operators is NP-complete in the single-agent case and NEXPTIME-complete in the multiagent case, for both the deliberative and the Chellas STIT.
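For reference, the standard ingredients of Xu-style axiomatizations are the following (our paraphrase; $\Box$ is historic necessity, $[i]$ the Chellas STIT operator of agent $i$, and $\Diamond$ the dual of $\Box$): S5 axioms for $\Box$ and for each $[i]$, the inclusion axiom $\Box\varphi \rightarrow [i]\varphi$, and the independence-of-agents schema
\[
(\Diamond[1]\varphi_1 \wedge \cdots \wedge \Diamond[n]\varphi_n) \rightarrow \Diamond([1]\varphi_1 \wedge \cdots \wedge [n]\varphi_n),
\]
with the deliberative STIT definable as $[i\ \mathit{dstit}{:}\ \varphi] := [i]\varphi \wedge \neg\Box\varphi$.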
This paper is concerned with the first-order paraconsistent logic LPQ$^{\supset,\mathsf{F}}$. A sequent-style natural deduction proof system for this logic is presented and, for this proof system, both a model-theoretic justification and a logical justification by means of an embedding into first-order classical logic are given. No natural deduction proof system is currently available in the literature for any logic that is essentially the same as LPQ$^{\supset,\mathsf{F}}$. The given embedding provides both a classical-logic explanation of this logic and a logical justification of its proof system. The major properties of LPQ$^{\supset,\mathsf{F}}$ are also treated.
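For a feel of the propositional core, here is a minimal executable sketch of LP-style three-valued semantics extended with a detachable implication, assuming the standard CLuNs/J3-style truth tables (the paper's exact definitions may differ); it checks that explosion fails while modus ponens for the added implication holds.

```python
# Truth values: 't' (true), 'b' (both true and false), 'f' (false).
# Designated values (those that count as "holding"): 't' and 'b'.
DESIGNATED = {'t', 'b'}
ORDER = {'f': 0, 'b': 1, 't': 2}  # truth order f < b < t

def neg(x):
    # LP negation: swaps 't' and 'f', fixes 'b'
    return {'t': 'f', 'b': 'b', 'f': 't'}[x]

def conj(x, y):
    # conjunction as meet in the truth order
    return min(x, y, key=ORDER.get)

def disj(x, y):
    # disjunction as join in the truth order
    return max(x, y, key=ORDER.get)

def impl(x, y):
    # detachable implication: 't' when the antecedent is undesignated,
    # otherwise the value of the consequent
    return 't' if x == 'f' else y

vals = ['t', 'b', 'f']

# Paraconsistency: phi and neg(phi) can both be designated while psi is
# not, so "phi, not-phi, therefore psi" (explosion) fails.
phi, psi = 'b', 'f'
assert conj(phi, neg(phi)) in DESIGNATED and psi not in DESIGNATED

# Excluded middle remains LP-valid: x or not-x is always designated.
assert all(disj(x, neg(x)) in DESIGNATED for x in vals)

# Modus ponens for the added implication holds: whenever x and
# impl(x, y) are designated, so is y.
assert all(y in DESIGNATED
           for x in vals for y in vals
           if x in DESIGNATED and impl(x, y) in DESIGNATED)

print("explosion fails; modus ponens for the implication holds")
```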
In this paper we introduce a computational-level model of theory of mind (ToM) based on dynamic epistemic logic (DEL), and we analyze its computational complexity. The model is a special case of DEL model checking. We provide a parameterized complexity analysis, considering several aspects of DEL (e.g., the number of agents and the size of preconditions) as parameters. We show that model checking for DEL is PSPACE-hard, even when restricted to single-pointed models and S5 relations, thereby solving an open problem in the literature. Our approach is aimed at formalizing current intractability claims in the cognitive science literature regarding computational models of ToM.
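To fix intuitions about what is being model-checked, here is a minimal executable sketch of the static core of DEL on a pointed Kripke model, with a public-announcement update as the simplest dynamic operator (the encodings and names are ours and purely illustrative; the paper's event models are more general).

```python
# A model is (worlds, R, V): a set of worlds, an accessibility relation
# per agent (equivalence relations in the S5 case), and a valuation
# mapping each world to the set of atoms true there.
worlds = {'w1', 'w2', 'w3'}
R = {'a': {('w1', 'w1'), ('w1', 'w2'), ('w2', 'w1'),
           ('w2', 'w2'), ('w3', 'w3')},
     'b': {(w, w) for w in worlds}}
V = {'w1': {'p'}, 'w2': set(), 'w3': {'p'}}

# Formulas as nested tuples: ('atom', p) | ('not', f) | ('and', f, g)
# | ('K', agent, f) | ('announce', f, g)  -- "after announcing f, g".
def holds(model, w, f):
    worlds, R, V = model
    op = f[0]
    if op == 'atom':
        return f[1] in V[w]
    if op == 'not':
        return not holds(model, w, f[1])
    if op == 'and':
        return holds(model, w, f[1]) and holds(model, w, f[2])
    if op == 'K':  # agent i knows g: g holds in every accessible world
        _, i, g = f
        return all(holds(model, v, g) for (u, v) in R[i] if u == w)
    if op == 'announce':  # restrict the model to worlds satisfying g
        _, g, h = f
        kept = {v for v in worlds if holds(model, v, g)}
        sub = (kept,
               {i: {(u, v) for (u, v) in R[i]
                    if u in kept and v in kept} for i in R},
               {v: V[v] for v in kept})
        return (w not in kept) or holds(sub, w, h)
    raise ValueError(f)

model = (worlds, R, V)
# Agent a does not know p at w1, but does after p is truthfully announced.
assert not holds(model, 'w1', ('K', 'a', ('atom', 'p')))
assert holds(model, 'w1',
             ('announce', ('atom', 'p'), ('K', 'a', ('atom', 'p'))))
print("ok")
```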