Natural language inference requires reasoning about contradictions, negations, and their commonsense implications. Given a simple premise (e.g., I'm mad at you), humans can reason about the varying shades of contradictory statements ranging from straightforward negations (I'm not mad at you) to commonsense contradictions (I'm happy). Moreover, these negated or contradictory statements shift the commonsense implications of the original premise in nontrivial ways. For example, while I'm mad implies I'm unhappy about something, negating the premise (i.e., I'm not mad) does not necessarily negate the corresponding commonsense implications. In this paper, we present the first comprehensive study focusing on commonsense implications of negated statements and contradictions. We introduce ANION, a new commonsense knowledge graph with 624K if-then rules focusing on negated and contradictory events. We then present joint generative and discriminative inference models for this new resource, providing novel empirical insights on how logical negations and commonsense contradictions reshape the commonsense implications of their original premises.
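To make the if-then structure of such a knowledge graph concrete, the sketch below shows one way the rules could be represented and queried in Python. It is a minimal illustration only: the rule format, the relation name xReact (borrowed from ATOMIC-style relations), and the example rules are assumptions for demonstration, not the released ANION data or its official API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IfThenRule:
    """One commonsense if-then rule: a head event, a relation, and an inferred tail."""
    head: str       # premise event, e.g. "PersonX is mad at PersonY"
    relation: str   # ATOMIC-style relation, e.g. "xReact" (how PersonX feels)
    tail: str       # inferred consequence

# A tiny hypothetical slice of rules, invented for illustration.
rules = [
    IfThenRule("PersonX is mad at PersonY", "xReact", "unhappy"),
    IfThenRule("PersonX is not mad at PersonY", "xReact", "calm"),
    IfThenRule("PersonX is happy with PersonY", "xReact", "pleased"),
]

def implications(event: str, relation: str) -> list[str]:
    """Return all tails inferred for a given head event and relation."""
    return [r.tail for r in rules if r.head == event and r.relation == relation]

# Negating the premise does not simply negate its implications:
print(implications("PersonX is mad at PersonY", "xReact"))      # ['unhappy']
print(implications("PersonX is not mad at PersonY", "xReact"))  # ['calm'], not ['not unhappy']
```

The point of the toy query is the abstract's observation that the negated premise carries its own commonsense implications rather than the logical negation of the original ones.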
With the increasing empirical success of distributional models of compositional semantics, it is timely to consider the types of textual logic that such models are capable of capturing. In this paper, we address shortcomings in the ability of current …
Negation is a core construction in natural language. Despite being very successful on many tasks, state-of-the-art pre-trained language models often handle negation incorrectly. To improve language models in this regard, we propose to augment the lan…
Negation in natural language does not follow Boolean logic and is therefore inherently difficult to model. In particular, it takes into account the broader understanding of what is being negated. In previous work, we proposed a framework for negation …
The design and implementation of parallel algorithms is a fundamental task in computer algebra. Combining the computer algebra system Singular and the workflow management system GPI-Space, we have developed an infrastructure for massively parallel co…
BERT has been used for solving commonsense tasks such as CommonsenseQA. While prior research has found that BERT does contain commonsense information to some extent, there has been work showing that pre-trained models can rely on spurious association…