The WiC task has attracted considerable attention in the NLP community, as demonstrated by the popularity of the recent MCL-WiC SemEval task. WSD systems and lexical resources have been used for the WiC task, as well as for WiC dataset construction. TSV is another task related to both WiC and WSD. We aim to establish the exact relationship between WiC, TSV, and WSD. We demonstrate that these semantic classification problems can be pairwise reduced to each other, and are therefore theoretically equivalent. We analyze the existing WiC datasets to validate this equivalence hypothesis. We conclude that our understanding of semantic tasks can be increased through the application of tools from theoretical computer science. Our findings also suggest that more efficient and simpler methods for one of these tasks could be successfully applied to the other two.
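As a concrete illustration of one direction of such a reduction (a minimal sketch, not the paper's formal construction), the code below decides a WiC instance by querying a black-box WSD system on the target word in each context and comparing the predicted senses; the wsd oracle interface and the toy_wsd stand-in are hypothetical.

    from typing import Callable

    def wic_via_wsd(target: str, context_a: str, context_b: str,
                    wsd: Callable[[str, str], str]) -> bool:
        """Decide a WiC instance: True iff the WSD oracle assigns the target
        word the same sense in both contexts."""
        return wsd(target, context_a) == wsd(target, context_b)

    # Toy oracle keyed on a crude lexical cue, just to exercise the reduction.
    def toy_wsd(word: str, context: str) -> str:
        return f"{word}%finance" if "money" in context else f"{word}%river"

    print(wic_via_wsd("bank",
                      "She deposited the money at the bank.",
                      "They strolled along the bank of the river.",
                      toy_wsd))  # False: different senses

The reverse directions (e.g., building a WSD or TSV decision procedure from a WiC classifier) follow the same pattern of wrapping one task's oracle inside the other's decision rule.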
This paper describes the LIAAD system that was ranked second place in the Word-in-Context challenge (WiC) featured in SemDeep-5. Our solution is based on a novel system for Word Sense Disambiguation (WSD) using contextual embeddings and full-inventory sense embeddings.
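The following is a generic sketch of this family of approaches, not the LIAAD system itself: disambiguation framed as nearest-neighbor matching between a target word's contextual embedding and precomputed sense embeddings. The function names and toy vectors are illustrative assumptions.

    from typing import Dict
    import numpy as np

    def disambiguate(context_vec: np.ndarray,
                     sense_embeddings: Dict[str, np.ndarray]) -> str:
        """Return the sense whose embedding is most cosine-similar to the
        target word's contextual embedding."""
        def cosine(u, v):
            return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
        return max(sense_embeddings,
                   key=lambda s: cosine(context_vec, sense_embeddings[s]))

    # Toy usage: random vectors stand in for real contextual/sense embeddings.
    rng = np.random.default_rng(0)
    senses = {"bank%finance": rng.normal(size=8), "bank%river": rng.normal(size=8)}
    ctx = senses["bank%river"] + 0.1 * rng.normal(size=8)
    print(disambiguate(ctx, senses))  # expected: bank%river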
Semantic parsing maps natural language (NL) utterances into logical forms (LFs), which underpins many advanced NLP problems. Semantic parsers gain performance boosts from deep neural networks, but also inherit their vulnerability to adversarial examples.
The Winograd Schema Challenge (WSC) and variants inspired by it have become important benchmarks for common-sense reasoning (CSR). Model performance on the WSC has quickly progressed from chance level to near-human using neural language models trained on massive corpora.
This paper introduces a novel method for the representation of images that is semantic by nature, addressing the question of computational intelligibility in computer vision tasks. More specifically, our proposition is to introduce what we call a semantic bottleneck.
Many pairwise classification tasks, such as paraphrase detection and open-domain question answering, naturally have extreme label imbalance (e.g., 99.99% of examples are negatives). In contrast, many recent datasets heuristically choose examples to ensure label balance.