In Named Entity Recognition (NER), the performance of pre-trained language models has been overestimated because they exploit dataset biases to solve current benchmark datasets. However, these biases hinder the generalizability needed for real-world situations such as weak name regularity and many unseen mentions. To reduce reliance on dataset biases and make the models fully exploit the data, we propose a debiasing method in which the bias-only model is replaced with Pointwise Mutual Information (PMI), enhancing generalization while also improving in-domain performance. Our approach debiases highly correlated words and labels in the benchmark datasets, reflects informative statistics via subword frequency, and alleviates the class imbalance between positive and negative examples. For long-named and complex-structured entities, our method can predict them by debiasing on conjunctions or special characters. Extensive experiments on both the general and biomedical domains demonstrate the effectiveness and generalization capability of PMI.
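The abstract does not give the exact formulation, but the core quantity is token–label PMI, which flags words whose strong correlation with an entity label is a likely dataset bias. A minimal sketch (the corpus format and smoothing choices here are illustrative assumptions, not the paper's implementation):

```python
import math
from collections import Counter

def pmi_scores(tokens, labels):
    """Compute PMI(w, l) = log( p(w, l) / (p(w) * p(l)) ) for every
    token-label pair seen in an aligned (tokens, labels) corpus.
    A high PMI marks a word strongly associated with a label,
    i.e. a candidate dataset bias to debias against."""
    n = len(tokens)
    w_counts = Counter(tokens)
    l_counts = Counter(labels)
    wl_counts = Counter(zip(tokens, labels))
    scores = {}
    for (w, l), c in wl_counts.items():
        p_wl = c / n
        p_w = w_counts[w] / n
        p_l = l_counts[l] / n
        scores[(w, l)] = math.log(p_wl / (p_w * p_l))
    return scores
```

The same counting can be run over subword units instead of whole tokens to reflect the subword-frequency statistics the abstract mentions.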
We take a step towards addressing the under-representation of the African continent in NLP research by creating the first large publicly available high-quality dataset for named entity recognition (NER) in ten African languages, bringing together …
Cross-domain named entity recognition (NER) models are able to cope with the scarcity of NER samples in target domains. However, most existing NER benchmarks lack domain-specialized entity types or do not focus on a specific domain, leading …
Recognizing named entities (NEs) is commonly formulated as a classification problem that predicts a class tag for an NE candidate in a sentence. In shallow structures, categorized features are weighted to support the prediction. Recent developments in …
We study learning named entity recognizers in the presence of missing entity annotations. We approach this setting as tagging with latent variables and propose a novel loss, the Expected Entity Ratio, to learn models in the presence of systematically …
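The abstract names but does not define the loss. One plausible reading, sketched in plain Python, is a penalty that keeps the model's expected fraction of entity tokens inside a band around a prior ratio (target_ratio and margin are illustrative hyperparameters, not values from the paper):

```python
def expected_entity_ratio_loss(entity_probs, target_ratio=0.1, margin=0.05):
    """Hedged sketch: entity_probs holds the model's per-token probability
    of belonging to any entity. The loss is zero while the mean probability
    stays within [target_ratio - margin, target_ratio + margin], and grows
    linearly as it drifts outside that band."""
    ratio = sum(entity_probs) / len(entity_probs)
    low, high = target_ratio - margin, target_ratio + margin
    return max(0.0, low - ratio) + max(0.0, ratio - high)
```

In the missing-annotation setting, such a constraint discourages the degenerate solution of tagging everything as O just because many true entities are unlabeled.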
Named entity recognition (NER) is a well-studied task in natural language processing. However, the widely used sequence labeling framework struggles to detect entities with nested structures. In this work, we view nested NER as constituency parsing …
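As a small illustration of the limitation motivating this line of work (not the paper's method): one-tag-per-token schemes such as BIO assign a single label to each token, so two spans that nest or overlap cannot both be encoded.

```python
def bio_encode(n_tokens, entities):
    """Encode entity spans (start, end, label), end exclusive, as BIO tags.
    Raises ValueError on overlapping or nested spans, showing why flat
    sequence labeling cannot represent nested entities."""
    tags = ["O"] * n_tokens
    for start, end, label in entities:
        if any(t != "O" for t in tags[start:end]):
            raise ValueError(f"nested/overlapping span {start}:{end} ({label})")
        tags[start] = f"B-{label}"
        for i in range(start + 1, end):
            tags[i] = f"I-{label}"
    return tags
```

Span-based or parsing-based formulations sidestep this by scoring spans directly, so a nested pair like an organization containing a location is representable.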