Conditional Random Field (CRF) based neural models are among the most performant methods for solving sequence labeling problems. Despite their great success, CRFs have the shortcoming of occasionally generating illegal sequences of tags, e.g. sequences containing an "I-" tag immediately after an "O" tag, which is forbidden by the underlying BIO tagging scheme. In this work, we propose Masked Conditional Random Field (MCRF), an easy-to-implement variant of CRF that imposes restrictions on candidate paths during both the training and decoding phases. We show that the proposed method thoroughly resolves this issue and brings significant improvement over existing CRF-based models at near-zero additional cost.
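To make the idea concrete, here is a minimal sketch (not the authors' implementation) of how illegal BIO transitions can be masked out of a CRF's transition score matrix so that forbidden paths receive effectively zero probability during training and are never selected by Viterbi decoding. The tag set, function names, and the use of a large negative constant in place of negative infinity are illustrative assumptions.

```python
# Sketch of transition masking for a BIO tag scheme (illustrative, not the paper's code).
import numpy as np

NEG_INF = -1e4  # large negative score standing in for "-infinity"

def build_bio_transition_mask(tags):
    """Return a (num_tags x num_tags) 0/1 mask where mask[i, j] == 1 iff
    the transition from tags[i] to tags[j] is legal under the BIO scheme."""
    n = len(tags)
    mask = np.ones((n, n), dtype=np.float32)
    for i, src in enumerate(tags):
        for j, dst in enumerate(tags):
            if dst.startswith("I-"):
                entity = dst[2:]
                # "I-X" may only follow "B-X" or "I-X"; any other source is illegal.
                if src not in (f"B-{entity}", f"I-{entity}"):
                    mask[i, j] = 0.0
    return mask

def mask_transitions(transitions, mask):
    """Apply the mask: illegal transitions get a very low score, so the
    corresponding paths are suppressed in both training and decoding."""
    return transitions * mask + NEG_INF * (1.0 - mask)

if __name__ == "__main__":
    tags = ["O", "B-PER", "I-PER", "B-LOC", "I-LOC"]
    mask = build_bio_transition_mask(tags)
    transitions = np.random.randn(len(tags), len(tags)).astype(np.float32)
    masked = mask_transitions(transitions, mask)
    # e.g. the O -> I-PER transition is now effectively forbidden:
    print(masked[tags.index("O"), tags.index("I-PER")])  # ~ -1e4
```

In a neural CRF, such a masked transition matrix would replace the unconstrained one in both the partition-function computation and Viterbi decoding, which is how the constraint is enforced at near-zero extra cost.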