While pre-trained language models (PTLMs) have achieved noticeable success on many NLP tasks, they still struggle with tasks that require event temporal reasoning, which is essential for event-centric applications. We present a continual pre-training approach that equips PTLMs with targeted knowledge about event temporal relations. We design self-supervised learning objectives to recover masked-out event and temporal indicators and to discriminate sentences from their corrupted counterparts (where event or temporal indicators were replaced). By further pre-training a PTLM with these objectives jointly, we reinforce its attention to event and temporal information, yielding enhanced capability on event temporal reasoning. This **E**ffective **CON**tinual pre-training framework for **E**vent **T**emporal reasoning (ECONET) improves the PTLMs' fine-tuning performance across five relation extraction and question answering tasks and achieves new or on-par state-of-the-art results on most of our downstream tasks.
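To make the joint objective concrete, here is a minimal sketch (not the authors' released code) of the two losses the abstract describes: (i) recovering masked event / temporal-indicator tokens with a token-level head, and (ii) discriminating original sentences from corrupted ones where such tokens were replaced. The toy encoder, the class name `TemporalMaskedModel`, the `alpha` weighting, and the label conventions are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of ECONET-style joint pre-training objectives.
# Assumptions: a small Transformer encoder stands in for the PTLM;
# mlm_labels use -100 for non-masked positions; corrupt_labels mark
# whether event/temporal indicators in the sentence were replaced.
import torch
import torch.nn as nn

class TemporalMaskedModel(nn.Module):
    """Toy encoder with two heads: token recovery and sentence discrimination."""
    def __init__(self, vocab_size=30522, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.mlm_head = nn.Linear(hidden, vocab_size)   # recovers masked tokens
        self.disc_head = nn.Linear(hidden, 2)           # original vs. corrupted

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))
        return self.mlm_head(h), self.disc_head(h[:, 0])  # first-token pooling

def joint_loss(model, input_ids, mlm_labels, corrupt_labels, alpha=1.0):
    """Targeted masked-recovery loss plus corruption-discrimination loss."""
    mlm_logits, disc_logits = model(input_ids)
    mlm_loss = nn.functional.cross_entropy(
        mlm_logits.view(-1, mlm_logits.size(-1)),
        mlm_labels.view(-1),
        ignore_index=-100,   # only masked event/temporal positions contribute
    )
    disc_loss = nn.functional.cross_entropy(disc_logits, corrupt_labels)
    return mlm_loss + alpha * disc_loss

# Tiny smoke test with random token ids: mask one position per sentence,
# mark the second sentence as corrupted.
model = TemporalMaskedModel()
ids = torch.randint(0, 30522, (2, 16))
mlm_labels = torch.full((2, 16), -100)
mlm_labels[:, 3] = ids[:, 3]
corrupt_labels = torch.tensor([0, 1])
print(joint_loss(model, ids, mlm_labels, corrupt_labels))
```

In an actual continual pre-training setup, the toy encoder would be replaced by the pre-trained model being further trained, and the masked positions would be chosen from a lexicon of event triggers and temporal indicators rather than at random.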