Temporal commonsense reasoning is a challenging task, as it requires temporal knowledge that is usually not explicit in text. In this work, we propose an ensemble model for temporal commonsense reasoning. Our model relies on pre-trained contextual representations from transformer-based language models (i.e., BERT) and on a variety of training methods for enhancing model generalization: 1) multi-step fine-tuning using carefully selected auxiliary tasks and datasets, and 2) a specifically designed temporal masked language model task aimed at capturing temporal commonsense knowledge. Our model greatly outperforms the standard fine-tuning approach and strong baselines on the MC-TACO dataset.
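As a rough illustration of the data-preparation step behind a temporal masked language model objective: temporal expressions are selectively replaced with a mask token so the model must predict them from context. The trigger-word lexicon and masking scheme below are assumptions for illustration, not the paper's actual implementation.

```python
import re

# Hypothetical lexicon of temporal trigger words (an assumption, not the
# lexicon used in the paper).
TEMPORAL_WORDS = {
    "second", "seconds", "minute", "minutes", "hour", "hours",
    "day", "days", "week", "weeks", "month", "months", "year", "years",
    "daily", "weekly", "monthly", "yearly", "always", "never", "often",
}

MASK = "[MASK]"

def mask_temporal_tokens(sentence: str):
    """Replace temporal words with [MASK]; return (masked text, labels).

    `labels` maps each masked position to the original token, mimicking
    the targets of a masked-LM objective focused on temporal expressions.
    """
    tokens = sentence.split()
    labels = {}
    masked = []
    for i, tok in enumerate(tokens):
        # Strip punctuation for the lexicon lookup only.
        core = re.sub(r"\W", "", tok).lower()
        if core in TEMPORAL_WORDS:
            labels[i] = tok
            masked.append(MASK)
        else:
            masked.append(tok)
    return " ".join(masked), labels

masked, labels = mask_temporal_tokens(
    "The meeting lasted two hours and is held weekly."
)
print(masked)   # The meeting lasted two [MASK] and is held [MASK]
print(labels)   # {4: 'hours', 8: 'weekly.'}
```

In practice the masked sentences and their labels would be fed to a standard MLM training loop (e.g., BERT's), biasing pre-training toward recovering temporal knowledge rather than arbitrary tokens.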