Most question answering tasks focus on predicting concrete answers, e.g., named entities. These tasks can normally be accomplished by understanding the context, with no additional information required. The Reading Comprehension of Abstract Meaning (ReCAM) task instead introduces abstract answers. To understand abstract meanings in context, additional knowledge is essential. In this paper, we propose an approach that leverages pre-trained BERT token embeddings as a prior knowledge resource. According to the results, our approach using pre-trained BERT outperformed the baselines. This shows that pre-trained BERT token embeddings can serve as additional knowledge for understanding abstract meanings in question answering.
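As a rough illustration of the idea, candidate abstract answers can be scored against the passage by similarity in a pre-trained embedding space. The sketch below is a toy stand-in only: it uses a tiny hand-made embedding table in place of BERT's actual token-embedding matrix, and the function name, words, and vectors are illustrative assumptions, not the paper's actual method.

```python
import numpy as np

# Toy stand-in for a pre-trained token-embedding table. In the paper's
# setting this role is played by BERT's input token embeddings; the
# words and 3-dimensional vectors here are purely illustrative.
EMB = {
    "joy":     np.array([0.9, 0.1, 0.0]),
    "happy":   np.array([0.8, 0.2, 0.1]),
    "table":   np.array([0.0, 0.1, 0.9]),
    "freedom": np.array([0.7, 0.0, 0.3]),
}

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def pick_abstract_answer(context_words, candidates):
    """Score each candidate answer by its average embedding similarity
    to the context words, and return the best-scoring candidate."""
    scores = {}
    for cand in candidates:
        sims = [cosine(EMB[cand], EMB[w]) for w in context_words]
        scores[cand] = sum(sims) / len(sims)
    return max(scores, key=scores.get)

# An abstract word ("freedom") is closer to an emotive context than a
# concrete distractor ("table") in this toy space.
print(pick_abstract_answer(["happy", "joy"], ["freedom", "table"]))  # → freedom
```

In the actual ReCAM setup the candidates are abstract words missing from a statement, and the embeddings come from a pre-trained model rather than a fixed lookup table.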