Event detection has long been troubled by the trigger curse: overfitting the trigger harms generalization, while underfitting it hurts detection performance. This problem is even more severe in the few-shot scenario. In this paper, we identify and solve the trigger curse problem in few-shot event detection (FSED) from a causal view. By formulating FSED with a structural causal model (SCM), we find that the trigger is a confounder of the context and the result, which makes previous FSED methods prone to overfitting triggers. To resolve this problem, we propose to intervene on the context via backdoor adjustment during training. Experiments show that our method significantly improves FSED on both the ACE05 and MAVEN datasets.
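For reference, a minimal sketch of the standard backdoor adjustment formula that such an intervention relies on, assuming (as the abstract states) that the trigger T confounds the context X and the prediction Y; the paper's exact instantiation over trigger representations is not given here and the notation below is illustrative:

P(Y \mid do(X = x)) = \sum_{t} P(Y \mid X = x, T = t)\, P(T = t)

Summing over trigger values t weighted by their prior, rather than conditioning on the observed trigger, removes the spurious trigger-label correlation that causes the trigger curse.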