Neural abstractive summarization systems have made significant progress in recent years. However, abstractive summarization models often produce inconsistent statements or false facts. How can we automatically generate highly abstractive yet factually correct summaries? In this paper, we propose an efficient weakly supervised adversarial data augmentation approach to construct a factual consistency dataset. Based on this synthetic dataset, we train an evaluation model that not only makes accurate and robust factual consistency judgments, but is also capable of interpretable factual error tracing via the backpropagated gradient distribution over token embeddings. Experiments and analysis conducted on public annotated summarization and factual consistency datasets demonstrate that our approach is effective and reasonable.
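The gradient-based error tracing mentioned above can be illustrated with a short sketch: given a (document, summary) pair and a binary consistency classifier, the gradient of the "inconsistent" logit with respect to each token embedding yields a per-token saliency score that points at the tokens driving the inconsistency decision. The code below is a minimal sketch, not the authors' implementation; the model checkpoint, the label index for "inconsistent", and the `token_saliency` helper are illustrative assumptions, and in practice a classifier fine-tuned on the factual consistency dataset would be used.

```python
# Sketch: trace factual errors by backpropagating the "inconsistent" logit
# to the token embeddings and reading off per-token gradient norms.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-uncased"  # placeholder; a fine-tuned consistency classifier in practice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)
model.eval()

def token_saliency(document: str, summary: str):
    """Return (tokens, saliency), where saliency is the L2 norm of the gradient
    of the 'inconsistent' logit with respect to each input token embedding."""
    enc = tokenizer(document, summary, return_tensors="pt", truncation=True)
    embeddings = model.get_input_embeddings()(enc["input_ids"])
    embeddings.retain_grad()  # keep gradients on this non-leaf tensor

    outputs = model(inputs_embeds=embeddings,
                    attention_mask=enc["attention_mask"],
                    token_type_ids=enc.get("token_type_ids"))
    inconsistent_logit = outputs.logits[0, 1]  # assumption: label 1 = inconsistent
    inconsistent_logit.backward()

    # Tokens with large gradient norms contribute most to the inconsistency score,
    # giving an interpretable trace of the suspected factual error.
    saliency = embeddings.grad.norm(dim=-1).squeeze(0)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    return tokens, saliency

tokens, scores = token_saliency(
    "The company reported a profit of $3 million in 2020.",
    "The company reported a loss of $3 million in 2020.")
for tok, s in sorted(zip(tokens, scores.tolist()), key=lambda x: -x[1])[:5]:
    print(f"{tok:>12s}  {s:.4f}")
```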