Sentence ordering is the task of arranging a given bag of sentences so as to maximise the coherence of the overall text. In this work, we propose a simple yet effective training method that improves the capacity of models to capture overall text coherence based on training over pairs of sentences/segments. Experimental results show the superiority of our proposed method in in- and cross-domain settings. The utility of our method is also verified over a multi-document summarisation task.
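The abstract does not specify the model architecture or objective, so the following is only a minimal sketch of what training over sentence pairs for coherence might look like, assuming a BERT-style cross-encoder that predicts whether one sentence should precede another. The model name, pair-construction scheme, and binary cross-entropy loss are illustrative assumptions, not the authors' implementation.

```python
# Sketch: pairwise coherence training for sentence ordering.
# Assumptions (not from the paper): a BERT-style cross-encoder scores
# whether sentence `a` should precede sentence `b`; the encoder name,
# pair construction, and loss are illustrative placeholders.
import itertools
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class PairwiseOrderScorer(nn.Module):
    """Scores the likelihood that sentence `a` precedes sentence `b`."""
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.classifier = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask, token_type_ids=None):
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        cls = out.last_hidden_state[:, 0]        # [CLS] representation
        return self.classifier(cls).squeeze(-1)  # logit: P(a precedes b)

def make_pairs(document):
    """Turn an ordered document into labelled sentence pairs:
    label 1.0 if the pair keeps the original order, 0.0 if swapped."""
    pairs = []
    for i, j in itertools.combinations(range(len(document)), 2):
        pairs.append((document[i], document[j], 1.0))  # original order
        pairs.append((document[j], document[i], 0.0))  # swapped order
    return pairs

# Toy training step on one short document.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PairwiseOrderScorer()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss_fn = nn.BCEWithLogitsLoss()

doc = ["He woke up late.",
       "He missed the bus.",
       "He arrived after the meeting had started."]
first, second, labels = zip(*make_pairs(doc))
batch = tokenizer(list(first), list(second),
                  padding=True, truncation=True, return_tensors="pt")
logits = model(**batch)
loss = loss_fn(logits, torch.tensor(labels))
loss.backward()
optimizer.step()
```

At inference time, such pairwise scores would still need to be aggregated into a full ordering (e.g. by greedy or beam search over permutations); the abstract leaves that step unspecified, so it is omitted here.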