Recently, impressive performance on various natural language understanding tasks has been achieved by explicitly incorporating syntactic and semantic information into pre-trained models such as BERT and RoBERTa. However, this approach depends on problem-specific fine-tuning, and, as is widely noted, BERT-like models exhibit weak performance and are inefficient when applied to unsupervised similarity comparison tasks. Sentence-BERT (SBERT) has been proposed as a general-purpose sentence embedding method, suited to both similarity comparison and downstream tasks. In this work, we show that by incorporating structural information into SBERT, the resulting model outperforms SBERT and previous general sentence encoders on unsupervised semantic textual similarity (STS) datasets and transfer classification tasks.
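To make the efficiency contrast concrete, below is a minimal sketch of SBERT-style unsupervised similarity comparison: each sentence is embedded once, and similarity is a cheap cosine between fixed vectors, rather than re-running the full model on every sentence pair as a BERT cross-encoder would. It assumes the sentence-transformers package and the public all-MiniLM-L6-v2 checkpoint as stand-ins; the structure-augmented model described above is not assumed to be available here.

# A sketch of bi-encoder similarity comparison, not the paper's own model.
# Assumes: pip install sentence-transformers (checkpoint "all-MiniLM-L6-v2"
# is a placeholder for any SBERT-style encoder).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "A man is playing a guitar.",
    "Someone is performing music on a stringed instrument.",
    "The stock market fell sharply today.",
]

# Encode every sentence once; the embeddings can be cached and reused,
# which is what makes unsupervised STS with a bi-encoder efficient.
embeddings = model.encode(sentences, convert_to_tensor=True)

# Pairwise cosine similarities (an n x n matrix for n sentences).
scores = util.cos_sim(embeddings, embeddings)
print(scores)

For n sentences this costs n forward passes plus inexpensive vector operations, whereas comparing all pairs with a cross-encoder costs on the order of n(n-1)/2 forward passes, which is the inefficiency the abstract refers to.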