Large-scale pretrained transformer models have demonstrated state-of-the-art (SOTA) performance in a variety of NLP tasks. Nowadays, numerous pretrained models are available in different model flavors and different languages, and can be easily adapted to one's downstream task. However, only a limited number of models are available for dialogue tasks, and in particular, goal-oriented dialogue tasks. In addition, the available pretrained models are trained on general-domain language, creating a mismatch between the pretraining language and the downstream domain language. In this contribution, we present CS-BERT, a BERT model pretrained on millions of dialogues in the customer service domain. We evaluate CS-BERT on several downstream customer service dialogue tasks, and demonstrate that our in-domain pretraining is advantageous compared to other pretrained models in both zero-shot and finetuning experiments, especially in a low-resource data setting.
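The abstract does not spell out the pretraining procedure, but BERT-style pretraining is built on the masked language modeling (MLM) objective: roughly 15% of input positions are selected, and of those, 80% are replaced with `[MASK]`, 10% with a random token, and 10% left unchanged. The sketch below illustrates that masking scheme in plain Python; the function name, example sentence, and toy vocabulary are illustrative assumptions, not part of CS-BERT itself.

```python
import random

def mask_tokens(tokens, vocab, mask_prob=0.15, seed=0):
    """BERT-style MLM masking (illustrative sketch).

    Selects each position independently with probability mask_prob;
    of the selected positions, 80% become "[MASK]", 10% become a
    random vocabulary token, and 10% keep the original token.
    Returns (masked_tokens, labels), where labels[i] holds the
    original token at selected positions and None elsewhere.
    """
    rng = random.Random(seed)
    masked = list(tokens)
    labels = [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok          # model must predict this token
            r = rng.random()
            if r < 0.8:
                masked[i] = "[MASK]"           # 80%: mask out
            elif r < 0.9:
                masked[i] = rng.choice(vocab)  # 10%: random token
            # else: 10%: leave the token unchanged
    return masked, labels

# Toy customer-service utterance (hypothetical example)
tokens = "my package arrived damaged can you help".split()
masked, labels = mask_tokens(tokens, vocab=tokens, mask_prob=0.3, seed=1)
```

During pretraining, the model is trained to recover the tokens recorded in `labels` from the corrupted `masked` sequence; in-domain pretraining, as in CS-BERT, simply runs this objective over domain text (here, customer service dialogues) rather than general-domain corpora.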