Automatically extracting interpersonal relationships of conversation interlocutors can enrich personal knowledge bases to enhance personalized search, recommenders, and chatbots. To infer speakers' relationships from dialogues, we propose PRIDE, a neural multi-label classifier based on BERT and a Transformer for creating a conversation representation. PRIDE utilizes dialogue structure and augments it with external knowledge about speaker features and conversation style. Unlike prior works, we address multi-label prediction of fine-grained relationships. We release large-scale datasets, based on screenplays of movies and TV shows, with directed relationships of conversation participants. Extensive experiments on both datasets show the superior performance of PRIDE compared to state-of-the-art baselines.
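To make the described architecture concrete, the sketch below shows one plausible way to wire a BERT utterance encoder, a small Transformer encoder over utterance embeddings, and a sigmoid multi-label head. This is a minimal illustration under assumed hyperparameters (class name, number of layers, pooling choice, label count), not the authors' released implementation of PRIDE.

```python
# Minimal sketch of a multi-label relationship classifier in the spirit of PRIDE:
# BERT encodes each utterance, a Transformer encoder aggregates utterance
# embeddings into a conversation representation, and a sigmoid head scores
# relationship labels independently. All names and settings are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class ConversationRelationClassifier(nn.Module):
    def __init__(self, num_labels: int, bert_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)               # per-utterance encoder
        hidden = self.bert.config.hidden_size
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=8, batch_first=True)
        self.conv_encoder = nn.TransformerEncoder(layer, num_layers=2)  # aggregates utterances
        self.classifier = nn.Linear(hidden, num_labels)                 # one logit per relationship

    def forward(self, encoded_utterances):
        # encoded_utterances: list of tokenized utterances from one conversation
        utter_embs = []
        for enc in encoded_utterances:
            out = self.bert(**enc)
            utter_embs.append(out.last_hidden_state[:, 0])             # [CLS] embedding
        seq = torch.stack(utter_embs, dim=1)                           # (1, num_utterances, hidden)
        conv = self.conv_encoder(seq).mean(dim=1)                      # pooled conversation vector
        return torch.sigmoid(self.classifier(conv))                    # independent label probabilities

# Usage: encode a short dialogue and predict multi-label relationship scores.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dialogue = ["How was school today?", "Fine, Dad. Can I go to Emma's place later?"]
encoded = [tokenizer(u, return_tensors="pt") for u in dialogue]
model = ConversationRelationClassifier(num_labels=12)  # 12 is a placeholder label-set size
scores = model(encoded)  # e.g. threshold at 0.5 to obtain the predicted relationship set
```

A multi-label head with a sigmoid per label (rather than a softmax) is what allows a speaker pair to be assigned several fine-grained relationships at once, which is the prediction setting the abstract describes.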