This work demonstrates the development process of a machine learning inference architecture that can scale to a large volume of requests. We used a BERT model fine-tuned for emotion analysis, which returns a probability distribution over emotions for a given paragraph. The model was deployed as a gRPC service on Kubernetes, and Apache Spark was used to perform batch inference by calling the service. We encountered performance and concurrency challenges and developed solutions to achieve faster running times. Starting from 200 successful inference requests per minute, we reached as many as 18,000 successful requests per minute with the same batch job resource allocation. As a result, we successfully stored emotion probabilities for 95 million paragraphs within 96 hours.
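To make the architecture concrete, the following is a minimal PySpark sketch of how a batch job can call a gRPC inference service from each partition. It assumes a hypothetical service definition: the generated modules, stub, message, and method names (emotion_pb2, emotion_pb2_grpc, EmotionServiceStub, ParagraphRequest, Predict), the in-cluster service address, and the Parquet paths are illustrative placeholders, not the actual deployment described above.

# A minimal sketch, assuming a hypothetical emotion-analysis gRPC service; not the
# actual code of the deployment described in this work. All service-specific names,
# addresses, and paths below are placeholders.
import grpc
from pyspark.sql import SparkSession

import emotion_pb2        # hypothetical module generated from the service's .proto
import emotion_pb2_grpc   # hypothetical module generated from the service's .proto

SERVICE_ADDR = "emotion-inference.default.svc.cluster.local:50051"  # assumed in-cluster DNS name


def infer_partition(rows):
    # One channel per Spark partition, reused for every row in that partition,
    # so each executor avoids per-request connection setup overhead.
    channel = grpc.insecure_channel(SERVICE_ADDR)
    stub = emotion_pb2_grpc.EmotionServiceStub(channel)
    for row in rows:
        request = emotion_pb2.ParagraphRequest(text=row["paragraph"])
        response = stub.Predict(request, timeout=5.0)
        yield (row["id"], list(response.probabilities))
    channel.close()


if __name__ == "__main__":
    spark = SparkSession.builder.appName("emotion-batch-inference").getOrCreate()
    paragraphs = spark.read.parquet("s3://bucket/paragraphs/")    # placeholder input path
    results = paragraphs.rdd.mapPartitions(infer_partition)
    results.toDF(["id", "emotion_probabilities"]).write.parquet("s3://bucket/emotion_probabilities/")  # placeholder output path

Reusing a single channel per partition and tuning the number of partitions and concurrent requests are the kinds of changes that address the performance and concurrency challenges mentioned above; the exact optimizations behind the improvement from 200 to 18,000 requests per minute are described in the body of this work.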