Topic models are useful tools for analyzing and interpreting the main underlying themes of large corpora of text. Most topic models rely on word co-occurrence for computing a topic, i.e., a weighted set of words that together represent a high-level semantic concept. In this paper, we propose a new light-weight Self-Supervised Neural Topic Model (SNTM) that captures rich context by jointly learning a topic representation from a triple of co-occurring words and the document the triple originates from. Our experimental results indicate that our proposed neural topic model, SNTM, outperforms existing topic models in coherence metrics as well as document clustering accuracy. Moreover, beyond topic coherence and clustering performance, the proposed neural topic model has a number of advantages, namely being computationally efficient and easy to train.
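The abstract does not spell out the architecture, but the core self-supervised signal can be illustrated with a minimal sketch: a triple of co-occurring words and the bag-of-words of its source document are each mapped to a topic distribution, and the two distributions are encouraged to agree. The class name `SNTMSketch`, the layer sizes, the averaging of the three word vectors, and the KL-divergence objective below are all illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch of the self-supervised objective described above.
# Assumptions (not from the paper): word vectors of the triple are averaged,
# the document is encoded from its bag-of-words, and agreement between the
# two K-dimensional topic distributions is enforced with a KL loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SNTMSketch(nn.Module):
    def __init__(self, vocab_size: int, num_topics: int = 50, embed_dim: int = 128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, embed_dim)     # shared word embeddings
        self.word_to_topic = nn.Linear(embed_dim, num_topics)   # word-side topic head
        self.doc_to_topic = nn.Linear(vocab_size, num_topics)   # document (bag-of-words) encoder

    def forward(self, triple_ids, doc_bow):
        # triple_ids: (batch, 3) indices of three co-occurring words
        # doc_bow:    (batch, vocab_size) bag-of-words counts of the source document
        triple_vec = self.word_emb(triple_ids).mean(dim=1)       # pool the three word vectors
        word_topics = F.log_softmax(self.word_to_topic(triple_vec), dim=-1)
        doc_topics = F.softmax(self.doc_to_topic(doc_bow), dim=-1)
        # self-supervised loss: the triple's topic distribution should match
        # the topic distribution of the document it was sampled from
        return F.kl_div(word_topics, doc_topics, reduction="batchmean")

# toy usage with random data
vocab_size = 2000
model = SNTMSketch(vocab_size)
triples = torch.randint(0, vocab_size, (8, 3))
docs = torch.rand(8, vocab_size)
loss = model(triples, docs)
loss.backward()
print(loss.item())
```

Under this reading, the document acts as a weak supervisory signal for the word triple, which is consistent with the abstract's claim that the model is light-weight and easy to train, since no external labels are required.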