Word representations enriched with additional linguistic information have been widely studied and shown to outperform traditional embeddings. Current methods mainly focus on learning embeddings for words, while the embeddings of the linguistic information (referred to as grain embeddings) are discarded after learning. This work proposes a framework, field embedding, that jointly learns both word and grain embeddings by incorporating morphological, phonetic, and syntactic linguistic fields. The framework leverages an innovative fine-grained pipeline that integrates multiple linguistic fields and produces high-quality grain sequences for learning superior word representations. A novel algorithm is also designed to learn embeddings for words and grains by capturing information that is contained within each field and shared across fields. Experimental results on lexical tasks and downstream natural language processing tasks show that our framework learns better word embeddings and grain embeddings. Qualitative evaluations show that grain embeddings effectively capture semantic information.
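The abstract does not spell out how word and grain embeddings are combined, so the following is only a minimal, hedged sketch of the general idea: composing a word's representation from its own vector plus vectors for linguistic "grains" (morphological, phonetic, syntactic units), in the spirit of subword-enriched models such as fastText. All names and the averaging choice below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

DIM = 8
rng = np.random.default_rng(0)

# Hypothetical embedding tables (identifiers are illustrative).
word_vectors = {"cat": rng.normal(size=DIM)}
grain_vectors = {
    g: rng.normal(size=DIM)
    for g in ("morph:cat",   # morphological grain
              "phon:kaet",   # phonetic grain
              "syn:NOUN")    # syntactic grain
}

def compose(word, grains):
    """Average the word vector with its grain vectors.

    Averaging is one plausible composition; the paper may use a
    different scheme (e.g. summation or field-wise weighting).
    """
    vecs = [word_vectors[word]] + [grain_vectors[g] for g in grains]
    return np.mean(vecs, axis=0)

v = compose("cat", ["morph:cat", "phon:kaet", "syn:NOUN"])
print(v.shape)  # (8,)
```

Because the grain vectors are retained rather than discarded, they can be evaluated in their own right, which is what the qualitative evaluations in the abstract refer to.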