Deep learning-based NLP systems can be sensitive to unseen tokens, and high-dimensional inputs make them hard to train, which critically hinders generalization. We introduce an approach that groups input words based on their semantic diversity to simplify the input language representation with low ambiguity. Since semantically diverse words occur in different contexts, we can substitute words with their group labels and still distinguish word meanings from their contexts. We design several algorithms that compute diverse groupings based on random sampling, geometric distances, and entropy maximization, and we prove formal guarantees for the entropy-based algorithms. Experimental results show that our methods help NLP models generalize: they improve accuracy on POS tagging and language modeling tasks and yield significant gains on medium-scale machine translation tasks, up to +6.5 BLEU points. Our source code is available at https://github.com/abdulrafae/dg.
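To make the idea of diverse grouping concrete, the sketch below illustrates one plausible realization of the geometric-distance variant described above. All names (`diverse_grouping`, the toy vocabulary and vectors) are illustrative assumptions, not the authors' actual implementation: similar words are first clustered with a tiny k-means, then each cluster is dealt out round-robin across groups, so words sharing a group label come from different neighborhoods and remain distinguishable by context.

```python
import numpy as np

def diverse_grouping(vocab, vectors, n_groups, n_iters=10):
    """Hypothetical sketch of a geometric (distance-based) diverse grouping.

    Step 1: cluster the vocabulary into neighborhoods of SIMILAR words
    (tiny k-means with farthest-point initialization).
    Step 2: deal each cluster's members out round-robin, so that words
    sharing a group label come from different neighborhoods and are
    therefore semantically diverse.
    """
    X = np.asarray(vectors, dtype=float)
    n_clusters = -(-len(vocab) // n_groups)  # ceiling division

    # Farthest-point initialization: spread the initial centroids out.
    centers = [0]
    for _ in range(n_clusters - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[centers], axis=2), axis=1)
        centers.append(int(np.argmax(d)))
    C = X[centers]

    # A few Lloyd iterations of k-means.
    for _ in range(n_iters):
        assign = np.argmin(np.linalg.norm(X[:, None] - C[None], axis=2), axis=1)
        for j in range(n_clusters):
            if np.any(assign == j):
                C[j] = X[assign == j].mean(axis=0)

    # Round-robin: the r-th closest word in each cluster joins group r % n_groups.
    groups = {}
    for j in range(n_clusters):
        members = np.where(assign == j)[0]
        order = members[np.argsort(np.linalg.norm(X[members] - C[j], axis=1))]
        for rank, idx in enumerate(order):
            groups[vocab[idx]] = rank % n_groups
    return groups
```

Replacing each input word with a group token (e.g. `G0`, `G1`, ...) before training would then shrink the effective vocabulary while leaving disambiguation to the surrounding context.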