We present an extended version of a tool developed for calculating linguistic distances and asymmetries in the auditory perception of closely related languages. Along with evaluating the metrics available in the initial version of the tool, we introduce word adaptation entropy as an additional metric of linguistic asymmetry. Potential predictors of speech intelligibility are validated against human performance in spoken cognate recognition experiments for Bulgarian and Russian. Special attention is paid to the possibly different contributions of vowels and consonants to oral intercomprehension. Using incom.py 2.0, it is possible to calculate, visualize, and validate three measurement methods of linguistic distances and asymmetries, as well as to carry out regression analyses of speech intelligibility between related languages.
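The word adaptation entropy mentioned above can be illustrated with a minimal sketch. The exact formulation used in incom.py 2.0 is not given here, so the code below is an assumption: it treats the metric as the average Shannon entropy of the target-character choices available for each source character, computed over aligned character correspondences extracted from cognate pairs. The function name `adaptation_entropy` and the toy correspondence data are hypothetical.

```python
# Hedged sketch: entropy of character adaptations between aligned cognates.
# Assumption: the metric averages, over source characters, the Shannon
# entropy of the distribution of target characters they map to.
from collections import Counter, defaultdict
import math

def adaptation_entropy(aligned_pairs):
    """Average entropy (bits) of target-character choices per source character.

    aligned_pairs: iterable of (source_char, target_char) correspondences
    taken from character-aligned cognate pairs.
    """
    mappings = defaultdict(Counter)
    for src, tgt in aligned_pairs:
        mappings[src][tgt] += 1
    entropies = []
    for counts in mappings.values():
        total = sum(counts.values())
        h = -sum((n / total) * math.log2(n / total) for n in counts.values())
        entropies.append(h)
    return sum(entropies) / len(entropies)

# Toy example (hypothetical data): one source character with two possible
# adaptations and one with a single deterministic adaptation.
pairs = [("о", "о"), ("о", "о"), ("о", "ъ"), ("е", "е")]
```

The asymmetry aspect follows naturally: computing the entropy in the Bulgarian-to-Russian direction and in the Russian-to-Bulgarian direction generally yields different values, since the correspondence distributions differ by direction.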