Code-mixed text generation systems have found applications in many downstream tasks, including speech recognition, translation and dialogue. A paradigm of these generation systems relies on well-defined grammatical theories of code-mixing, and there is a lack of comparison of these theories. We present a large-scale human evaluation of two popular grammatical theories, Matrix-Embedded Language (ML) and Equivalence Constraint (EC). We compare them against three heuristic-based models and quantitatively demonstrate the effectiveness of the two grammatical theories.
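To make the idea of a heuristic baseline concrete, below is a minimal sketch of one common family of heuristics for code-mixed text generation: random lexical substitution, where tokens of a matrix-language sentence are swapped for embedded-language translations with some probability. The function name, bilingual lexicon, and switch rate here are illustrative assumptions, not the paper's actual baselines, which may differ.

```python
import random

# Toy lexical-substitution baseline for code-mixed text generation.
# NOTE: a hypothetical sketch, not the heuristic models evaluated in
# the paper; the lexicon, switch_prob, and token handling are assumptions.

def lexical_substitution(tokens, bilingual_lexicon, switch_prob=0.3, seed=0):
    """Replace each matrix-language token with an embedded-language
    translation with probability `switch_prob`, when one exists."""
    rng = random.Random(seed)
    mixed = []
    for tok in tokens:
        translation = bilingual_lexicon.get(tok.lower())
        if translation is not None and rng.random() < switch_prob:
            mixed.append(translation)  # switch to the embedded language
        else:
            mixed.append(tok)          # keep the matrix-language token
    return mixed

# Example: English matrix sentence, romanized Hindi embedded words.
lexicon = {"very": "bahut", "good": "accha", "movie": "film"}
print(" ".join(lexical_substitution(
    ["This", "is", "a", "very", "good", "movie"], lexicon)))
# e.g. -> "This is a bahut accha movie"
```

Unlike the ML and EC theories, such a heuristic ignores syntactic constraints on switch points, which is precisely what a human evaluation of the generated sentences can expose.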