Probabilistic context-free grammars (PCFGs) with neural parameterization have been shown to be effective in unsupervised phrase-structure grammar induction. However, due to the cubic computational complexity of PCFG representation and parsing, previous approaches cannot scale up to a relatively large number of (nonterminal and preterminal) symbols. In this work, we present a new parameterization form of PCFGs based on tensor decomposition, which has at most quadratic computational complexity in the symbol number and therefore allows us to use a much larger number of symbols. We further use neural parameterization for the new form to improve unsupervised parsing performance. We evaluate our model across ten languages and empirically demonstrate the effectiveness of using more symbols.
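To make the complexity claim concrete, here is a minimal numpy sketch of the core idea. It assumes a CP (Kruskal) decomposition of the binary-rule probability tensor, with toy sizes and random factor matrices standing in for the paper's learned neural parameterization; it only illustrates why the decomposed inside contraction avoids the cubic cost, not the authors' actual model.

```python
import numpy as np

n, r = 8, 4  # number of symbols, decomposition rank (toy sizes)

# Hypothetical factor matrices of a CP decomposition of the binary-rule
# tensor: T[A, B, C] ~= sum_r U[A, r] * V[B, r] * W[C, r].
rng = np.random.default_rng(0)
U, V, W = (rng.random((n, r)) for _ in range(3))

# Inside scores of the two child spans at one split point (toy values).
left, right = rng.random(n), rng.random(n)

# Naive contraction: materialize the full tensor -- O(n^3) time and space.
T = np.einsum('ar,br,cr->abc', U, V, W)
naive = np.einsum('abc,b,c->a', T, left, right)

# Decomposed contraction: never build T -- O(n * r) per span/split,
# which is what lets the symbol count n grow much larger.
fast = U @ ((V.T @ left) * (W.T @ right))

assert np.allclose(naive, fast)
```

The key step is pushing the sums over child symbols inside the rank dimension: each child vector is projected to the rank-`r` space once, the projections are multiplied elementwise, and only then mapped back to parent symbols.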