We present Hidden-State Optimization (HSO), a gradient-based method for improving the performance of transformer language models at inference time. Similar to dynamic evaluation (Krause et al., 2018), HSO computes the gradient of the log-probability the language model assigns to an evaluation text, but uses it to update the cached hidden states rather than the model parameters. We test HSO with pretrained Transformer-XL and GPT-2 language models, finding improvement on the WikiText-103 and PG-19 datasets in terms of perplexity, especially when evaluating a model outside of its training distribution. We also demonstrate downstream applicability by showing gains in the recently developed prompt-based few-shot evaluation setting, again with no extra parameters or training data.
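Below is a minimal sketch of the core HSO idea described above: take the gradient of the evaluation text's log-probability with respect to the cached states and use it to nudge the cache rather than the parameters. It assumes the Hugging Face transformers GPT-2 API with the legacy tuple-of-tuples cache format, treats GPT-2's cached key/value states as the "hidden states" being optimized, and uses an illustrative chunk split, a single update step, and an assumed step size; none of these details are taken from the paper.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

text = "Some evaluation text split into a context chunk and a target chunk."
ids = tokenizer(text, return_tensors="pt").input_ids
context, target = ids[:, :8], ids[:, 8:]  # illustrative split point

# Run the context once to obtain the cached states.
with torch.no_grad():
    past = model(context).past_key_values

# Turn the cached states into leaf tensors so we can differentiate w.r.t. them.
past = tuple(
    tuple(t.detach().clone().requires_grad_(True) for t in layer)
    for layer in past
)
flat = [t for layer in past for t in layer]

# Negative log-likelihood of the target chunk given the cached context.
out = model(target, past_key_values=past, labels=target)
grads = torch.autograd.grad(out.loss, flat)  # gradients w.r.t. the cache only

lr = 1e-3  # assumed step size, purely illustrative
grad_iter = iter(grads)
updated = tuple(
    tuple(t.detach() - lr * next(grad_iter) for t in layer)
    for layer in past
)
# `updated` would replace the original cache when predicting the next chunk,
# leaving the model parameters untouched.
```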