We address the annotation-data bottleneck for sequence classification. Specifically, we ask: given a budget of N annotations, which samples should be selected for annotation? The solution we propose seeks diversity in the selected samples, maximizing the amount of information useful to the learning algorithm or, equivalently, minimizing the redundancy among the selected samples. This is formulated in the context of spectral learning of recurrent functions for sequence classification. Our method represents unlabeled data in the form of a Hankel matrix and uses the notion of spectral max-volume to find a compact sub-block from which annotation samples are drawn. Experiments on sequence classification confirm that our spectral sampling strategy is indeed effective and yields good models.
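The pipeline described above (empirical Hankel matrix, spectral max-volume sub-block, annotation sample selection) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a standard prefix/suffix frequency construction of the Hankel matrix and the greedy maxvol row-selection algorithm in the style of Goreinov et al.; the names `build_hankel` and `maxvol_rows` are hypothetical.

```python
import numpy as np
from scipy.linalg import qr

def build_hankel(sequences, prefixes, suffixes):
    """Empirical Hankel matrix over unlabeled data:
    H[i, j] = relative frequency of the concatenation prefix_i + suffix_j.
    (A common construction in spectral learning; the paper's exact
    statistics may differ.)"""
    counts = {}
    for s in sequences:
        key = tuple(s)
        counts[key] = counts.get(key, 0) + 1
    total = sum(counts.values())
    H = np.zeros((len(prefixes), len(suffixes)))
    for i, p in enumerate(prefixes):
        for j, q in enumerate(suffixes):
            H[i, j] = counts.get(tuple(p) + tuple(q), 0) / total
    return H

def maxvol_rows(H, rank, n_iter=100, tol=1.05):
    """Greedy max-volume row selection: project H onto its top-`rank`
    left singular vectors, then iteratively swap rows until the chosen
    rank x rank submatrix has (locally) maximal |det|.
    Returns the indices of the selected rows."""
    U, _, _ = np.linalg.svd(H, full_matrices=False)
    A = U[:, :rank]                                 # n x rank projection
    _, _, piv = qr(A.T, pivoting=True)              # pivoted-QR initialization
    rows = list(piv[:rank])
    for _ in range(n_iter):
        B = A @ np.linalg.inv(A[rows])              # expansion coefficients
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) <= tol:                     # no swap grows the volume
            break
        rows[j] = i                                 # swap this row into the basis
    return rows
```

In this hypothetical usage, the prefixes indexing the selected rows identify the compact sub-block; sequences exhibiting those prefixes would then be the ones sent for annotation, up to the budget N.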