This paper describes the offline and simultaneous speech translation systems developed at AppTek for IWSLT 2021. Our offline ST submission includes the direct end-to-end system and the so-called posterior tight integrated model, which is akin to the cascade system but is trained in an end-to-end fashion, where all the cascaded modules are end-to-end models themselves. For simultaneous ST, we combine hybrid automatic speech recognition with a machine translation approach whose translation policy decisions are learned from statistical word alignments. Compared to last year, we improve general quality and provide a wider range of quality/latency trade-offs, largely due to a data augmentation method that makes the MT model robust to varying chunk sizes. Finally, we present a method for segmenting ASR output into sentences that introduces minimal additional delay.
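The chunk-robustness idea can be illustrated with a minimal sketch: from a full parallel sentence pair plus its statistical word alignment, one can derive prefix training pairs at several chunk sizes, where the target prefix keeps only words whose aligned source words fall inside the source prefix. This is a common heuristic for alignment-based augmentation, not the paper's exact recipe; the function name, chunk sizes, and alignment format below are illustrative assumptions.

```python
def prefix_pairs(src_tokens, tgt_tokens, alignment, chunk_sizes=(2, 4, 8)):
    """Yield (source prefix, target prefix) training pairs.

    `alignment` is a list of (src_idx, tgt_idx) word-alignment links.
    A target word is "covered" by a source prefix if all of its aligned
    source words lie inside that prefix; the target prefix stops at the
    first uncovered word. (Heuristic sketch, not the paper's exact policy.)
    """
    pairs = []
    for size in chunk_sizes:
        for end in range(size, len(src_tokens) + 1, size):
            covered = {
                j for j in range(len(tgt_tokens))
                if any(jj == j for _, jj in alignment)
                and all(i < end for i, jj in alignment if jj == j)
            }
            # take the contiguous target prefix of covered words
            tgt_end = 0
            for j in range(len(tgt_tokens)):
                if j in covered:
                    tgt_end = j + 1
                else:
                    break
            pairs.append((src_tokens[:end], tgt_tokens[:tgt_end]))
    return pairs
```

Training an MT model on such prefix pairs alongside full sentences exposes it to partial inputs of many lengths, which is what makes it usable with the varying chunk sizes produced by a streaming ASR front-end.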