
Automatic Prosody Generation for Arabic Text-to-Speech Systems

A Study of the Intonation of Compound Speech in Arabic and Its Automatic Generation (Arabic title)

Publication date: 2011
Research language: Arabic





The main purpose of this research is to provide Arabic Text-to-Speech synthesizers with natural prosody, based on linguistic analysis of the texts to be synthesized and on automatic prosody generation, using rules deduced from the analysis of recorded signals for different types of Arabic sentences. All types of Arabic sentences (declarative and constructive) were enumerated with the help of an expert in Arabic linguistics. A textual corpus of about 2500 sentences covering most of these types was built and recorded both with natural prosody and without prosody. These sentences were then analyzed to extract the effect of prosody on the signal parameters and to build prosody generation rules. In this paper, we present the results for negation sentences, applied to speech synthesized with the open-source tool MBROLA. The results can be used with any parametric Arabic synthesizer. Future work will apply the rules to a new Arabic synthesizer based on semi-syllable units, which is under development at the Higher Institute for Applied Sciences and Technology.
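As an illustration of how such rules can drive a parametric synthesizer, the following minimal Python sketch writes an MBROLA .pho file, in which each line carries a phoneme, its duration in milliseconds, and optional (position %, F0 Hz) pitch targets. The phoneme sequence, the durations, and the simple "raise F0 on the negation particle" rule are illustrative assumptions, not the rules derived in this research.

# Minimal sketch: emit an MBROLA .pho file with a rule-based pitch contour.
# The segmentation and the negation rule below are illustrative assumptions.

def apply_negation_rule(segments, base_f0=120, boost=1.25):
    """Raise the pitch targets of phonemes flagged as part of the negation
    particle; keep the remaining phonemes on a flat baseline F0."""
    lines = []
    for phoneme, dur_ms, in_negation in segments:
        f0 = base_f0 * boost if in_negation else base_f0
        # MBROLA line: phoneme, duration (ms), then (position %, F0 Hz) pairs.
        lines.append(f"{phoneme} {dur_ms} 0 {f0:.0f} 100 {f0:.0f}")
    return "\n".join(lines) + "\n"

# Hypothetical segmentation of a short negated utterance.
segments = [
    ("m", 70, True), ("a:", 140, True),   # negation particle
    ("k", 60, False), ("a", 90, False),
    ("t", 60, False), ("a", 90, False),
    ("b", 60, False), ("a", 90, False),
]

with open("negation.pho", "w", encoding="utf-8") as f:
    f.write(apply_negation_rule(segments))

# The file can then be rendered with an Arabic MBROLA voice database, e.g.:
#   mbrola ar1 negation.pho negation.wav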



Related research

In the present work, we present our Arabic Semi-Syllable Synthesizer. The work consists of seven steps: (1) building a Semi-Syllable Speech Database for the Arabic Semi-Syllable Synthesizer, (2) building the Natural Language Processing Module, which comprises a Text Pre-processing Module and a Text to Phoneme conversion using Arabic Transcription from Orthographic to Phonemes, (3) a Phoneme to Semi-Syllables Mapping using a Syllabification Expert System, and (4) an Acoustic Word Stress Analysis for Continuous Arabic Speech based on the three prosodic parameters (fundamental frequency, intensity, duration) in order to detect stressed syllables.
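The syllabification step can be pictured with a small sketch. The Python function below is a minimal stand-in for the Syllabification Expert System mentioned above, assuming a fully vowelled phonemic transcription in which every syllable starts with one consonant followed by a short or long vowel; the phoneme notation and the greedy rule are assumptions for illustration, not the system's actual rules.

# Minimal sketch of CV-pattern syllabification for a vowelled Arabic word.

VOWELS = {"a", "i", "u", "a:", "i:", "u:"}

def syllabify(phonemes):
    """Greedy split into CV / CVV / CVC(C) syllables: each syllable takes one
    onset consonant and a vowel, then attaches coda consonants only when the
    following consonant is not itself the onset of a new syllable."""
    syllables, i = [], 0
    while i < len(phonemes):
        syllable = [phonemes[i], phonemes[i + 1]]   # onset consonant + vowel
        i += 2
        while i < len(phonemes) and phonemes[i] not in VOWELS and \
                (i + 1 >= len(phonemes) or phonemes[i + 1] not in VOWELS):
            syllable.append(phonemes[i])            # attach coda consonant
            i += 1
        syllables.append("".join(syllable))
    return syllables

print(syllabify(["k", "a", "t", "a", "b", "a"]))   # ['ka', 'ta', 'ba']
print(syllabify(["m", "a", "k", "t", "a", "b"]))   # ['mak', 'tab']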
In general, the aim of an automatic speech recognition system is to write down what is said. State-of-the-art continuous speech recognition systems consist of four basic modules: signal processing, acoustic modeling, language modeling, and the search engine. Isolated word recognition systems, in contrast, do not contain language modeling, which is responsible for connecting words together to form understandable sentences.
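To make the division of labour between these four modules concrete, here is a toy Python sketch; the vocabulary, the bigram probabilities, and the "acoustic" scores are all invented, so it only illustrates how the search engine combines acoustic and language-model scores.

# Toy sketch of the four ASR modules: dummy feature extraction, a fake
# acoustic scorer, a bigram language model and a small beam search.

import random

START = "<s>"
WORDS = ["the", "cat", "sat"]

def signal_processing(audio):
    """Stand-in for feature extraction: one 'frame' per word-sized chunk."""
    return [tuple(audio[i:i + 4]) for i in range(0, len(audio), 4)]

def acoustic_score(frame, word):
    """Fake log P(frame | word); a real system uses an HMM/DNN acoustic model."""
    random.seed(hash((frame, word)) % 10**6)
    return -random.uniform(1.0, 5.0)

BIGRAM_LOGP = {("<s>", "the"): -0.2, ("the", "cat"): -0.3, ("cat", "sat"): -0.4}

def lm_score(prev, word):
    """Bigram language model: log P(word | prev), with a flat back-off."""
    return BIGRAM_LOGP.get((prev, word), -5.0)

def decode(audio, beam=3):
    """Search engine: combine acoustic and language scores frame by frame."""
    hyps = [([START], 0.0)]
    for frame in signal_processing(audio):
        expanded = [(words + [w],
                     score + acoustic_score(frame, w) + lm_score(words[-1], w))
                    for words, score in hyps for w in WORDS]
        hyps = sorted(expanded, key=lambda h: h[1], reverse=True)[:beam]
    return max(hyps, key=lambda h: h[1])[0][1:]

print(decode([0.0] * 12))   # three frames -> a three-word hypothesis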
While abstractive summarization in certain languages, like English, has already reached fairly good results due to the availability of trend-setting resources, like the CNN/Daily Mail dataset, and considerable progress in generative neural models, progress in abstractive summarization for Arabic, the fifth most-spoken language globally, is still in its infancy. While some resources for extractive summarization have been available for some time, in this paper we present the first corpus of human-written abstractive news summaries in Arabic, hoping to lay the foundation of this line of research for this important language. The dataset consists of more than 21 thousand items. We used this dataset to train a set of neural abstractive summarization systems for Arabic by fine-tuning pre-trained language models such as multilingual BERT, AraBERT, and multilingual BART-50. As the Arabic dataset is much smaller than e.g. the CNN/Daily Mail dataset, we also applied cross-lingual knowledge transfer to significantly improve the performance of our baseline systems. The setups included two M-BERT-based summarization models originally trained for Hungarian/English and a similar system based on M-BART-50 originally trained for Russian that were further fine-tuned for Arabic. Evaluation of the models was performed in terms of ROUGE, and a manual evaluation of fluency and adequacy of the models was also performed.
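For readers unfamiliar with this kind of setup, the sketch below shows how an Arabic article can be summarized with a multilingual BART-50 checkpoint through the Hugging Face transformers library. It uses the generic pre-trained facebook/mbart-large-50 model rather than the fine-tuned Hungarian/English or Russian systems described above, so in practice the model would first be fine-tuned on the Arabic corpus.

# Minimal sketch: summarizing an Arabic article with multilingual BART-50.
# Uses the generic pre-trained checkpoint; the systems described above were
# further fine-tuned on the Arabic dataset (and cross-lingually) first.

from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model_name = "facebook/mbart-large-50"
tokenizer = MBart50TokenizerFast.from_pretrained(model_name, src_lang="ar_AR", tgt_lang="ar_AR")
model = MBartForConditionalGeneration.from_pretrained(model_name)

article = "..."  # an Arabic news article from the corpus

inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=1024)
summary_ids = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ar_AR"],  # decode in Arabic
    num_beams=4,
    max_length=128,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))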
The National Virtual Translation Center (NVTC) seeks to acquire human language technology (HLT) tools that will facilitate its mission to provide verbatim English translations of foreign language audio and video files. In the text domain, NVTC has been using translation memory (TM) for some time and has reported on the incorporation of machine translation (MT) into that workflow (Miller et al., 2020). While we have explored the use of speech-to-text (STT) and speech translation (ST) in the past (Tzoukermann and Miller, 2018), we have now invested in the creation of a substantial human-made corpus to thoroughly evaluate alternatives. Results from our analysis of this corpus and the performance of HLT tools point the way to the most promising ones to deploy in our workflow.
Medical simulators provide a controlled environment for training and assessing clinical skills. However, as an assessment platform, they require the presence of an experienced examiner to provide performance feedback, commonly performed using a task-specific checklist. This makes the assessment process inefficient and expensive. Furthermore, this evaluation method does not provide medical practitioners the opportunity for independent training. Ideally, the process of filling the checklist should be done by a fully-aware objective system, capable of recognizing and monitoring the clinical performance. To this end, we have developed an autonomous, fully automatic speech-based checklist system, capable of objectively identifying and validating anesthesia residents' actions in a simulation environment. Based on the analyzed results, our system is capable of recognizing most of the tasks in the checklist: an F1 score of 0.77 for all of the tasks, and an F1 score of 0.79 for the verbal tasks. Developing an audio-based system will improve the experience of a wide range of simulation platforms. Furthermore, in the future, this approach may be implemented in the operating room and emergency room. This could facilitate the development of automatic assistive technologies for these domains.
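As a reminder of what the reported numbers mean, the short example below computes an F1 score from precision and recall; the counts are made up and chosen only so that the result matches the 0.77 figure quoted above.

# Worked example of the F1 metric: the harmonic mean of precision and recall.
# The counts below are invented for illustration, not taken from the paper.

def f1(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 77 correctly detected actions, 23 false alarms, 23 missed actions
print(round(f1(77, 23, 23), 2))   # 0.77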