
Using Speech Technology for Quantifying Behavioral Characteristics in Peer-Led Team Learning Sessions

Added by: Harishchandra Dubey
Publication date: 2017
Language: English





Peer-Led Team Learning (PLTL) is a learning methodology in which a peer leader coordinates a small group of students to collaboratively solve technical problems. PLTL has been adopted for various science, technology, engineering, and mathematics courses at several US universities. This paper proposed and evaluated a speech system for behavioral analysis of PLTL groups, which could help identify best practices for PLTL. The CRSS-PLTL corpus was used for evaluation of the developed algorithms. We developed robust speech activity detection (SAD) by fusing the outputs of a DNN-based pitch extractor and an unsupervised SAD based on voicing measures. The robust speaker diarization system consisted of bottleneck features (from a stacked autoencoder) and an informed HMM-based joint segmentation and clustering system. Behavioral characteristics such as participation, dominance, emphasis, curiosity, and engagement were extracted by acoustic analysis of the speech segments belonging to all students. We proposed a novel method for detecting question inflection and performed equal error rate analysis on the PLTL corpus. In addition, a robust approach for detecting emphasized speech regions was proposed. Further, we performed exploratory data analysis to understand the distortion present in the CRSS-PLTL corpus, as it was collected in a naturalistic scenario. Ground-truth Likert scale ratings were used to capture team dynamics in terms of students' responses to a variety of evaluation questions. Results suggested the applicability of the proposed system for behavioral analysis of small-group conversations such as PLTL sessions and workplace meetings.

Keywords: Behavioral Speech Processing, Bottleneck Features, Curiosity, Deep Neural Network, Dominance, Auto-encoder, Emphasis, Engagement, Peer-Led Team Learning, Speaker Diarization, Small-group Conversations
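The abstract does not spell out how question inflection is detected, so the following is only a minimal, hypothetical sketch of one common heuristic: flag an utterance as question-like when its pitch contour rises toward the end. It uses librosa's pYIN pitch tracker; the tail fraction and rise threshold are illustrative assumptions, not values from the paper.

```python
import numpy as np
import librosa

def question_inflection_score(wav_path, tail_fraction=0.3, rise_threshold=1.15):
    """Heuristic: compare median F0 in the final part of an utterance
    against the median F0 of the rest. A clear rise suggests a
    question-like terminal inflection. Thresholds are illustrative."""
    y, sr = librosa.load(wav_path, sr=16000)
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    f0 = f0[voiced_flag & ~np.isnan(f0)]        # keep voiced, valid frames only
    if len(f0) < 10:                            # too little voicing to decide
        return 0.0, False
    split = int(len(f0) * (1.0 - tail_fraction))
    body, tail = f0[:split], f0[split:]
    ratio = np.median(tail) / np.median(body)   # >1 means rising terminal pitch
    return float(ratio), bool(ratio > rise_threshold)
```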

Related research

Peer-led team learning (PLTL) is a model for teaching STEM courses in which small student groups meet periodically to collaboratively discuss coursework. Automatic analysis of PLTL sessions would help education researchers gain insight into how learning outcomes are impacted by individual participation, group behavior, team dynamics, and so on. Speech and language technology can help toward this goal, and speaker diarization technology lays the foundation for such analysis. In this study, a new corpus called CRSS-PLTL is established, containing speech data from 5 PLTL teams over a semester (10 sessions per team with 5-to-8 participants in each team). In CRSS-PLTL, every participant wears a LENA device (a portable audio recorder) that provides multiple audio recordings of the event. Our proposed solution is unsupervised and contains a new online speaker change detection algorithm, termed the G3 algorithm, used in conjunction with Hausdorff-distance-based clustering to provide improved detection accuracy. Additionally, we exploit cross-channel information to refine the diarization hypothesis. The proposed system provides good improvements in diarization error rate (DER) over the baseline LIUM system. We also present higher-level analyses such as the number of conversational turns taken in a session and the speaking-time duration (participation) of each speaker.
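The Hausdorff-distance-based clustering step can be sketched roughly as follows: each speech segment is represented by its set of frame-level feature vectors (e.g., MFCCs), the symmetric Hausdorff distance is computed between every pair of segments, and agglomerative clustering over that distance matrix groups segments by speaker. This is a simplified illustration, not the paper's G3 algorithm; the function names and the choice of average linkage are assumptions.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def hausdorff_matrix(segments):
    """segments: list of (frames x dims) feature arrays, one per speech segment.
    Returns a symmetric matrix of Hausdorff distances between segments."""
    n = len(segments)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = max(directed_hausdorff(segments[i], segments[j])[0],
                    directed_hausdorff(segments[j], segments[i])[0])
            dist[i, j] = dist[j, i] = d
    return dist

def cluster_segments(segments, n_speakers):
    """Agglomerative clustering of segments into a fixed number of speakers."""
    dist = hausdorff_matrix(segments)
    z = linkage(squareform(dist), method="average")          # condensed distances in, tree out
    return fcluster(z, t=n_speakers, criterion="maxclust")   # 1-based cluster labels
```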
As an indispensable part of modern human-computer interaction systems, speech synthesis technology helps users receive the output of intelligent machines more easily and intuitively, and has therefore attracted more and more attention. Due to the high complexity and low efficiency of traditional speech synthesis technology, the current research focus is deep learning-based end-to-end speech synthesis, which has more powerful modeling ability and a simpler pipeline. It mainly consists of three modules: a text front-end, an acoustic model, and a vocoder. This paper reviews the research status of these three parts, and classifies and compares various methods according to their emphasis. Moreover, this paper also summarizes the open-source speech corpora in English, Chinese, and other languages that can be used for speech synthesis tasks, and introduces some commonly used subjective and objective speech quality evaluation methods. Finally, some attractive future research directions are pointed out.
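As a conceptual illustration of the three-module decomposition described above, the skeleton below wires a text front-end, an acoustic model, and a vocoder into one pipeline. All three components here are hypothetical stubs (real systems would use trained networks, e.g., a sequence-to-sequence acoustic model and a neural vocoder); only the structure is the point.

```python
import numpy as np

class TextFrontend:
    """Stub front-end: normalizes text and maps characters to integer IDs."""
    def __call__(self, text):
        return np.array([ord(c) for c in text.lower().strip()], dtype=np.int64)

class AcousticModel:
    """Stub acoustic model: maps a token sequence to a mel-spectrogram.
    A real implementation would be a trained sequence-to-sequence network."""
    def __call__(self, tokens, n_mels=80, frames_per_token=5):
        return np.random.randn(len(tokens) * frames_per_token, n_mels)

class Vocoder:
    """Stub vocoder: maps a mel-spectrogram to a waveform.
    A real implementation would be, e.g., a trained neural vocoder."""
    def __call__(self, mel, hop_length=256):
        return np.random.randn(mel.shape[0] * hop_length)

def synthesize(text):
    tokens = TextFrontend()(text)   # text front-end
    mel = AcousticModel()(tokens)   # acoustic model
    return Vocoder()(mel)           # vocoder

audio = synthesize("end to end speech synthesis")
```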
Deep clustering (DC) and utterance-level permutation invariant training (uPIT) have been demonstrated to be promising for speaker-independent speech separation. DC is usually formulated as a two-step process, embedding learning and embedding clustering, which results in a complex separation pipeline and a huge obstacle to directly optimizing the actual separation objectives. As for uPIT, it only minimizes the chosen permutation with the lowest mean square error and does not discriminate it from the other permutations. In this paper, we propose a discriminative learning method for speaker-independent speech separation using deep embedding features. Firstly, a DC network is trained to extract deep embedding features, which contain each source's information and have an advantage in discriminating each target speaker. Then these features are used as the input for uPIT to directly separate the different sources. Finally, uPIT and DC are jointly trained, which directly optimizes the actual separation objectives. Moreover, in order to maximize the distance between permutations, discriminative learning is applied to fine-tune the whole model. Our experiments are conducted on the WSJ0-2mix dataset. Experimental results show that the proposed models achieve better performance than DC and uPIT for speaker-independent speech separation.
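The utterance-level permutation-invariant objective described above can be written down compactly: compute the reconstruction error under every assignment of estimated sources to reference sources and keep the smallest one. Below is a small numpy sketch of that loss for illustration; it is not the authors' implementation, and the plain MSE criterion over spectrogram-shaped arrays is an assumption.

```python
import numpy as np
from itertools import permutations

def upit_mse(estimates, references):
    """Utterance-level permutation-invariant MSE.
    estimates, references: arrays of shape (n_sources, time, freq).
    Returns the lowest MSE over all source-to-reference assignments
    and the permutation that achieves it."""
    n_src = estimates.shape[0]
    best_loss, best_perm = np.inf, None
    for perm in permutations(range(n_src)):
        loss = np.mean((estimates[list(perm)] - references) ** 2)
        if loss < best_loss:
            best_loss, best_perm = loss, perm
    return best_loss, best_perm

# Toy usage: two estimated sources matched against two references.
est = np.random.randn(2, 100, 129)
ref = np.random.randn(2, 100, 129)
loss, perm = upit_mse(est, ref)
```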
This paper describes the results of an informal collaboration launched during the African Master of Machine Intelligence (AMMI) in June 2020. After a series of lectures and labs on speech data collection using mobile applications and on self-supervised representation learning from speech, a small group of students and the lecturer continued working on an automatic speech recognition (ASR) project for three languages: Wolof, Ga, and Somali. This paper describes how data was collected and how ASR systems were developed with a small amount (1 h) of transcribed speech as training data. In these low-resource conditions, pre-training a model on large amounts of raw speech was fundamental to the performance of the ASR systems developed.
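A rough picture of the recipe described above, pre-train on raw speech and then fine-tune on roughly one hour of transcribed data, is sketched below in PyTorch: a pretrained self-supervised encoder is frozen and only a small CTC head is trained on the transcribed speech. The encoder here is a placeholder module standing in for a real pretrained model, and the dimensions and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CTCFineTuner(nn.Module):
    """Frozen self-supervised encoder + trainable linear CTC head."""
    def __init__(self, encoder, feat_dim, vocab_size):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():           # keep pretrained weights fixed
            p.requires_grad = False
        self.head = nn.Linear(feat_dim, vocab_size)   # vocab includes the CTC blank

    def forward(self, speech):                        # speech: (batch, time, input_dim)
        with torch.no_grad():
            feats = self.encoder(speech)              # (batch, frames, feat_dim)
        return self.head(feats).log_softmax(-1)       # log-probs for CTC

# Placeholder encoder standing in for a real pretrained model (assumption).
encoder = nn.Sequential(nn.Linear(40, 256), nn.ReLU(), nn.Linear(256, 256))
model = CTCFineTuner(encoder, feat_dim=256, vocab_size=32)
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
opt = torch.optim.Adam(model.head.parameters(), lr=1e-4)

speech = torch.randn(4, 200, 40)            # 4 utterances, 200 frames of 40-d features
targets = torch.randint(1, 32, (4, 30))     # toy label sequences (0 is the blank)
log_probs = model(speech).transpose(0, 1)   # CTC expects (time, batch, vocab)
loss = ctc(log_probs, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 30, dtype=torch.long))
loss.backward()
opt.step()
```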
Neural network based speech recognition systems suffer from performance degradation due to accented speech, especially unfamiliar accents. In this paper, we study a supervised contrastive learning framework for accented speech recognition. To build different views (similar positive data samples) for contrastive learning, three data augmentation techniques, noise injection, spectrogram augmentation, and TTS-same-sentence generation, are investigated. Experiments on the Common Voice dataset show that contrastive learning helps build data-augmentation-invariant and pronunciation-invariant representations, which significantly outperform traditional joint training methods in both zero-shot and full-shot settings. Contrastive learning improves accuracy by 3.66% (zero-shot) and 3.78% (full-shot) on average, compared to the joint training method.
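The supervised contrastive objective used in this line of work can be illustrated with a short PyTorch function: embeddings that share a label (e.g., augmented views of the same sentence) are pulled together relative to all other embeddings in the batch. This is a generic SupCon-style loss sketch under assumed names and an assumed temperature value, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """SupCon-style loss: for each anchor, positives are all other samples
    with the same label; negatives are everything else in the batch.
    embeddings: (batch, dim), labels: (batch,)."""
    z = F.normalize(embeddings, dim=1)
    sim = torch.matmul(z, z.t()) / temperature            # pairwise cosine similarities
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))       # exclude each anchor from its own denominator
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)         # avoid division by zero for singleton labels
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss[pos_mask.any(dim=1)].mean()               # average over anchors that have positives

# Toy usage: 8 utterance embeddings, 4 sentence labels (two augmented views each).
emb = torch.randn(8, 128)
lbl = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
loss = supervised_contrastive_loss(emb, lbl)
```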
