
Identification/Segmentation of Indian Regional Languages with Singular Value Decomposition based Feature Embedding

Published by: Anirban Bhowmick
Publication date: 2020
Paper language: English





Language identification (LID) is the task of identifying the language spoken in a given utterance. Language segmentation is equally important: it locates the boundaries between languages within a multi-language utterance. In this paper, we experiment with two schemes for language identification in the Indian regional language context, where very little prior work exists. Singular-value-based feature embedding is used in both schemes. In the first scheme, singular value decomposition (SVD) is applied to the n-gram utterance matrix; in the second, SVD is applied to the difference supervector matrix space. We observe that in both schemes, 55-65% of the singular value energy is sufficient to capture the language context. With the n-gram based feature representation, we find that different skipgram models capture different language contexts. For short test durations, the supervector-based representation performs better, whereas for longer test signals the n-gram based features perform better. We also extend our work to language-based segmentation: with a ten-language training model on a four-language group, scheme-1 performs well, but with a training model restricted to the same four languages, scheme-2 outperforms scheme-1.
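To make the shared mechanism concrete, here is a minimal sketch of the energy-based rank truncation both schemes rely on: take the SVD of a feature matrix (an n-gram utterance matrix in scheme-1, a difference supervector matrix in scheme-2) and keep the smallest rank whose singular values carry roughly 60% of the total energy, in line with the 55-65% figure reported above. The matrix construction, the squared-singular-value definition of energy, and all names below are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch: SVD feature embedding with energy-based rank selection.
import numpy as np

def svd_embed(X: np.ndarray, energy: float = 0.60) -> np.ndarray:
    """Embed each row of X into the smallest subspace whose singular
    values carry `energy` of the total (squared) singular value energy."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)       # cumulative energy profile
    k = int(np.searchsorted(cum, energy)) + 1  # smallest rank reaching it
    return X @ Vt[:k].T                        # k-dimensional embeddings

# Toy usage: 100 utterances, each a 5000-dimensional n-gram count vector.
X = np.random.poisson(0.1, size=(100, 5000)).astype(float)
print(svd_embed(X).shape)  # (100, k), with k set by the 60% criterion
```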




Read also

In this paper, we conduct one of the first studies of cross-corpora performance evaluation for the spoken language identification (LID) problem. Cross-corpora evaluation has not been explored much in LID research, especially for Indian languages. We select three Indian spoken language corpora: IIITH-ILSC, LDC South Asian, and IITKGP-MLILSC. For each corpus, LID systems are trained on the state-of-the-art time-delay neural network (TDNN) architecture with MFCC features. We observe that LID performance degrades drastically in cross-corpora evaluation. For example, the system trained on the IIITH-ILSC corpus shows an average EER of 11.80% when evaluated on the same corpus but 43.34% on the LDC South Asian corpus. Our preliminary analysis shows significant differences among these corpora in terms of mismatch in the long-term average spectrum (LTAS) and signal-to-noise ratio (SNR). Subsequently, we apply different feature-level compensation methods to reduce the cross-corpora acoustic mismatch. Our results indicate that these feature normalization schemes can help to achieve promising LID performance in cross-corpora experiments.
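A representative feature-level compensation step of the kind this abstract alludes to is cepstral mean and variance normalization (CMVN), applied per utterance to the MFCC frames. Whether the paper uses exactly this scheme is an assumption; the sketch below shows only the standard form of the idea.

```python
# Sketch: per-utterance CMVN over an MFCC frame matrix.
import numpy as np

def cmvn(mfcc: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """mfcc: (num_frames, num_coeffs). Normalize each cepstral
    coefficient to zero mean and unit variance across frames."""
    mu = mfcc.mean(axis=0, keepdims=True)
    sigma = mfcc.std(axis=0, keepdims=True)
    return (mfcc - mu) / (sigma + eps)

frames = np.random.randn(300, 20) * 3.0 + 1.5  # toy 20-coefficient MFCCs
normalized = cmvn(frames)                      # mean ~0, std ~1 per column
```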
In this work, we propose an approach that features deep feature embedding learning and hierarchical classification with a triplet loss function for Acoustic Scene Classification (ASC). On the one hand, a deep convolutional neural network is first trained to learn a feature embedding from scene audio signals. Via the trained convolutional neural network, the learned embedding maps an input into the embedding feature space and transforms it into a high-level feature vector for representation. On the other hand, to exploit the structure of the scene categories, the original scene classification problem is organized into a hierarchy where similar categories are grouped into meta-categories. Hierarchical classification is then accomplished using deep neural network classifiers trained with a triplet loss function. Our experiments show that the proposed system achieves good performance on both the DCASE 2018 Task 1A and 1B datasets, resulting in accuracy gains of 15.6% and 16.6% absolute over the DCASE 2018 baseline on Task 1A and 1B, respectively.
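The triplet loss named above penalizes embeddings in which an anchor sits closer to a negative (different class) than to a positive (same class) by less than a margin. The margin value and the squared Euclidean distance below are illustrative assumptions rather than the paper's exact DCASE configuration.

```python
# Sketch: triplet loss on fixed-length embeddings.
import numpy as np

def triplet_loss(anchor, positive, negative, margin: float = 0.3) -> float:
    """Hinge on squared distances: max(0, d(a,p) - d(a,n) + margin)."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    return max(0.0, d_ap - d_an + margin)

a, p, n = np.random.randn(3, 128)  # toy 128-dimensional embeddings
print(triplet_loss(a, p, n))
```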
An algorithm for the tensor renormalization group is proposed based on a randomized algorithm for singular value decomposition. Our algorithm is applicable to a broad range of two-dimensional classical models. In the case of a square lattice, its computational complexity and memory usage are proportional to the fifth and the third power of the bond dimension, respectively, whereas those of the conventional implementation are of the sixth and the fourth power. An oversampling parameter larger than the bond dimension is sufficient to reproduce the same result as full singular value decomposition, even at the critical point of the two-dimensional Ising model.
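For reference, here is a minimal sketch of the randomized SVD this abstract builds on (in the style of Halko et al.): probe the matrix with rank + oversample random Gaussian vectors, orthonormalize the range, and take an exact SVD of the small projected matrix. The TRG-specific tensor contractions are omitted, and the parameter values are illustrative.

```python
# Sketch: randomized SVD with an oversampling parameter.
import numpy as np

def randomized_svd(A: np.ndarray, rank: int, oversample: int):
    """Approximate rank-`rank` SVD using rank + oversample random probes."""
    Omega = np.random.randn(A.shape[1], rank + oversample)   # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                           # orthonormal range basis
    Ub, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)  # SVD of small matrix
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

A = np.random.randn(200, 200)
# Oversampling chosen larger than the target rank, echoing the abstract's
# "oversampling parameter larger than the bond dimension" condition.
U, s, Vt = randomized_svd(A, rank=20, oversample=25)
```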
In this paper, we discuss an attempt to develop an automatic language identification system for five closely related Indo-Aryan languages of India: Awadhi, Bhojpuri, Braj, Hindi, and Magahi. We have compiled comparable corpora of varying lengths for these languages from various resources and discuss the method of their creation in detail. Using these corpora, a language identification system was developed, which currently gives a state-of-the-art accuracy of 96.48%. We also used these corpora to study the similarity between the five languages at the lexical level, which is the first data-based study of the extent of the closeness of these languages.
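A common baseline for text-based LID over closely related languages is a character n-gram model with a linear classifier; whether the paper's 96.48% system takes this form is an assumption, and the training strings below are placeholders, so the sketch is purely illustrative.

```python
# Sketch: character n-gram LID baseline for closely related languages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data: real corpora would supply many sentences per language.
texts = ["sentence in Awadhi", "sentence in Bhojpuri", "sentence in Braj",
         "sentence in Hindi", "sentence in Magahi"]
labels = ["awadhi", "bhojpuri", "braj", "hindi", "magahi"]

lid = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),  # char n-grams
    LogisticRegression(max_iter=1000),
)
lid.fit(texts, labels)
print(lid.predict(["sentence in Hindi"]))
```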
Statistical analysis of large data sets offers new opportunities to better understand many processes. Yet data accumulation often implies relaxing acquisition procedures or compounding diverse sources. As a consequence, such data sets often contain mixed data, i.e., both quantitative and qualitative variables, and many missing values. Furthermore, aggregated data present a natural multilevel structure, where individuals or samples are nested within different sites, such as countries or hospitals. Imputation of multilevel data has therefore drawn some attention recently, but current solutions are not designed to handle mixed data and suffer from important drawbacks such as their computational cost. In this article, we propose a single imputation method for multilevel data, which can be used to complete quantitative, categorical, or mixed data. The method is based on multilevel singular value decomposition (SVD), which consists of decomposing the variability of the data into two components, the between- and within-group variability, and performing SVD on both parts. We show in a simulation study that, in comparison to competitors, the method has the great advantages of handling data sets of various sizes and being computationally faster. Furthermore, it is the first so far to handle mixed data. We apply the method to impute a medical data set resulting from the aggregation of several data sets coming from different hospitals. This application falls within the framework of a larger project on Trauma patients. To overcome obstacles associated with the aggregation of medical data, we turn to distributed computation. The method is implemented in an R package.
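The core of multilevel SVD is the between/within decomposition: replace each observation by its group mean to form the between-group part, take deviations from group means as the within-group part, and truncate each with its own SVD. The sketch below shows this decomposition only; the iterative imputation loop, the mixed-data coding, and the rank-selection rule from the paper are omitted, and the ranks are illustrative assumptions.

```python
# Sketch: between/within decomposition with separate truncated SVDs.
import numpy as np

def multilevel_svd(X: np.ndarray, groups: np.ndarray, rank_b: int = 2, rank_w: int = 2):
    """X: (n_samples, n_vars); groups: integer labels 0..G-1, one per sample."""
    means = np.vstack([X[groups == g].mean(axis=0) for g in np.unique(groups)])
    between = means[groups]   # each row replaced by its group mean
    within = X - between      # deviations from the group means

    def truncate(M, r):       # rank-r SVD reconstruction
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return (U[:, :r] * s[:r]) @ Vt[:r]

    return truncate(between, rank_b) + truncate(within, rank_w)

# Toy usage: 3 "hospitals" of 20 samples each, 8 variables.
X = np.random.randn(60, 8) + np.repeat(np.random.randn(3, 8), 20, axis=0)
low_rank = multilevel_svd(X, np.repeat(np.arange(3), 20))
```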