The ability to accurately perceive whether a speaker is asking a question or making a statement is crucial for any successful interaction. However, learning and classifying tonal patterns has been a challenging task for automatic speech recognition and for models of tonal representation, as tonal contours are characterized by significant variation. This paper provides a classification model of Cypriot Greek questions and statements. We evaluate two state-of-the-art network architectures: a Long Short-Term Memory (LSTM) network and a convolutional network (ConvNet). The ConvNet outperforms the LSTM on the classification task, achieving an excellent 95% classification accuracy.
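The intuition behind the convolutional approach can be sketched in a few lines. The toy code below is purely illustrative (a single hand-set kernel over a synthetic F0 contour, not the paper's trained model): a 1-D convolution acts as a local pattern detector, and a kernel tuned to rising pitch responds to the final F0 rise that often marks a question while staying silent on a falling statement contour.

```python
# Hypothetical sketch: how one 1-D convolutional filter can scan an F0 (pitch)
# contour for local tonal movements. Kernel, contours, and values are
# illustrative only, not the paper's actual model or data.

def conv1d(contour, kernel, stride=1):
    """Slide a kernel over a pitch contour, producing a feature map."""
    k = len(kernel)
    return [sum(contour[i + j] * kernel[j] for j in range(k))
            for i in range(0, len(contour) - k + 1, stride)]

def relu(xs):
    return [max(0.0, x) for x in xs]

# A rising-edge detector: large positive response where pitch increases.
rise_kernel = [-1.0, 0.0, 1.0]

question_f0 = [120, 118, 119, 121, 130, 145, 160]   # final rise (Hz)
statement_f0 = [150, 148, 140, 132, 125, 120, 118]  # final fall (Hz)

q_feat = relu(conv1d(question_f0, rise_kernel))
s_feat = relu(conv1d(statement_f0, rise_kernel))

# A real ConvNet stacks many learned kernels and pools their feature maps;
# here max-pooling a single rise detector already separates the two contours.
print(max(q_feat) > max(s_feat))  # True: the rise detector fires on the question
```

In a full model the kernels are learned from data rather than hand-set, and a pooled stack of such feature maps feeds a classifier head.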
Calculations of nuclei are often carried out in finite model spaces. Thus, finite-size corrections enter, and it is necessary to extrapolate the computed observables to infinite model spaces. In this work, we employ extrapolation methods based on artificial neural networks for observables such as the ground-state energy and the point-proton radius. We extrapolate results from no-core shell model and coupled-cluster calculations to very large model spaces and estimate uncertainties. Training the network on different data typically yields extrapolation results that cluster around distinct values. We show that a preprocessing of input data, and the inclusion of correlations among the input data, reduces the problem of multiple solutions and yields more stable extrapolated results and consistent uncertainty estimates. We perform extrapolations for ground-state energies and radii in $^{4}$He, $^{6}$Li, and $^{16}$O, and compare the predictions from neural networks with results from infrared extrapolations.
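The basic idea of network-based extrapolation can be sketched with synthetic data. The code below is a minimal illustration, not the authors' implementation: a tiny feed-forward net is fit to ground-state energies from small model spaces (truncation parameter Nmax), mimicking the typical exponential convergence pattern, and can then be queried far beyond the training window. All numbers are hypothetical.

```python
# Minimal sketch (not the authors' code) of neural-network extrapolation:
# train a one-hidden-layer tanh net on energies E(Nmax) from small model
# spaces and query it at large Nmax. Synthetic data follows the typical
# convergence form E(Nmax) = E_inf + a * exp(-b * Nmax); values illustrative.
import math, random

random.seed(0)
E_inf, a, b = -28.3, 10.0, 0.5            # hypothetical converged energy (MeV)
data = [(n, E_inf + a * math.exp(-b * n)) for n in range(4, 16, 2)]

H = 4                                      # hidden units, tanh activation
w1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = sum(E for _, E in data) / len(data)   # start output at the mean energy

def predict(n):
    x = n / 10.0                           # simple input scaling
    h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
    return sum(w2[j] * h[j] for j in range(H)) + b2

def loss():
    return sum((predict(n) - E) ** 2 for n, E in data) / len(data)

loss_before = loss()
lr, m = 0.01, len(data)
for _ in range(3000):                      # plain batch gradient descent
    g1 = [0.0] * H; gb1 = [0.0] * H; g2 = [0.0] * H; gb2 = 0.0
    for n, E in data:
        x = n / 10.0
        h = [math.tanh(w1[j] * x + b1[j]) for j in range(H)]
        e = sum(w2[j] * h[j] for j in range(H)) + b2 - E
        for j in range(H):
            g2[j] += e * h[j]
            g1[j] += e * w2[j] * (1 - h[j] ** 2) * x
            gb1[j] += e * w2[j] * (1 - h[j] ** 2)
        gb2 += e
    for j in range(H):
        w1[j] -= lr * g1[j] / m; b1[j] -= lr * gb1[j] / m
        w2[j] -= lr * g2[j] / m
    b2 -= lr * gb2 / m

# Query far outside the training window; a saturating tanh net levels off
# there, which is the desired behavior for a converging observable.
extrapolated = predict(100)
```

In practice, as the abstract notes, retraining on different data subsets scatters such extrapolations, which is why the preprocessing and input correlations discussed there matter for stable predictions and uncertainties.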
Over the last couple of years, Recurrent Neural Networks (RNNs) have reached state-of-the-art performance on most sequence modelling problems. In particular, the sequence-to-sequence model and the neural CRF have proved very effective in this domain. In this article, we propose a new RNN architecture for sequence labelling that leverages gated recurrent layers to take arbitrarily long contexts into account and uses two decoders operating forward and backward. We compare several variants of the proposed solution against the state of the art. Most of our results are better than, or very close to, the state of the art, and thanks to the use of recent technologies our architecture can scale to corpora larger than those used in this work.
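The forward/backward idea can be sketched structurally. The code below is a hypothetical skeleton, not the proposed model: it uses plain Elman cells in place of gated layers and arbitrary small dimensions, but shows the same pattern of one recurrent pass left-to-right, one right-to-left, and per-token label scores computed from both directions' context.

```python
# Architectural sketch only (hypothetical dimensions, plain Elman cells
# instead of gated recurrent layers): a sequence labeller with one encoder
# pass in each direction and a per-token scorer over the joined states.
import math, random

random.seed(1)
D, H, L = 3, 2, 4                 # input dim, hidden dim, number of labels

def mat(r, c):
    return [[random.uniform(-0.5, 0.5) for _ in range(c)] for _ in range(r)]

Wx_f, Wh_f = mat(H, D), mat(H, H)     # forward-pass cell weights
Wx_b, Wh_b = mat(H, D), mat(H, H)     # backward-pass cell weights
Wo = mat(L, 2 * H)                    # per-token label scorer

def step(Wx, Wh, x, h):
    """One recurrent step: new state from input x and previous state h."""
    return [math.tanh(sum(Wx[i][j] * x[j] for j in range(D)) +
                      sum(Wh[i][j] * h[j] for j in range(H)))
            for i in range(H)]

def label_sequence(xs):
    h = [0.0] * H
    fwd = []
    for x in xs:                           # left-to-right pass
        h = step(Wx_f, Wh_f, x, h)
        fwd.append(h)
    h = [0.0] * H
    bwd = [None] * len(xs)
    for t in range(len(xs) - 1, -1, -1):   # right-to-left pass
        h = step(Wx_b, Wh_b, xs[t], h)
        bwd[t] = h
    # score each label from the concatenated forward and backward states
    return [[sum(Wo[l][k] * (fwd[t] + bwd[t])[k] for k in range(2 * H))
             for l in range(L)]
            for t in range(len(xs))]

sentence = [[0.1, -0.2, 0.3], [0.0, 0.5, -0.1], [0.4, 0.1, 0.2]]
scores = label_sequence(sentence)          # one score vector per token
```

Because the backward pass sees the future of each token and the forward pass its past, every label decision conditions on the whole sequence, which is what makes the two-decoder design effective.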
What makes an artificial neural network easier to train and more likely to produce desirable solutions than other comparable networks? In this paper, we provide a new angle on such questions under the setting of a fixed number of model parameters, which is in general the most dominant cost factor. We introduce a notion of variability and show that it correlates positively with the activation ratio and negatively with a phenomenon called Collapse to Constants (C2C), which is closely related but not identical to the phenomenon commonly known as vanishing gradients. Experiments on a stylized model problem empirically verify that variability is indeed a key performance indicator for fully connected neural networks. The insights gained from this variability study will help the design of new and effective neural network architectures.
The identification of expanding HI shells is difficult because of their variable morphological characteristics; the detection of HI bubbles on a global scale has therefore never been attempted. In this paper, an automatic detector for expanding HI shells is presented. The detection is based on the more stable dynamical characteristics of expanding shells and is performed in two stages. The first stage is the recognition of the dynamical signature of an expanding bubble in the velocity spectra, based on the classification of an artificial neural network; the pixels associated with the recognized spectra are identified on each velocity channel. The second stage consists of searching for concentrations of the pixels flagged in the first stage and deciding whether they are potential detections, based on morphological considerations and variations in 21-cm emission. Two test bubbles are correctly detected, and a potentially new, visually very convincing shell is discovered. About 0.6% of the surveyed pixels are identified as part of a bubble. Some of these may be false detections, but they still constitute regions of space with a high probability of containing an expanding shell, so the subsequent search field is significantly reduced. We intend to conduct a large-scale detection of HI shells over the Perseus Arm in the near future using our detector.
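The two-stage pipeline can be sketched on toy data. In the code below, everything is synthetic and the hand-written double-peak test merely stands in for the trained neural classifier: stage 1 flags pixels whose velocity spectrum shows the shell signature (two velocity components, from the approaching and receding caps of the bubble), and stage 2 keeps only spatial concentrations of flagged pixels, discarding isolated false positives.

```python
# Toy sketch of the two-stage detection pipeline. A hand-written double-peak
# test replaces the trained artificial neural network of stage 1; the grid
# and spectra are synthetic and purely illustrative.

def shell_signature(spectrum, threshold=1.0):
    """Stage 1 stub: flag spectra with two separated peaks above threshold."""
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > threshold
             and spectrum[i] >= spectrum[i - 1]
             and spectrum[i] >= spectrum[i + 1]]
    return len(peaks) >= 2

def concentrations(flags, min_size=3):
    """Stage 2: group flagged pixels into 4-connected clumps, keep large ones."""
    seen, clumps = set(), []
    for start in flags:
        if start in seen:
            continue
        clump, stack = [], [start]
        seen.add(start)
        while stack:
            (y, x) = stack.pop()
            clump.append((y, x))
            for n in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if n in flags and n not in seen:
                    seen.add(n)
                    stack.append(n)
        if len(clump) >= min_size:
            clumps.append(clump)
    return clumps

# Synthetic 4x4 field: a 2x2 patch of double-peaked (shell-like) spectra plus
# one isolated flagged pixel that stage 2 should reject.
double = [0.1, 2.0, 0.2, 2.1, 0.1]    # two peaks: shell-like signature
single = [0.1, 0.2, 2.0, 0.2, 0.1]    # one peak: ordinary emission
cube = {(y, x): single for y in range(4) for x in range(4)}
for p in [(0, 0), (0, 1), (1, 0), (1, 1), (3, 3)]:
    cube[p] = double

flagged = {p for p, spec in cube.items() if shell_signature(spec)}
detections = concentrations(flagged, min_size=3)   # one 4-pixel clump survives
```

The real detector then applies the morphological and 21-cm-emission checks described above to each surviving concentration before declaring a candidate shell.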
Morphological declension, which inflects nouns to indicate number, case, and gender, is an important task in natural language processing (NLP). This research proposal seeks to address the degree to which Recurrent Neural Networks (RNNs) are effective at learning to decline noun cases. Given the challenge of data sparsity in processing morphologically rich languages, and the flexibility of sentence structures in such languages, we believe that modeling morphological dependencies can improve the performance of neural network models. We propose various experiments to understand the interpretable features that may lead to better generalization of the learned models on cross-lingual tasks.