
A Consumer BCI for Automated Music Evaluation Within a Popular On-Demand Music Streaming Service - Taking Listeners' Brainwaves to Extremes

Added by Dr. Dimitrios Adamos
Publication date: 2016
Language: English





We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG devices for translating listeners' subjective experience of music into scores that can be used for the automated annotation of music in popular on-demand streaming services. Based on the established, neuroscientifically sound concepts of brainwave frequency bands, the activation asymmetry index and cross-frequency coupling (CFC), we introduce a Brain-Computer Interface (BCI) system that automatically assigns a rating score to the listened song. Our research operated in two distinct stages: i) a generic feature engineering stage, in which features from signal analytics were ranked and selected based on their ability to associate music-induced perturbations in brainwaves with listeners' appraisal of music; ii) a personalization stage, during which the efficiency of extreme learning machines (ELMs) is exploited so as to translate the derived patterns into a listener's score. Encouraging experimental results, from a pragmatic use of the system, are presented.
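As a rough illustration of the two stages described in the abstract, the sketch below (not the authors' exact pipeline) computes band-power and frontal alpha-asymmetry features from a pair of prefrontal EEG channels and fits a small extreme learning machine that maps them to a rating score. The sampling rate, band edges, channel pairing and hidden-layer size are illustrative assumptions.

```python
# Minimal sketch, assuming a two-channel prefrontal montage and a 128 Hz
# consumer headset; band edges and the ELM size are illustrative, not the paper's.
import numpy as np
from scipy.signal import welch

FS = 128  # assumed sampling rate (Hz)

def band_power(x, fs, lo, hi):
    """Average PSD of a single-channel signal inside [lo, hi] Hz."""
    f, pxx = welch(x, fs=fs, nperseg=fs * 2)
    return pxx[(f >= lo) & (f <= hi)].mean()

def features(left, right, fs=FS):
    """Band powers plus the frontal alpha asymmetry index ln(R) - ln(L)."""
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    feats = [band_power(ch, fs, lo, hi)
             for ch in (left, right) for lo, hi in bands.values()]
    asym = np.log(band_power(right, fs, 8, 13)) - np.log(band_power(left, fs, 8, 13))
    return np.array(feats + [asym])

class ELMRegressor:
    """Single-hidden-layer ELM: random input weights, least-squares output weights."""
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.rng = n_hidden, np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)   # random hidden-layer mapping
        self.beta = np.linalg.pinv(H) @ y  # closed-form output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```

The appeal of the ELM in this context is that training reduces to a single least-squares solve, which suits a quick per-listener personalization stage.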



Related research

Fotis Kalaganis, 2017
We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG devices for translating listeners' subjective experience of music into scores that can be used in popular on-demand music streaming services. Our study resulted in two variants, differing in terms of performance and execution time, and hence subserving distinct applications in online streaming music platforms. The first method, NeuroPicks, is extremely accurate but slower. It is based on the well-established neuroscientific concepts of brainwave frequency bands, the activation asymmetry index and cross-frequency coupling (CFC). The second method, NeuroPicksVQ, offers prompt predictions of lower credibility and relies on a custom-built vector quantization procedure that facilitates a novel parameterization of the music-modulated brainwaves. Beyond the feature engineering step, both methods exploit the inherent efficiency of extreme learning machines (ELMs) so as to translate, in a personalized fashion, the derived patterns into a listener's score. The NeuroPicks method may find applications as an integral part of contemporary music recommendation systems, while NeuroPicksVQ can control the selection of music tracks. Encouraging experimental results, from a pragmatic use of the systems, are presented.
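As a rough sketch of the vector-quantization idea behind the faster NeuroPicksVQ variant, the snippet below learns a small codebook over per-frame EEG feature vectors and summarizes each song as a histogram of codeword assignments. The codebook size and the use of plain k-means are assumptions for illustration, not the paper's custom-built procedure.

```python
# Minimal sketch, assuming per-frame EEG feature vectors are already available.
import numpy as np
from sklearn.cluster import KMeans

def train_codebook(frame_features, n_codes=32, seed=0):
    """Learn a codebook (cluster centres) from per-frame EEG feature vectors."""
    return KMeans(n_clusters=n_codes, n_init=10, random_state=seed).fit(frame_features)

def song_histogram(codebook, frame_features):
    """Represent one song as the normalized histogram of assigned codewords."""
    codes = codebook.predict(frame_features)
    hist = np.bincount(codes, minlength=codebook.n_clusters).astype(float)
    return hist / hist.sum()
```

A song-level histogram of this kind could then be passed to the same kind of ELM regressor to produce a personalized score with very little computation at prediction time.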
Graphical passwords have been demonstrated to be possible alternatives to traditional alphanumeric passwords. However, they still tend to follow predictable patterns that are easier to attack. The crux of the problem is users' memory limitations: users are the weakest link in password authentication mechanisms. It has been shown that baroque music has positive effects on human memorization and learning. In this paper we introduce baroque music to the PassPoints graphical password scheme and conduct a laboratory study. Results showed no statistically significant difference between the music group and the control group without music in short-term recall experiments; both had high recall success rates. In long-term recall, however, the music group performed significantly better. We also found that the music group tended to set significantly more complicated passwords, which are usually more resistant to dictionary and other guessing attacks. Compared with the control group, however, the music group took more time to log in, both in short-term and long-term tests. In addition, background music appears to have no effect in terms of hotspots.
We present a new approach to harmonic analysis that is trained to segment music into a sequence of chord spans tagged with chord labels. Formulated as a semi-Markov Conditional Random Field (semi-CRF), this joint segmentation and labeling approach enables the use of a rich set of segment-level features, such as segment purity and chord coverage, that capture the extent to which the events in an entire segment of music are compatible with a candidate chord label. The new chord recognition model is evaluated extensively on three corpora of classical music and a newly created corpus of rock music. Experimental results show that the semi-CRF model performs substantially better than previous approaches when trained on a sufficient number of labeled examples and remains competitive when the amount of training data is limited.
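To make the joint segmentation-and-labeling idea concrete, here is a minimal semi-Markov (Viterbi-style) decoder over an arbitrary segment-scoring function. The real semi-CRF would plug in learned weights over segment-level features such as purity and chord coverage, and would typically also score transitions between adjacent chord labels, which this sketch omits.

```python
# Minimal sketch of semi-Markov decoding; score(s, e, y) is a placeholder for the
# model's learned segment-level scoring function.
import numpy as np

def semi_markov_decode(n_frames, labels, score, max_len=16):
    """Return the best-scoring sequence of (start, end, label) spans
    covering frames [0, n_frames); score(s, e, y) scores span [s, e) with label y."""
    best = np.full(n_frames + 1, -np.inf)
    best[0] = 0.0
    back = [None] * (n_frames + 1)          # back-pointers: (segment start, label)
    for e in range(1, n_frames + 1):
        for s in range(max(0, e - max_len), e):
            for y in labels:
                cand = best[s] + score(s, e, y)
                if cand > best[e]:
                    best[e], back[e] = cand, (s, y)
    spans, e = [], n_frames
    while e > 0:                            # recover the segmentation
        s, y = back[e]
        spans.append((s, e, y))
        e = s
    return spans[::-1]
```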
Objective evaluation (OE) is essential to artificial music, but it is often very hard to determine the quality of OEs. Hitherto, subjective evaluation (SE) remains reliable and prevailing but suffers inevitable disadvantages that OEs may overcome. Therefore, a meta-evaluation system is necessary for designers to test the effectiveness of OEs. In this paper, we present Armor, a complex and cross-domain benchmark dataset that serves this purpose. Since OEs should correlate with human judgment, we provide music as test cases for OEs and human judgment scores as touchstones. We also provide two meta-evaluation scenarios and their corresponding testing methods to assess the effectiveness of OEs. To the best of our knowledge, Armor is the first comprehensive and rigorous framework that future work can follow, draw on, and improve upon for the task of evaluating computer-generated music and the field of computational music as a whole. By analyzing different OE methods on our dataset, we observe that there is still a huge gap between SE and OE, meaning that hard-coded algorithms are far from capturing human judgment of music.
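The core of such a meta-evaluation is measuring how well an objective metric's scores track human judgment on the same pieces. A minimal version, using rank and linear correlation (the dataset's own scenarios and testing methods may differ), could look like this:

```python
# Minimal sketch: agreement between an objective metric's scores and human ratings.
from scipy.stats import spearmanr, pearsonr

def meta_evaluate(oe_scores, human_scores):
    """Rank and linear agreement between an OE metric and human (SE) judgment."""
    rho, rho_p = spearmanr(oe_scores, human_scores)
    r, r_p = pearsonr(oe_scores, human_scores)
    return {"spearman": rho, "spearman_p": rho_p, "pearson": r, "pearson_p": r_p}
```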
Recent advances in biosensor technology and mobile electroencephalographic (EEG) interfaces have opened new application fields for cognitive monitoring. A computable biomarker for the assessment of spontaneous aesthetic brain responses during music listening is introduced here. It derives from well-established measures of cross-frequency coupling (CFC) and quantifies the music-induced alterations in the dynamic relationships between brain rhythms. During a stage of exploratory analysis, and using the signals from a suitably designed experiment, we established the biomarker, which acts on brain activations recorded over the left prefrontal cortex and focuses on the functional coupling between high-beta and low-gamma oscillations. Based on data from an additional experimental paradigm, we validated the introduced biomarker and showed its relevance for expressing the subjective aesthetic appreciation of a piece of music. Our approach resulted in an affordable tool that can promote human-machine interaction and, by serving as a personalized music annotation strategy, can be potentially integrated into modern flexible music recommendation systems.
Keywords: Cross-frequency coupling; Human-computer interaction; Brain-computer interface
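For illustration, one common cross-frequency-coupling estimator (the mean-vector-length measure of phase-amplitude coupling) applied to the high-beta/low-gamma pair mentioned above might look like the sketch below. The paper's exact CFC measure, band edges and channel montage are not reproduced here and should be treated as assumptions.

```python
# Minimal sketch, assuming a single left-prefrontal channel and nominal band edges.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def beta_gamma_coupling(x, fs, beta=(20, 30), gamma=(30, 45)):
    """Coupling between high-beta phase and low-gamma amplitude of one channel."""
    phase = np.angle(hilbert(bandpass(x, fs, *beta)))   # high-beta phase series
    amp = np.abs(hilbert(bandpass(x, fs, *gamma)))      # low-gamma envelope
    return np.abs(np.mean(amp * np.exp(1j * phase)))    # mean vector length
```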
