
Towards the bio-personalization of music recommendation systems: A single-sensor EEG biomarker of subjective music preference

Added by Dr. Dimitrios Adamos
Publication date: 2016
Language: English





Recent advances in biosensor technology and mobile electroencephalographic (EEG) interfaces have opened new application fields for cognitive monitoring. A computable biomarker for the assessment of spontaneous aesthetic brain responses during music listening is introduced here. It derives from well-established measures of cross-frequency coupling (CFC) and quantifies the music-induced alterations in the dynamic relationships between brain rhythms. During a stage of exploratory analysis, using the signals from a suitably designed experiment, we established the biomarker, which acts on brain activations recorded over the left prefrontal cortex and focuses on the functional coupling between high-beta and low-gamma oscillations. Based on data from an additional experimental paradigm, we validated the biomarker and showed its relevance for expressing the subjective aesthetic appreciation of a piece of music. Our approach resulted in an affordable tool that can promote human-machine interaction and, by serving as a personalized music annotation strategy, can potentially be integrated into modern flexible music recommendation systems.

Keywords: Cross-frequency coupling; Human-computer interaction; Brain-computer interface
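To make the kind of measure involved concrete, the sketch below computes a mean-vector-length phase-amplitude coupling index (Canolty et al., 2006) between high-beta phase and low-gamma amplitude on a single channel. This is a minimal illustration, not the paper's biomarker: the band edges (20-30 Hz, 30-45 Hz) and the MVL estimator are assumptions chosen for the example.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    # Zero-phase band-pass filter between lo and hi Hz
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, fs, phase_band=(20, 30), amp_band=(30, 45)):
    """Mean-vector-length phase-amplitude coupling between the phase of
    one band and the amplitude envelope of another (bands are assumed)."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    # Length of the amplitude-weighted mean phase vector, normalized
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)

# Example: score a 30 s single-channel recording sampled at 256 Hz
fs = 256
eeg = np.random.randn(30 * fs)  # stand-in for a real prefrontal signal
print(pac_mvl(eeg, fs))
```

A value near zero indicates no preferred phase for the gamma envelope; larger values indicate stronger coupling, which is the kind of quantity a CFC-based biomarker would track across listening sessions.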



Related research

Fotis Kalaganis, 2017
We investigated the possibility of using a machine-learning scheme in conjunction with commercial wearable EEG devices for translating listeners' subjective experience of music into scores that can be used in popular on-demand music streaming services. Our study resulted in two variants, differing in terms of performance and execution time, and hence subserving distinct applications in online streaming music platforms. The first method, NeuroPicks, is extremely accurate but slower. It is based on the well-established neuroscientific concepts of brainwave frequency bands, the activation asymmetry index and cross-frequency coupling (CFC). The second method, NeuroPicksVQ, offers prompt predictions of lower credibility and relies on a custom-built version of the vector quantization procedure that facilitates a novel parameterization of the music-modulated brainwaves. Beyond the feature engineering step, both methods exploit the inherent efficiency of extreme learning machines (ELMs) so as to translate, in a personalized fashion, the derived patterns into a listener's score. The NeuroPicks method may find applications as an integral part of contemporary music recommendation systems, while NeuroPicksVQ can control the selection of music tracks. Encouraging experimental results, from a pragmatic use of the systems, are presented.
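The ELM readout mentioned above admits a very compact implementation: a fixed random hidden layer followed by a closed-form least-squares fit of the output weights. The sketch below is a generic ELM regressor, not the authors' code; the feature dimensionality, hidden size and score range are placeholders.

```python
import numpy as np

class ELMRegressor:
    """Minimal extreme learning machine: random hidden layer,
    ridge-regularized least-squares readout."""
    def __init__(self, n_hidden=200, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        # Hidden weights are drawn once and never trained
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        # Closed-form ridge solution for the output weights
        self.beta = np.linalg.solve(
            H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ y)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

# Toy usage: map EEG-derived feature vectors to preference scores in [1, 5]
X = np.random.randn(100, 12)      # e.g., band powers, asymmetry, CFC features
y = np.random.uniform(1, 5, 100)  # stand-in listener ratings
model = ELMRegressor().fit(X, y)
print(model.predict(X[:3]))
```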
Descriptions are often provided along with recommendations to aid users' discovery. Recommending automatically generated music playlists (e.g. personalised playlists) introduces the problem of generating descriptions. In this paper, we propose a method for generating music playlist descriptions, which we call music captioning. In the proposed method, audio content analysis and natural language processing are adopted to utilise the information of each track.
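The captioning model itself is not detailed in this abstract. As a deliberately naive stand-in that only makes the task's input and output concrete, the sketch below pools hypothetical per-track tags (of the kind an audio tagger might produce) into a template description.

```python
from collections import Counter

def caption_playlist(track_tags, max_terms=3):
    """Naive template-based playlist description: pool per-track tags
    (assumed to come from audio content analysis) and verbalize the
    most common ones. A toy illustration, not the paper's method."""
    pooled = Counter(tag for tags in track_tags for tag in tags)
    top = [tag for tag, _ in pooled.most_common(max_terms)]
    head = ", ".join(top[:-1])
    desc = f"{head} and {top[-1]}" if head else top[-1]
    return f"A playlist of {desc} tracks."

# Hypothetical tags produced by an audio tagger for three tracks
tags = [["chill", "electronic", "ambient"],
        ["electronic", "downtempo"],
        ["ambient", "chill"]]
print(caption_playlist(tags))  # "A playlist of chill, electronic and ambient tracks."
```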
Modeling of music audio semantics has been previously tackled through learning of mappings from audio data to high-level tags or latent unsupervised spaces. The resulting semantic spaces are theoretically limited, either because the chosen high-level tags do not cover all of music semantics or because audio data itself is not enough to determine music semantics. In this paper, we propose a generic framework for semantics modeling that focuses on the perception of the listener, through EEG data, in addition to audio data. We implement this framework using a novel end-to-end 2-view Neural Network (NN) architecture and a Deep Canonical Correlation Analysis (DCCA) loss function that forces the semantic embedding spaces of both views to be maximally correlated. We also detail how the EEG dataset was collected and use it to train our proposed model. We evaluate the learned semantic space in a transfer learning context, by using it as an audio feature extractor in an independent dataset and proxy task: music audio-lyrics cross-modal retrieval. We show that our embedding model outperforms Spotify features and performs comparably to a state-of-the-art embedding model that was trained on 700 times more data. We further discuss modifications to the model that are likely to improve its performance.
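A DCCA-style loss can be written directly from its definition (Andrew et al., 2013): whiten each view's batch covariance and sum the singular values of the cross-covariance, which equals the total canonical correlation. The PyTorch sketch below is a generic implementation under assumed sizes (128-d audio features, 256-d EEG features, 16-d shared embedding); the paper's actual architecture and dimensions are not given here.

```python
import torch
import torch.nn as nn

def cca_loss(H1, H2, eps=1e-4):
    """Negative total canonical correlation between two batches of
    embeddings H1, H2 of shape (batch, dim)."""
    n = H1.shape[0]
    H1, H2 = H1 - H1.mean(0), H2 - H2.mean(0)
    S12 = H1.T @ H2 / (n - 1)
    S11 = H1.T @ H1 / (n - 1) + eps * torch.eye(H1.shape[1])
    S22 = H2.T @ H2 / (n - 1) + eps * torch.eye(H2.shape[1])

    def inv_sqrt(S):
        # Symmetric inverse square root via eigendecomposition
        w, v = torch.linalg.eigh(S)
        return v @ torch.diag(w.clamp(min=eps).rsqrt()) @ v.T

    # Sum of singular values of the whitened cross-covariance
    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return -torch.linalg.svdvals(T).sum()

# Two small encoders standing in for the audio and EEG views (assumed dims)
audio_net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 16))
eeg_net = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 16))
audio, eeg = torch.randn(32, 128), torch.randn(32, 256)
loss = cca_loss(audio_net(audio), eeg_net(eeg))
loss.backward()  # gradients flow through both views
```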
Simultaneously recorded electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) can be used to non-invasively measure the spatiotemporal dynamics of the human brain. One challenge is dealing with the artifacts that each modality introduces into the other when the two are recorded concurrently, for example the ballistocardiogram (BCG). We conducted a preliminary comparison of three different MR-compatible EEG recording systems and assessed their performance in terms of single-trial classification of the EEG when simultaneously collecting fMRI. We found tradeoffs across all three systems, for example varied ease of setup, and improved classification accuracy with reference electrodes (REF) but not with pulse artifact subtraction (PAS) or reference-layer adaptive filtering (RLAF).
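Reference-based artifact removal of this kind is often framed as adaptive filtering: predict the artifact in the EEG channel from a reference signal and subtract the prediction. The sketch below is a generic least-mean-squares (LMS) filter on synthetic data, illustrating the general idea rather than the specific RLAF pipeline evaluated in the study.

```python
import numpy as np

def lms_clean(eeg, ref, n_taps=16, mu=0.01):
    """LMS adaptive filter: estimate the artifact in `eeg` from a
    reference channel (e.g., a reference-layer electrode that sees the
    BCG but little brain activity) and subtract the estimate."""
    w = np.zeros(n_taps)
    out = np.copy(eeg)
    for t in range(n_taps, len(eeg)):
        x = ref[t - n_taps:t][::-1]   # most recent reference samples
        est = w @ x                   # predicted artifact sample
        err = eeg[t] - est            # cleaned sample = prediction error
        w += mu * err * x             # LMS weight update
        out[t] = err
    return out

# Synthetic demonstration: a ~72 bpm cardiac-like artifact on top of noise
fs = 250
t = np.arange(10 * fs) / fs
bcg = np.sin(2 * np.pi * 1.2 * t)
eeg = 0.5 * np.random.randn(t.size) + bcg   # contaminated EEG channel
ref = bcg + 0.05 * np.random.randn(t.size)  # reference-layer channel
print(np.std(eeg), np.std(lms_clean(eeg, ref)))  # variance drops after cleaning
```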
The Mozart effect refers to scientific data on short-term improvement on certain mental tasks after listening to Mozart, and also to its popularized version, that listening to Mozart makes you smarter (Tomatis, 1991; Wikipedia, 2012). Does the Mozart effect point to a fundamental cognitive function of music? Would such an effect of music be due to hedonicity, a fundamental dimension of mental experience? The present paper explores a recent hypothesis that music helps to tolerate cognitive dissonances and thus enabled the accumulation of knowledge and human cultural evolution (Perlovsky, 2010, 2012). We studied whether the influence of music is related to its hedonicity and whether pleasant or unpleasant music would influence scholarly test performance and cognitive dissonance. The specific hypotheses evaluated here are that, during a test, students experience contradictory cognitions that cause cognitive dissonances. If some music helps to tolerate cognitive dissonances, then, first, this music should increase the duration during which participants can tolerate stressful conditions while evaluating test choices; second, this should result in improved performance. These hypotheses are tentatively confirmed in the reported experiments: agreeable music was correlated with better performance than indifferent or unpleasant music. It follows that music likely performs a fundamental cognitive function, explaining the origin and evolution of musical ability, previously considered a mystery.
