Moodo data fusion model

[Published paper] The Moodo dataset: Integrating user context with emotional and color perception of music for affective music information retrieval

My colleagues and I have recently published a scientific paper that presents the Moodo dataset and provides a detailed analysis of its multimodal data. The dataset includes user background information along with music preferences and musical knowledge. To illustrate a direction for the future development of recommender systems, we applied a data fusion model that incorporates user data, showing how personalized systems can improve recommendations.

The paper is available here: http://www.tandfonline.com/doi/abs/10.1080/09298215.2017.1333518

The effects of different emotion combinations on the perception of musical emotions for Anticipation, Liveliness and Tension. The heatmap shows the differences in the valence-arousal ratings as influenced by the associated negative (Anger, Fear) and positive (Happiness, Joy) emotions.

Abstract

This paper presents a new multimodal dataset Moodo that can aid the development of affective music information retrieval systems. Moodo’s main novelties are a multimodal approach that links emotional and color perception to music and the inclusion of user context. Analysis of the dataset reveals notable differences in emotion-color associations and their valence-arousal ratings in non-music and music context. We also show differences in ratings of perceived and induced emotions, especially for those with perceived negative connotation, as well as the influence of genre and user context on perception of emotions. By applying an intermediate data fusion model, we demonstrate the importance of user profiles for predictive modeling in affective music information retrieval scenarios.
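The intermediate data fusion mentioned above can be illustrated with a minimal sketch: features from different modalities (e.g. audio descriptors and user-profile attributes) are concatenated into one representation before a single model is fit, rather than combining the outputs of separate per-modality models. The feature names, dimensions, and regression model below are all illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of intermediate (feature-level) data fusion.
# All data here is synthetic; dimensions and names are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100                              # number of (user, song) rating events
audio = rng.normal(size=(n, 8))      # e.g. audio descriptors per song
user = rng.normal(size=(n, 4))       # e.g. user-profile features
# Synthetic valence target depending on both modalities.
y = audio @ rng.normal(size=8) + user @ rng.normal(size=4)

# Intermediate fusion: concatenate modality features BEFORE fitting
# one model (as opposed to late fusion, which merges predictions).
X = np.hstack([audio, user])

# Ridge regression in closed form on the fused representation.
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
pred = X @ w
print("fused-model R^2:", 1 - np.var(y - pred) / np.var(y))
```

Because the fused model sees user and audio features jointly, it can learn interactions between user context and musical content, which is the motivation for including user profiles in the predictive models discussed in the paper.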
