Item title:
A bimodal deep model to capture emotions from music tracks
This work aims to develop a deep model for automatically labeling music tracks in terms of induced emotions. The machine learning architecture consists of two components: one dedicated to lyrics processing based on Natural Language Processing (NLP) and another devoted to music processing. These two components are combined at the decision level. To achieve this, a range of neural networks is explored for the task of emotion extraction from both lyrics and music. For lyrics classification, three architectures are compared, i.e., a 4-layer neural network, FastText, and a transformer-based approach. For music classification, the architectures investigated include InceptionV3, a collection of models from the ResNet family, and a joint architecture combining Inception and ResNet. SVM serves as a baseline for both modalities. The study explores three datasets of songs accompanied by lyrics, with MoodyLyrics4Q selected and preprocessed for model training. The bimodal approach, incorporating both the lyrics and audio modules, achieves a classification accuracy of 60.7% in identifying emotions evoked by music pieces. The MoodyLyrics4Q dataset used in this study encompasses musical pieces spanning diverse genres, including rock, jazz, electronic, pop, blues, and country. The algorithms demonstrate reliable performance across the dataset, highlighting their robustness in handling a wide variety of musical styles.
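The abstract describes combining the lyrics and audio branches at the decision level. Below is a minimal sketch of such decision-level (late) fusion, assuming each branch outputs class probabilities over the four MoodyLyrics4Q emotion quadrants; the class labels, fusion weight, and branch outputs shown here are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Assumed four-quadrant emotion labels (illustrative, not quoted from the paper).
CLASSES = ["happy", "angry", "sad", "relaxed"]

def fuse_decisions(p_lyrics: np.ndarray, p_audio: np.ndarray,
                   w_lyrics: float = 0.5) -> str:
    """Decision-level fusion: a weighted average of the class probabilities
    produced by the lyrics branch and the audio branch, followed by argmax."""
    p = w_lyrics * p_lyrics + (1.0 - w_lyrics) * p_audio
    return CLASSES[int(np.argmax(p))]

# Made-up branch outputs for a single track.
p_lyrics = np.array([0.10, 0.05, 0.60, 0.25])  # e.g. a transformer over the lyrics
p_audio = np.array([0.20, 0.10, 0.45, 0.25])   # e.g. a ResNet over audio features
print(fuse_decisions(p_lyrics, p_audio))       # -> "sad"
```

The same pattern works regardless of which concrete lyrics or music classifier supplies the probability vectors, which is the appeal of fusing at the decision level rather than at the feature level.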
Record prepared with funds from the Ministry of Science and Higher Education (MNiSW), agreement no. POPUL/SP/0154/2024/02, under the "Społeczna odpowiedzialność nauki II" (Social Responsibility of Science II) programme, module: Popularisation of Science (2025).