Deezer trained the AI using raw audio signals, linguistic context reconstruction models and the Million Song Dataset (MSD), which aggregates Last.fm tags describing tunes (such as "calm" or "sad"). The researchers mapped the MSD to Deezer's library using song metadata, extracting individual words from the lyrics in the process. The result was an 18,644-song database the team could use both to train the AI on song moods and to test its theories.
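To make the mapping step concrete, here's a minimal sketch of that kind of metadata join. The artist/title matching key, the tag values and all track names are assumptions for illustration; the researchers' actual matching procedure may differ.

```python
# Hypothetical excerpt of Last.fm-style mood tags from the MSD,
# keyed by (artist, title) metadata.
msd_tags = {
    ("Artist A", "Song One"): ["calm", "mellow"],
    ("Artist B", "Song Two"): ["sad"],
    ("Artist C", "Song Three"): ["energetic"],
}

# Hypothetical excerpt of a streaming catalog with lyrics available.
catalog = [
    {"artist": "Artist A", "title": "Song One", "lyrics": "soft morning light"},
    {"artist": "Artist B", "title": "Song Two", "lyrics": "tears in the rain"},
    {"artist": "Artist D", "title": "Song Four", "lyrics": "no tags for me"},
]

def build_mood_dataset(catalog, msd_tags):
    """Keep only catalog tracks that match an MSD entry by metadata,
    splitting their lyrics into individual words along the way."""
    dataset = []
    for track in catalog:
        key = (track["artist"], track["title"])
        if key in msd_tags:
            dataset.append({
                "artist": track["artist"],
                "title": track["title"],
                "words": track["lyrics"].split(),
                "moods": msd_tags[key],
            })
    return dataset

dataset = build_mood_dataset(catalog, msd_tags)
print(len(dataset))  # 2 tracks matched; "Song Four" has no MSD tags
```

Tracks without a metadata match simply drop out, which is why the final training set (18,644 songs) is far smaller than either source.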
The system is merely average at detecting the mood of a song based on lyrics alone. However, combining audio with lyrics helped it gauge the energy of a given piece more effectively than past techniques. This could help identify the difference between a soothing downtempo piece and an upbeat dance track, for example.
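One simple way to combine the two signals, sketched below, is late fusion: score a track's energy from the audio and from the lyrics separately, then blend the scores. The weighting and the [0, 1] score range are assumptions for illustration, not the researchers' actual architecture, which is a deep neural network rather than a weighted average.

```python
def fuse_energy(audio_score, lyric_score, audio_weight=0.7):
    """Toy late fusion: blend separate audio and lyric energy scores,
    weighting audio more heavily. Scores are assumed to lie in [0, 1]."""
    return audio_weight * audio_score + (1 - audio_weight) * lyric_score

# A soothing downtempo piece vs. an upbeat dance track.
downtempo = fuse_energy(audio_score=0.2, lyric_score=0.3)
dance = fuse_energy(audio_score=0.9, lyric_score=0.6)
print(downtempo < dance)  # True: the dance track scores higher energy
```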
The technology isn't ready for use in services like Deezer just yet. The research group wants to explore different training models (such as an unsupervised system that learns from huge volumes of unlabeled data) to improve its accuracy. You can see where this might go, mind you. Deezer could automatically generate playlists that cater to a wide variety of moods without having to tag every song by hand. You could listen to songs that are just upbeat enough to improve your spirits, or slow enough to set you at ease without putting you to sleep.
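Given per-track mood predictions, that kind of playlist becomes a simple filter over score ranges. The sketch below assumes each track carries model-predicted energy and positivity scores in [0, 1]; the track names and thresholds are invented for illustration.

```python
# Hypothetical per-track mood scores; in a real service these would
# come from the trained model rather than hand-applied tags.
tracks = [
    {"title": "Sunrise", "energy": 0.55, "positivity": 0.8},
    {"title": "Stormcloud", "energy": 0.9, "positivity": 0.2},
    {"title": "Lullaby", "energy": 0.1, "positivity": 0.6},
]

def mood_playlist(tracks, min_energy, max_energy, min_positivity=0.0):
    """Select tracks whose predicted mood falls inside the requested
    band, e.g. 'upbeat enough to improve your spirits'."""
    return [t["title"] for t in tracks
            if min_energy <= t["energy"] <= max_energy
            and t["positivity"] >= min_positivity]

# Gently upbeat: moderate energy, positive mood.
print(mood_playlist(tracks, 0.4, 0.7, min_positivity=0.5))  # ['Sunrise']
```

A "set you at ease without putting you to sleep" playlist would just use a lower, narrower energy band with the same function.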