Language Models for Music Recommendation
- Level: beginner
- Room: terrace 2a
- Start:
- Duration: 30 minutes
Abstract
Music streaming services like Spotify and YouTube are famous for their recommendation systems, and each service takes a unique approach to recommending and personalizing content. While most users are happy with the recommendations provided, a section of users is curious about how and why a certain track is recommended. Complex recommendation systems take into account various factors such as track metadata, user metadata, and play counts, along with the track content itself.
Inspired by Andrej Karpathy's approach of building your own GPT, we use Language Models to build our own music recommendation system.
Description
Music streaming services like Spotify and YouTube are famous for their recommendation systems, and each service takes a unique approach to recommending and personalizing content. While most users are happy with the recommendations provided, a section of users is curious about how and why a certain track is recommended. Complex recommendation systems take into account various factors such as track metadata, user metadata, and play counts, along with the track content itself.
As music aficionados who love the techno, trance, deep house, and classical genres, we want to answer the following questions:
- Can we analyze the signals from the song track and identify the different instruments used?
- How can we create embeddings of all the tracks and index them for further analysis? (See the sketch after this list.)
- How do we create a simple user interface to pick a song track, retrieve relevant embeddings from a section of the track, and get recommendations based on just the music content?
- As a side effect, can we retrieve similar sections of music across various tracks?
- Using audio LMs, can we generate high-quality music based on the embeddings?
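The embedding and retrieval questions above can be sketched in a few lines of Python. This is only an illustrative sketch rather than the talk's actual pipeline: it assumes librosa MFCC statistics as a stand-in for learned audio-LM embeddings, FAISS as the vector index, and hypothetical track file names.

```python
# Sketch: embed sections of tracks and retrieve similar ones by content alone.
# Assumptions: MFCC statistics stand in for learned audio embeddings,
# FAISS serves as the vector index, and the track file names are hypothetical.
import numpy as np
import librosa
import faiss


def embed_section(path: str, offset: float = 0.0, duration: float = 10.0) -> np.ndarray:
    """Embed one section of a track as the mean/std of its MFCCs."""
    y, sr = librosa.load(path, sr=22050, mono=True, offset=offset, duration=duration)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    vec = np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]).astype("float32")
    return vec / (np.linalg.norm(vec) + 1e-9)  # unit-normalise for cosine similarity


# Index one section per track (a real system would index many sections per track).
tracks = ["techno_01.wav", "trance_07.wav", "deep_house_03.wav"]
embeddings = np.stack([embed_section(t) for t in tracks])
index = faiss.IndexFlatIP(embeddings.shape[1])  # inner product == cosine on unit vectors
index.add(embeddings)

# Recommend by content only: find the indexed sections closest to a query section.
query = embed_section("classical_12.wav")
scores, ids = index.search(query[None, :], k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{tracks[i]}  similarity={score:.3f}")
```

Swapping the embedding function for an audio language model and indexing many overlapping sections per track turns the same structure into the content-based recommendation and similar-section retrieval described above.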