Interested in learning more about the project's underlying implementation? Below, we answer some of the questions you might have. For a deeper dive, take a look at our source code on GitHub.
When you begin playing on the piano, we keep track of the notes you've played. Each note is represented by a series of numbers encoding information such as its pitch and the time it was played. When the AI starts playing, this sequence is fed into the model, which continues the music from where you left off.
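As a rough illustration, the recorded data might look something like the following. This is a minimal sketch, not the project's exact schema; the `recordNote` helper and the field names are hypothetical, chosen to mirror the pitch-and-timing description above.

```javascript
// Notes played so far, each stored as plain numbers:
// a MIDI pitch plus start/end times in seconds.
const playedNotes = [];

// Hypothetical helper, called when a key is pressed and released.
function recordNote(pitch, startTime, endTime) {
  playedNotes.push({ pitch, startTime, endTime });
}

recordNote(60, 0.0, 0.5); // middle C, held for half a second
recordNote(64, 0.5, 1.0); // the E above it

// This array of numbers is the sequence handed to the model,
// quantized or encoded however the model expects.
```

The key point is that the music is reduced entirely to numbers before the model ever sees it.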
The entire system is powered by a Recurrent Neural Network (RNN). An RNN is a type of neural network that can make predictions based on prior inputs in a sequence. When the AI is given the sequence of past notes, it reads in this information and outputs a probability for each possible note, representing what the model thinks should be played next. We then sample a note based on these probabilities. Typically, the note with the highest probability will be played, but to improve the creativity of the model, we sometimes pick a less likely note.
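The sampling step above can be sketched as follows. This is a generic temperature-sampling routine, not Magenta's internal code: `sampleNote` and its `temperature` parameter are illustrative names. A higher temperature flattens the distribution, making less likely notes more probable, which is one common way to trade accuracy for creativity.

```javascript
// probs: the model's output distribution over candidate notes (sums to 1).
// temperature: > 1 flattens the distribution, < 1 sharpens it.
function sampleNote(probs, temperature = 1.0) {
  // Re-weight each probability by the temperature.
  const weights = probs.map(p => Math.pow(p, 1 / temperature));
  const total = weights.reduce((a, b) => a + b, 0);

  // Draw a random point and walk the cumulative distribution.
  let r = Math.random() * total;
  for (let i = 0; i < weights.length; i++) {
    if (r < weights[i]) return i;
    r -= weights[i];
  }
  return weights.length - 1; // guard against floating-point drift
}
```

With a low temperature the most probable note wins almost every time; raising it lets the occasional surprising note through.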
The front-end user interface of the project was built with HTML and CSS, using Bootstrap 5 as the CSS framework. Tone.js plays the sounds made by the piano, and Magenta.js provides the RNN model. The model was trained on professional piano performances from the Yamaha e-Piano competition.