On November 17th, I attended a seminar at Princeton University called “Diving into TensorFlow 2.0”. TensorFlow is a library available for Python (and other programming languages) that simplifies building and training machine learning models. In addition, I was introduced to some important resources that will assist me in bringing music into machine learning. Ultimately, I learned a lot about the basics of machine learning and certain key concepts that will be useful in my thesis.
After the presentation, I talked to the presenter, Josh Gordon, who works for Google and teaches deep learning at Columbia University. He said he personally has not done much work combining music and machine learning, but he did point me to Magenta, a TensorFlow-based library for music and art generation, along with a collection of projects that use it: https://magenta.tensorflow.org/
I found many cool projects on this website, but one similar to mine, MidiMe, stood out: https://midi-me.glitch.me/
After playing with it for a bit, I noticed that its attempts at making songs similar to the input songs were far from perfect. I think the reason for this is that it does not take direct user feedback on how accurate each result is. Adding that feedback would help make a program like this more accurate, since accuracy is hard to achieve with a limited data set and no user input. Another way I thought of to make a program more accurate is giving it some basic music theory, for example knowledge of song structure and maybe commonly used scales, as in the sketch below.
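To make the scale idea a little more concrete, here is a rough sketch in plain Python of what I have in mind. It is not how Magenta or MidiMe actually work; the SCALES table and the snap_to_scale helper are just my own hypothetical example of nudging generated MIDI pitches onto a chosen scale after the model produces them.

```python
# Hypothetical sketch: constrain generated MIDI pitches to a chosen scale.
# Scale definitions are pitch classes relative to the tonic (0 = tonic).
SCALES = {
    "major":      [0, 2, 4, 5, 7, 9, 11],
    "minor":      [0, 2, 3, 5, 7, 8, 10],
    "pentatonic": [0, 2, 4, 7, 9],
}

def snap_to_scale(midi_pitch, tonic=60, scale="major"):
    """Move a MIDI pitch to the nearest note in the given scale."""
    allowed = SCALES[scale]
    pitch_class = (midi_pitch - tonic) % 12
    # Pick the allowed pitch class with the smallest distance,
    # wrapping around the octave so 11 is considered close to 0.
    nearest = min(
        allowed,
        key=lambda pc: min(abs(pc - pitch_class), 12 - abs(pc - pitch_class)),
    )
    shift = nearest - pitch_class
    # Take the shorter direction around the octave.
    if shift > 6:
        shift -= 12
    elif shift < -6:
        shift += 12
    return midi_pitch + shift

if __name__ == "__main__":
    generated = [60, 61, 63, 66, 70]  # pretend these came from the model
    cleaned = [snap_to_scale(p, tonic=60, scale="major") for p in generated]
    print(cleaned)  # [60, 60, 62, 65, 69] -- every note now sits in C major
```

User feedback could work in a similar post-processing spirit: ratings on each generated song could be collected and used to weight or filter future training examples, but that part is still just an idea.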
I won’t lie: seeing “MidiMe” did make my project seem a lot more daunting. But if I go about developing it differently, I think I can still come up with a good result.
