In the past week, I was finally able to get my program to generate a song with more than one note. Using the AC/DC songs Hell's Bells and Back in Black, it generated this song: https://clyp.it/k32srox2?token=fbd7d03bf3eacbb19e0a2b459dd2cf52
I noticed that in this case (and in a few other songs I generated) the output starts off somewhat randomly and then settles into a single riff. I have an idea of why this happens, so I will look into it.
My next goal is to continue transcribing AC/DC songs, since a sample size of only two songs makes the output sound almost identical to those two songs. However, I am also considering a different approach: instead of feeding the program whole songs, I could feed it individual riffs or melodies and let it piece those riffs together into a song, since the modern music I've been using is more repetitive than other styles. A rough sketch of that idea follows below.
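To show what I mean by piecing riffs together, here is a minimal sketch. The riff names, note lists, and song structures are hypothetical placeholders, and the simple first-order Markov chain over riff labels is just one possible way to do it, not necessarily the approach my program will actually use.

```python
import random

# Hypothetical hand-transcribed riffs (names and notes are placeholders).
riffs = {
    "riff_a": ["E2", "B2", "E3", "G3"],
    "riff_b": ["A2", "C3", "E3"],
    "riff_c": ["D3", "F#3", "A3"],
}

# Each input song is described only by the order its riffs appear in.
songs = [
    ["riff_a", "riff_a", "riff_b", "riff_a", "riff_c"],
    ["riff_b", "riff_a", "riff_a", "riff_c", "riff_a"],
]

# Learn which riff tends to follow which (a first-order Markov chain
# over riff labels rather than over individual notes).
transitions = {}
for song in songs:
    for current, nxt in zip(song, song[1:]):
        transitions.setdefault(current, []).append(nxt)

def generate_song(length=8):
    """Piece riffs together by walking the learned riff transitions."""
    current = random.choice(songs[0])
    structure = [current]
    for _ in range(length - 1):
        current = random.choice(transitions.get(current, list(riffs)))
        structure.append(current)
    # Flatten the riff sequence back into a single list of notes.
    return [note for label in structure for note in riffs[label]]

print(generate_song())
```

The appeal of working at the riff level is that repetition comes for free: the generated song reuses whole riffs the way the input songs do, instead of wandering note by note.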
I also gave an early version of my elevator pitch in class, which went something like this:
“My project is a program that is able to create music based on influence from the music scores available to it.”
Based on feedback from my peers in class and from friends outside of class, I realized that I needed to be more specific about what it does and how it works. Here is my new and improved elevator pitch:
“My project is a machine learning program that takes songs of a certain style as input, learns what makes them unique, and generates a song based on that input. For example, you can give the program some scores of Beatles songs. It will figure out what features make all of those Beatles songs similar using an algorithmic process, and it will then generate a song that utilizes these features. The result is what a Beatles song would sound like if they were still together.”