Generating Blues Solos with Variational Autoencoders
This thesis describes the design and implementation of a variational autoencoder that generates blues solos. The architecture of the variational autoencoder is capable of capturing long-term dependencies in musical data, which is verified in experiments. A dataset of MIDI solos was manually extracted from a corpus of MIDI songs in the blues genre and used to train a Long Short-Term Memory Recurrent Neural Network. Tools for extracting musical information from MIDI files into a format suitable for training a network were designed, implemented, and verified. Results show that the network is able to generate solos that differ significantly from the training data. Some of the generated solos can be mistaken for real solos, and a few even outperform real solos in certain aspects. However, in most cases, limitations of the system that cause losses of musical information, together with the limited dataset, inhibit the system's ability to produce solos that are perceived as blues.
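The MIDI-extraction step described above can be illustrated with a minimal sketch. This is not the thesis's actual tooling; the function name, the encoding (one integer token per grid step, 128 MIDI pitches plus a "rest" symbol), and the quantization grid are all assumptions, shown only as one common way of turning note events into a sequence a recurrent network can train on.

```python
# Hypothetical sketch of MIDI-to-training-data extraction:
# quantize note events onto a fixed time grid and encode each
# step as an integer token (128 MIDI pitches + one rest symbol),
# a piano-roll-style encoding often used for sequence models.

REST = 128  # token for a grid step where no note is sounding

def events_to_tokens(events, step=0.25, length=None):
    """events: list of (pitch, start_beat, duration_beats) tuples.
    Returns one token per grid step (monophonic: later notes win)."""
    if length is None:
        length = max(s + d for _, s, d in events) if events else 0.0
    n_steps = int(round(length / step))
    tokens = [REST] * n_steps
    for pitch, start, dur in events:
        lo = int(round(start / step))
        hi = int(round((start + dur) / step))
        for i in range(lo, min(hi, n_steps)):
            tokens[i] = pitch
    return tokens

# A three-note phrase: C4 and Eb4 for half a beat each, then F4 held.
solo = [(60, 0.0, 0.5), (63, 0.5, 0.5), (65, 1.0, 1.0)]
print(events_to_tokens(solo, step=0.5))  # → [60, 63, 65, 65]
```

Such a tokenization discards velocity and fine timing, which is one source of the "losses in musical information" the abstract mentions.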