
dc.contributor.advisor: Gambäck, Björn
dc.contributor.author: Sandberg, Eirik Katnosa
dc.date.accessioned: 2018-10-30T15:00:22Z
dc.date.available: 2018-10-30T15:00:22Z
dc.date.created: 2018-06-27
dc.date.issued: 2018
dc.identifier: ntnudaim:20020
dc.identifier.uri: http://hdl.handle.net/11250/2570236
dc.description.abstract: This thesis describes the design and implementation of a variational autoencoder that generates blues solos. The architecture of the variational autoencoder is capable of capturing long-term dependencies in musical data, which is verified in experiments. A dataset of MIDI solos was manually extracted from a corpus of MIDI songs in the blues genre and used to train a Long Short-Term Memory Recurrent Neural Network. Tools for extracting musical information from MIDI files into a format suitable for training a network were designed, implemented, and verified. Results show that the network is able to generate solos with significant variation from the training data. Some of the generated solos can be mistaken for real solos, and a few even outperform real solos in certain aspects. However, in most cases, limitations of the system that lead to losses of musical information, together with a limited dataset, inhibit the system's ability to produce solos that are perceived as blues.
dc.language: eng
dc.publisher: NTNU
dc.subject: Datateknologi (2 årig), Digital virksomhetsutvikling
dc.title: Generating Blues Solos with Variational Autoencoders
dc.type: Master thesis
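
The abstract describes an LSTM-based variational autoencoder trained on note sequences extracted from MIDI solos. As a rough illustration only, below is a minimal sketch of such a model; the framework (PyTorch), the layer sizes, and the integer-token note representation are assumptions made for illustration and are not taken from the thesis.

# Minimal sketch of an LSTM-based variational autoencoder for note sequences.
# Assumptions (not from the thesis): framework, hidden sizes, latent dimension,
# and the representation of a solo as a sequence of integer note tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LstmVae(nn.Module):
    def __init__(self, vocab_size=128, embed_dim=64, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.from_latent = nn.Linear(latent_dim, hidden_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        x = self.embed(tokens)                      # (batch, seq, embed)
        _, (h, _) = self.encoder(x)                 # final hidden state summarises the solo
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        h0 = torch.tanh(self.from_latent(z)).unsqueeze(0)        # seed the decoder with z
        c0 = torch.zeros_like(h0)
        dec, _ = self.decoder(x, (h0, c0))          # teacher forcing on the input sequence
        logits = self.out(dec)                      # per-step distribution over note tokens
        return logits, mu, logvar

def vae_loss(logits, targets, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon = F.cross_entropy(logits.transpose(1, 2), targets)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Example: a batch of 8 solos, each 64 note tokens long (random data for illustration).
tokens = torch.randint(0, 128, (8, 64))
model = LstmVae()
logits, mu, logvar = model(tokens)
loss = vae_loss(logits, tokens, mu, logvar)
loss.backward()

Sampling z from the prior and running the decoder on it is what would produce new solos; the sketch above only covers the training pass.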


Files in this item


This item appears in the following collection(s)
