Show simple item record

dc.contributor.advisor: Downing, Keith
dc.contributor.advisor: Bach, Kerstin
dc.contributor.advisor: Misimi, Ekrem
dc.contributor.author: Måløy, Håkon
dc.date.accessioned: 2023-01-20T14:22:23Z
dc.date.available: 2023-01-20T14:22:23Z
dc.date.issued: 2023
dc.identifier.isbn: 978-82-326-5263-1
dc.identifier.issn: 2703-8084
dc.identifier.uri: https://hdl.handle.net/11250/3045018
dc.description.abstract: Ever since the third spring of artificial intelligence a decade ago, representation learning through deep neural networks has been the dominating approach for most research in machine learning. However, typical deep neural networks in use today are applied to narrow tasks with highly controlled and well-defined environments. For deep neural networks to be truly useful for real-world applications, they should be able to operate in and to model complex, highly dynamic and temporally dependent events and phenomena. In this thesis we investigate how effective neural representations, suitable for real-world applications, can be learned. We first explore how learned neural representations can benefit from including and then increasing temporal processing capabilities in deep neural networks. Finding a positive correlation between increasing temporal processing capabilities and performance, we then investigate how self-supervised learning can be leveraged for real-world temporal applications. We find that self-supervised learning enables deep neural networks to learn superior neural representations over their supervised counterparts by utilizing underlying structure in real-world temporal data. Finally, we investigate how the learned neural representations can be utilized outside the neural network to gain new insight into real-world application domains. We find that the learned neural representations contain rich information that can inform decisions in a multitude of application domains. Our results could inspire further investigation into how researchers can learn from the neural representations learned by deep neural networks applied to real-world applications.
dc.language.iso: eng
dc.publisher: NTNU
dc.relation.ispartofseries: Doctoral theses at NTNU;2023:6
dc.title: Learning neural representations for the processing of temporal data in deep neural networks
dc.type: Doctoral thesis
dc.subject.nsi: VDP::Teknologi: 500
dc.description.localcode: Fulltext not available

