Learning neural representations for the processing of temporal data in deep neural networks
Abstract
Ever since the third spring of artificial intelligence a decade ago, representation learning through deep neural networks has been the dominant approach for most research in machine learning. However, typical deep neural networks in use today are applied to narrow tasks with highly controlled and well-defined environments. For deep neural networks to be truly useful for real-world applications, they should be able to operate in and to model complex, highly dynamic, and temporally dependent events and phenomena. In this thesis we investigate how effective neural representations, suitable for real-world applications, can be learned. We first explore how learned neural representations can benefit from including, and then increasing, temporal processing capabilities in deep neural networks. Finding a positive correlation between increased temporal processing capabilities and performance, we then investigate how self-supervised learning can be leveraged for real-world temporal applications. We find that self-supervised learning enables deep neural networks to learn superior neural representations over their supervised counterparts by utilizing the underlying structure in real-world temporal data. Finally, we investigate how the learned neural representations can be utilized outside the neural network to gain new insight into real-world application domains. We find that the learned neural representations contain rich information that can inform decisions in a multitude of application domains. Our results could inspire further investigation into how researchers can learn from the neural representations learned by deep neural networks applied to real-world applications.