Information Theory for Analyzing Neural Networks
Master's thesis
Permanent link: http://hdl.handle.net/11250/253759
Publication date: 2014

Abstract
The goal of this thesis was to investigate how information theory can be used to analyze artificial neural networks. For this purpose, two problems were considered: a classification problem and a controller problem. The classification problem was solved with a feedforward neural network trained with backpropagation; the controller problem was solved with a continuous-time recurrent neural network optimized with an evolutionary algorithm.

Results from the classification problem show that mutual information might indicate how much a particular neuron contributes to the classification. Tracking these neurons' mutual information during training might serve as an indicator of their progression, including for neurons in the hidden layers.

Results from the controller problem showed that time-delayed mutual information between a neuron and an environment variable might indicate which variable each neuron is estimating, and tracking this during evolution might tell us when that neuron started taking on this role. Furthermore, unrolled transfer entropy appears to be a good measure of how neurons affect each other during simulation.
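As a rough illustration of the central quantity above (a generic plug-in estimator over discretized values, not the thesis's own implementation), mutual information in bits between two sequences can be sketched as follows; the function and variable names are hypothetical:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))  # joint counts
    px = Counter(xs)            # marginal counts for X
    py = Counter(ys)            # marginal counts for Y
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) ), written with counts
        mi += (c / n) * log2(c * n / (px[x] * py[y]))
    return mi

# Perfectly dependent binary variables carry 1 bit of information:
print(mutual_information([0, 1, 0, 1], [0, 1, 0, 1]))  # → 1.0

# Independent variables carry none:
print(mutual_information([0, 0, 1, 1], [0, 1, 0, 1]))  # → 0.0
```

The time-delayed variant used for the controller problem would simply pair a neuron's activation at time t with an environment variable at time t + k, e.g. `mutual_information(neuron[:-k], env[k:])` for lag k, where continuous activations are first binned into discrete symbols.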