Remembering Past States using Long Short-Term Memory Neural Networks
This project explores the ability of Recurrent Neural Networks (RNNs) to memorize previous input states in time series problems. A type of RNN called Long Short-Term Memory (LSTM), which is designed specifically to handle long-term dependencies in input data, is compared against recurrent multi-layer perceptrons (recurrent MLPs) on two non-trivial time series problems, each of which requires a varying number of previous events to be remembered in order to predict the next state. The first problem is artificial grammar learning: learning a randomly generated grammar solely from a series of sample strings produced by a nondeterministic symbol-producing automaton. The second problem is to train an agent, by reinforcement learning, to play a modified version of the computer game Flappy Bird. The results show that LSTM outperforms standard recurrent MLPs on the artificial grammar learning task, since it can retain past states of the time series stream for several timesteps after they have occurred, without degradation. The LSTM-based agent also scores substantially higher in the Flappy Bird game than both a feedforward and a recurrent multi-layer perceptron (MLP) based agent, possibly because it is more resistant to variations in the input.
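The long-term memory property described above comes from the LSTM cell's additive state update: a forget gate and an input gate decide what to keep and what to write, so information can persist across many timesteps without degrading. The following is a minimal sketch of a single LSTM cell's forward pass in NumPy; the layer sizes, weight initialization, and function names are illustrative assumptions, not taken from the project itself.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM timestep (hypothetical minimal formulation).

    W maps the concatenated [x; h] to the four gate pre-activations,
    stacked as [input gate, forget gate, output gate, candidate].
    """
    z = W @ np.concatenate([x, h]) + b
    n = h.size
    i = sigmoid(z[0:n])          # input gate: how much new info to write
    f = sigmoid(z[n:2 * n])      # forget gate: how much old state to keep
    o = sigmoid(z[2 * n:3 * n])  # output gate: how much state to expose
    g = np.tanh(z[3 * n:4 * n])  # candidate cell update
    c_new = f * c + i * g        # additive update: enables long-term memory
    h_new = o * np.tanh(c_new)   # hidden state passed to the next timestep
    return h_new, c_new

# Run the cell over a short random input sequence (assumed toy sizes).
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.standard_normal((4 * n_hid, n_in + n_hid)) * 0.1
b = np.zeros(4 * n_hid)

h = np.zeros(n_hid)
c = np.zeros(n_hid)
for t in range(10):
    x = rng.standard_normal(n_in)
    h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)
```

Because the cell state `c` is updated additively rather than being rewritten through a squashing nonlinearity at every step, gradients and remembered values can survive many timesteps, which is the property the grammar-learning comparison exercises.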