Deep Reservoir Computing Using Cellular Automata
Abstract
Recurrent Neural Networks (RNNs) are a prominent concept within artificial intelligence. Inspired by Biological Neural Networks (BNNs), they provide an intuitive model of how BNNs operate. Derived from the more general Artificial Neural Networks, RNNs are suited to temporal tasks such as speech recognition because they are capable of memorizing historic input. However, their recurrent nature makes RNNs very time-consuming to train. Echo State Networks and Liquid State Machines have been proposed as RNN alternatives under the name of Reservoir Computing (RC); RC systems are far easier to train. In this thesis, a Cellular Automata (CA) based Reservoir Computing (ReCA) system is implemented. Methods to map both binary and non-binary input data onto automata are employed, in addition to a recurrent architecture that handles sequential input. Furthermore, several ReCA systems are orchestrated in layers (DeepReCA), where the input to layer l is the output of layer l-1. The DeepReCA system is benchmarked on the 5-bit and 20-bit memory tasks (classic long short-term memory benchmarks) and on the Japanese Vowels time-series classification dataset, and the results are compared to the state of the art. Subsequent layers were found to improve upon previous layers, though the improvement was observed to level off. CA rules affected reservoir dynamics differently: some proved better in the first layer, and some proved better in subsequent layers. Some results came close to state-of-the-art performance, making the proposed system a viable option when a reduced memory footprint is worth a modest loss in accuracy. Further tuning of system parameters, as well as the design of a more advanced input-encoding stage, is suggested as future work.
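
What follows is a minimal, self-contained Python sketch of the layered ReCA idea summarized above, assuming elementary CAs with periodic boundaries. The rule number (90), reservoir width, iteration count, and the random projection standing in for each layer's trained readout are illustrative assumptions, not the configuration used in the thesis.

import numpy as np

def ca_step(state, rule=90):
    """One synchronous update of an elementary CA with periodic boundaries."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = left * 4 + state * 2 + right      # each cell's 3-cell neighborhood as a 3-bit index
    table = (rule >> np.arange(8)) & 1      # Wolfram rule number as a lookup table
    return table[idx]

def reservoir(x, iterations=16, rule=90):
    """Evolve a binary input through the CA and concatenate all states
    into one feature vector (the reservoir's expanded encoding)."""
    states = [x]
    for _ in range(iterations):
        states.append(ca_step(states[-1], rule))
    return np.concatenate(states)

def deep_reca(x, n_layers=3, rule=90, seed=0):
    """Stack ReCA layers: the (binarized) output of layer l-1 is the input to layer l."""
    rng = np.random.default_rng(seed)
    features = []
    for _ in range(n_layers):
        f = reservoir(x, rule=rule)
        features.append(f)
        # Stand-in for each layer's trained linear readout: a fixed random
        # projection, binarized back to the input width, feeds the next layer.
        W = rng.standard_normal((x.size, f.size))
        x = (W @ f > 0).astype(int)
    return np.concatenate(features)

if __name__ == "__main__":
    x = np.random.default_rng(1).integers(0, 2, size=32)
    print(deep_reca(x).shape)   # concatenated features from all layers

In the actual system, each layer's readout would be trained (for example, by linear regression) on the concatenated CA states; the random projection here only mimics the data flow between layers.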