
dc.contributor.advisor: Downing, Keith
dc.contributor.author: Berg, William Peer
dc.date.accessioned: 2016-09-19T14:01:09Z
dc.date.available: 2016-09-19T14:01:09Z
dc.date.created: 2016-06-09
dc.date.issued: 2016
dc.identifier: ntnudaim:14029
dc.identifier.uri: http://hdl.handle.net/11250/2408438
dc.description.abstract: In recent years, the possible applications of artificial intelligence (AI) and deep learning have increased drastically. However, the algorithms that constitute the learning mechanisms in deep learning are based largely on the same principles as when they were formalised about half a century ago: feed-forward back-propagation (FFBP) and gradient-based techniques are used to train artificial neural networks (ANNs). When an FFBP ANN is trained on a novel domain, the training largely, and quite rapidly, disrupts the information that was formerly stored in the network. This phenomenon, called catastrophic interference, or catastrophic forgetting, remains a long-standing issue in the field. One architecture addressing this issue is the dual-network memory architecture, which, by modelling two fundamental aspects of memory acquisition in neural networks, namely short- and long-term memory, reduces or eliminates catastrophic forgetting and also suggests biological implications. However, former implementations reduce catastrophic forgetting by employing pseudorehearsal, implicitly re-training on the former weight configuration. While this provides a means of interleaving the former information with the new, it remains a somewhat unrealistic training scheme. To address these issues within the dual-network memory architecture, this thesis implements a more biologically plausible dual-network memory model and a novel memory consolidation scheme. Building upon the work of Hattori (2014), a more biologically realistic short-term memory model is attained, from which information may be consolidated to a long-term memory model. The model and its associated behaviour are analysed, and a novel parametrisation and the resulting memory consolidation mechanism are demonstrated.
This mechanism reduces catastrophic forgetting without employing pseudorehearsal when the dual-network memory model is exposed to five consecutive and distinct, but correlated, sets of training patterns. This demonstrates a potential neural mechanism for reducing catastrophic forgetting, one which may operate in synthesis with, or instead of, pseudorehearsal. The novel memory consolidation scheme is regarded as fairly biologically realistic, as it emerges from several hippocampal aspects that are empirically observed and documented in the literature. Furthermore, the mechanism illuminates several interesting emergent qualities of pattern extraction by chaotic recall in the attained hippocampal model.
dc.language: eng
dc.publisher: NTNU
dc.subject: Datateknologi, Intelligente systemer
dc.title: Short- and Long-term Memory: A Complementary Dual-network Memory Model
dc.type: Master thesis
dc.source.pagenumber: 135

