Show simple item record

dc.contributor.author: Ellefsen, Kai Olav
dc.contributor.author: Mouret, Jean-Baptiste
dc.contributor.author: Clune, Jeff
dc.date.accessioned: 2019-11-13T10:03:50Z
dc.date.available: 2019-11-13T10:03:50Z
dc.date.created: 2015-06-30T12:46:21Z
dc.date.issued: 2015
dc.identifier.citation: PLoS Computational Biology. 2015, 11 (4).
dc.identifier.issn: 1553-734X
dc.identifier.uri: http://hdl.handle.net/11250/2628105
dc.description.abstract: A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new skill they typically do so by losing previously acquired skills, a problem called catastrophic forgetting. That occurs because, to learn the new task, neural learning algorithms change connections that encode previously acquired skills. How networks are organized critically affects their learning dynamics. In this paper, we test whether catastrophic forgetting can be reduced by evolving modular neural networks. Modularity intuitively should reduce learning interference between tasks by separating functionality into physically distinct modules in which learning can be selectively turned on or off. Modularity can further improve learning by having a reinforcement learning module separate from sensory processing modules, allowing learning to happen only in response to a positive or negative reward. In this paper, learning takes place via neuromodulation, which allows agents to selectively change the rate of learning for each neural connection based on environmental stimuli (e.g., to alter learning in specific locations based on the task at hand). To produce modularity, we evolve neural networks with a cost for neural connections. We show that this connection cost technique causes modularity, confirming a previous result, and that such sparsely connected, modular networks have higher overall performance because they learn new skills faster while retaining old skills more and because they have a separate reinforcement learning module. Our results suggest (1) that encouraging modularity in neural networks may help us overcome the long-standing barrier of networks that cannot learn new skills without forgetting old ones, and (2) that one benefit of the modularity ubiquitous in the brains of natural animals might be to alleviate the problem of catastrophic forgetting.
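The abstract describes learning via neuromodulation: a modulatory signal gates whether, and how strongly, each neural connection changes, so learning can be switched off in modules that store already-acquired skills. The sketch below illustrates that idea with a plain Hebbian update gated by a per-neuron modulatory signal; the function name, the exact update rule, and the weight bounds are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

ETA = 0.1  # base learning rate (illustrative value)

def neuromodulated_update(w, pre, post, m):
    """Hebbian weight update gated by a modulatory signal.

    w    : (n_post, n_pre) weight matrix
    pre  : (n_pre,) presynaptic activations
    post : (n_post,) postsynaptic activations
    m    : (n_post,) modulatory signal per postsynaptic neuron;
           m = 0 freezes learning, and its sign can encode
           positive vs. negative reward.
    """
    # Plain Hebbian term (post x pre outer product), scaled per
    # postsynaptic neuron by the modulatory signal m.
    dw = ETA * m[:, None] * np.outer(post, pre)
    # Keep weights bounded so repeated updates cannot diverge.
    return np.clip(w + dw, -1.0, 1.0)

# With zero modulation the weights are untouched, so a module whose
# modulatory input is silent is protected from interference while
# other modules learn a new task.
w = np.zeros((2, 3))
w_frozen = neuromodulated_update(w, np.array([1.0, 0.0, 1.0]),
                                 np.array([1.0, 1.0]),
                                 np.array([0.0, 0.0]))
```

In this toy setting, `w_frozen` equals `w` because the modulatory signal is zero everywhere; passing a nonzero `m` would apply the Hebbian update only to the gated neurons, which is the separation-of-learning property the abstract attributes to modular, neuromodulated networks.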
dc.language.iso: eng
dc.publisher: Public Library of Science
dc.rights: Attribution 4.0 International (Navngivelse 4.0 Internasjonal)
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/deed.no
dc.title: Neural Modularity Helps Organisms Evolve to Learn New Skills without Forgetting Old Skills
dc.type: Journal article
dc.type: Peer reviewed
dc.description.version: publishedVersion
dc.source.pagenumber: 24
dc.source.volume: 11
dc.source.journal: PLoS Computational Biology
dc.source.issue: 4
dc.identifier.doi: 10.1371/journal.pcbi.1004128
dc.identifier.cristin: 1251657
dc.description.localcode: Copyright © 2015 Ellefsen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
cristin.unitcode: 194,63,10,0
cristin.unitname: Institutt for datateknologi og informatikk
cristin.ispublished: true
cristin.fulltext: original
cristin.qualitycode: 2


