dc.description.abstract | A longstanding challenge in artificial intelligence is to create agents that learn, enabling
them to interact with and adapt to a complex and changing world. A better understanding
of the evolution of learning may help produce robust and adaptive agents, as well as shed
light on open questions about the evolution of learning from biology. Evolutionary computation
offers the benefits of precise experimental control, repeatability of experiments,
and rapid generational turnover, enabling experiments that test hypotheses which would be
impossible or extremely time-consuming to address in natural studies.
The evolution of learning is influenced by the balance between the benefits offered by
adaptivity and the costs individuals pay for their learning abilities. Such costs
include forgetting previous knowledge, the dangers of exploration, and the maintenance of
neural structures for learning. This thesis focuses on how evolution regulates learning capacities
to reap the benefits of being adaptive, while minimizing the costs of learning. The regulation
of learning capacities is studied along three main axes: regulation through individual
lifetimes, regulation within a population facing varying environments, and regulation
across neural modules.
The study of learning regulation within individual lifetimes is inspired by the sensitive
periods in learning observed in nature: limited periods within individuals’ lives during
which learning is temporarily facilitated. Experiments herein demonstrate that sensitive periods
can emerge to schedule learning in tasks where there are dependencies between the
learning of sub-tasks, and further explore how the flexibility of evolved sensitive periods
depends on assumptions about which factors regulate plasticity.
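The core idea of a sensitive period, a limited age window in which plasticity is elevated, can be sketched minimally. The function below is an illustrative toy, not the thesis's actual model; `onset` and `offset` stand in for the kind of parameters evolution could tune, and the staggered windows hint at how learning of dependent sub-tasks might be scheduled:

```python
def plasticity(age, onset, offset, peak_rate=1.0, baseline_rate=0.0):
    """Age-gated learning rate: learning is facilitated only inside the
    evolved sensitive period [onset, offset)."""
    return peak_rate if onset <= age < offset else baseline_rate

# Staggered sensitive periods can schedule learning of dependent sub-tasks:
# sub-task A is learned early, sub-task B (which builds on A) later.
rate_a = [plasticity(age, 0, 10) for age in range(30)]
rate_b = [plasticity(age, 10, 20) for age in range(30)]
```

Evolution would act on the window parameters themselves, trading flexibility against the costs of prolonged plasticity.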
At the population level, the evolution of learning effort is known to depend strongly
on the variability of the environment and the reliability of environmental stimuli. Evolving
the innate preferences and learning rates of individuals across a wide range of environmental
variability demonstrates that environments changing too rapidly or too slowly
discourage the evolution of learning. Further experiments show how independently varying
the degrees of environmental stability and stimulus reliability refines this model of
learning, acknowledging that learning may be disruptive or inefficient when stimuli
are unreliable.
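A minimal sketch of this kind of experiment follows; the names, fitness function, and parameter choices are illustrative assumptions, not the thesis's actual setup. Each genome encodes an innate preference and a learning rate, lifetime fitness rewards matching a sign-flipping environment after learning from cues that are only sometimes reliable, and truncation selection with Gaussian mutation drives evolution:

```python
import random

def lifetime_score(innate, learning_rate, env, cue_reliability, steps=20):
    """Score one lifetime: behavior starts at the innate preference and is
    nudged toward environmental cues, which are only sometimes reliable."""
    behavior = innate
    total = 0.0
    for _ in range(steps):
        cue = env if random.random() < cue_reliability else -env
        behavior += learning_rate * (cue - behavior)
        total += 1.0 - abs(env - behavior) / 2.0   # env, behavior in [-1, 1]
    return total / steps

def mean_evolved_learning_rate(env_period, cue_reliability,
                               generations=100, pop_size=40, seed=0):
    """Evolve (innate preference, learning rate) pairs while the environment
    flips sign every env_period generations; return the mean learning rate."""
    random.seed(seed)
    pop = [(random.uniform(-1, 1), random.random()) for _ in range(pop_size)]
    env = 1
    for gen in range(generations):
        if gen % env_period == 0:
            env = -env                             # environment changes
        pop.sort(key=lambda ind: lifetime_score(*ind, env, cue_reliability),
                 reverse=True)
        parents = pop[: pop_size // 2]             # truncation selection
        pop = [(max(-1.0, min(1.0, i + random.gauss(0, 0.1))),
                max(0.0, min(1.0, r + random.gauss(0, 0.05))))
               for i, r in parents for _ in (0, 1)]
    return sum(r for _, r in pop) / pop_size
```

Sweeping `env_period` and `cue_reliability` in a model of this shape is one way to probe how environmental variability and stimulus reliability jointly shape the evolved reliance on learning.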
One cost of learning is the risk of losing old information as new information is gained, a
problem known as catastrophic forgetting. By evolving individuals on a task with potential
for catastrophic forgetting, it is demonstrated how adding an evolutionary cost
on neural connections leads to more modular networks, which forget old skills less when
learning a new skill.
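The effect of a connection cost can be illustrated with a toy hill-climber; this is an assumption-laden sketch, with the fitness function and single-bit-flip search chosen for brevity rather than taken from the thesis. Only within-module edges contribute to performance here, so the cost term prunes cross-module wiring and the surviving network is modular:

```python
import itertools
import random

def evolve_modular_net(n_per_module=4, conn_cost=0.01, steps=3000, seed=0):
    """Hill-climb a binary connection mask under a connection-cost penalty.
    In this toy model only within-module edges contribute to 'performance',
    so the cost term prunes cross-module wiring, yielding a modular network.
    (Illustrative sketch, not the thesis's actual task or algorithm.)"""
    rng = random.Random(seed)
    nodes = range(2 * n_per_module)
    edges = list(itertools.combinations(nodes, 2))
    useful = {e for e in edges
              if (e[0] < n_per_module) == (e[1] < n_per_module)}

    def fitness(mask):
        performance = sum(mask[e] for e in useful) / len(useful)
        return performance - conn_cost * sum(mask.values())

    mask = {e: rng.random() < 0.5 for e in edges}
    for _ in range(steps):
        e = rng.choice(edges)
        before = fitness(mask)
        mask[e] = not mask[e]           # propose flipping one connection
        if fitness(mask) <= before:     # keep only strict improvements
            mask[e] = not mask[e]
    return mask, useful
```

With `conn_cost` smaller than the per-edge performance gain, every useful edge is worth keeping while every cross-module edge is pure cost, so selection alone separates the modules.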
Together, the findings herein demonstrate several ways to handle the so-called stability-plasticity
dilemma: how can an individual have the flexibility to adapt without risking
unstable behaviors and the forgetting of old skills? The findings suggest ways
in which evolution may have solved this problem in natural learners, and ways to harness
the powers of evolution to mitigate this problem in artificial agents. | nb_NO |