Show simple item record

dc.contributor.advisor: Langseth, Helge
dc.contributor.advisor: Castejon, Humberto
dc.contributor.advisor: Chandra, Arjun
dc.contributor.author: Clemente, Alfredo Vicente
dc.date.accessioned: 2017-11-20T15:00:51Z
dc.date.available: 2017-11-20T15:00:51Z
dc.date.created: 2017-08-19
dc.date.issued: 2017
dc.identifier: ntnudaim:17906
dc.identifier.uri: http://hdl.handle.net/11250/2467208
dc.description.abstract: This thesis explores the emerging field of deep reinforcement learning (Deep RL), which combines well-known reinforcement learning algorithms with newly developed deep learning methods. With Deep RL it is possible to train agents that perform well in their environment without the need for prior knowledge. Deep RL agents learn solely from the low-level percepts, such as vision and sound, that they observe while interacting with the environment. Combining deep learning and reinforcement learning is not an easy task, and many different methods have been proposed. In this thesis I explore a novel method for combining these two techniques that matches the performance of a state-of-the-art deep reinforcement learning algorithm in the Atari domain on the game of Pong, while requiring fewer samples.
dc.language: eng
dc.publisher: NTNU
dc.subject: Datateknologi (2-årig), Kunstig intelligens
dc.title: Decoupling deep learning and reinforcement learning for stable and efficient deep policy gradient algorithms
dc.type: Master thesis
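
For orientation, the title and abstract refer to deep policy gradient algorithms. The sketch below is a minimal, generic REINFORCE policy-gradient training loop, assuming PyTorch and Gymnasium with the CartPole environment rather than Atari Pong; it illustrates the class of algorithms the record describes, not the decoupled method proposed in the thesis itself.

```python
# Minimal REINFORCE (vanilla policy gradient) sketch.
# Illustrative only: environment, network size, and hyperparameters are assumptions,
# not the setup used in the thesis.
import torch
import torch.nn as nn
from torch.distributions import Categorical
import gymnasium as gym

env = gym.make("CartPole-v1")
obs_dim = env.observation_space.shape[0]
n_actions = env.action_space.n

# Small policy network mapping observations to action logits.
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
gamma = 0.99

for episode in range(200):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        dist = Categorical(logits=policy(torch.as_tensor(obs, dtype=torch.float32)))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted return G_t for each step, then the REINFORCE loss -sum(log pi(a_t|s_t) * G_t).
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # normalize as a crude baseline

    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```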


Associated file(s)


This item appears in the following collection(s)
