Show simple item record

dc.contributor.advisor: Öztürk, Pinar
dc.contributor.author: Åsgård, Fredrik
dc.date.accessioned: 2016-04-12T14:01:12Z
dc.date.available: 2016-04-12T14:01:12Z
dc.date.created: 2015-07-03
dc.date.issued: 2015
dc.identifier: ntnudaim:13811
dc.identifier.uri: http://hdl.handle.net/11250/2385341
dc.description.abstract: An opportunistic agent must not only identify opportunities, but also learn to recognize and exploit them. This is of particular interest in complex environments, where an agent cannot attain a full overview of the situation. Real-world environments are riddled with uncertainty: changes take place everywhere, agents have severely limited observability, and there is no realistic way to evaluate all possible states and actions. We adapt a proven model of temporarily suspending goals (instead of permanently discarding them) should a goal's constraints become invalidated. We propose a conceptual framework that uses reinforcement learning on observations in a partially observable Markov decision process to learn to recognize future opportunities. The learned opportunities are combined with partial-specification planning to enable an agent to achieve its goals.
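
The abstract's idea of applying reinforcement learning to observations rather than hidden states can be illustrated with a minimal sketch. This is not the thesis's implementation; it assumes a simple tabular Q-learning agent (the class name ObservationQLearner and all parameter values are invented for the example) whose value table is keyed on observations, since an agent in a partially observable Markov decision process never sees the true state.

    # Illustrative sketch only, not the method described in the thesis:
    # tabular Q-learning keyed on observations rather than hidden states.
    import random
    from collections import defaultdict

    class ObservationQLearner:
        def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
            self.actions = actions      # available actions
            self.alpha = alpha          # learning rate
            self.gamma = gamma          # discount factor
            self.epsilon = epsilon      # exploration rate
            # Q-values indexed by (observation, action), not by hidden state
            self.q = defaultdict(float)

        def choose_action(self, observation):
            # epsilon-greedy selection based on the current observation
            if random.random() < self.epsilon:
                return random.choice(self.actions)
            return max(self.actions, key=lambda a: self.q[(observation, a)])

        def update(self, obs, action, reward, next_obs):
            # standard Q-learning update, applied to observations
            best_next = max(self.q[(next_obs, a)] for a in self.actions)
            target = reward + self.gamma * best_next
            self.q[(obs, action)] += self.alpha * (target - self.q[(obs, action)])

Under this reading, observations that accumulate high Q-values could be interpreted as cues that an opportunity is present, to be handed to a planner when a suspended goal becomes achievable again.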
dc.language: eng
dc.publisher: NTNU
dc.subject: Computer Science, Intelligent Systems
dc.title: Recognizing and Learning Opportunities in Complex and Dynamic Environments
dc.type: Master thesis
dc.source.pagenumber: 50


Associated file(s)


This item appears in the following collection(s)
