Recognizing and Learning Opportunities in Complex and Dynamic Environments
An opportunistic agent must not only identify opportunities but also learn to recognize and exploit them. This is of particular interest in complex environments, where an agent cannot attain a full overview of the situation: real-world environments are riddled with uncertainty, changes take place everywhere, agents have severely limited observability, and there is no realistic way to evaluate all possible states and actions. We adapt a proven model of temporarily suspending goals (instead of permanently discarding them) when a goal's constraints become invalidated. We propose a conceptual framework that applies reinforcement learning to observations in a partially observable Markov decision process in order to learn to recognize future opportunities. The learned opportunities are combined with partial-specification planning to enable an agent to achieve its goals.
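The two ingredients named above — suspending (rather than discarding) goals whose constraints fail, and learning over observations instead of full states — can be illustrated with a minimal sketch. All class and method names here are hypothetical, and plain tabular Q-learning over observation labels stands in for whatever reinforcement-learning method the framework actually prescribes; this is an illustration of the idea, not the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Goal:
    name: str
    # Predicate over the (partially observed) world state: is the goal
    # currently pursuable, i.e. are its constraints satisfied?
    constraint: Callable[[dict], bool]

class OpportunisticAgent:
    """Hypothetical sketch: goals whose constraints become invalid are
    suspended, not discarded, and reactivated once the environment
    changes back in their favour."""

    def __init__(self, goals: List[Goal], alpha: float = 0.5, gamma: float = 0.9):
        self.active: List[Goal] = list(goals)
        self.suspended: List[Goal] = []
        # Q-values keyed by (observation, action) rather than by state,
        # since under partial observability the true state is unknown.
        self.q: Dict[Tuple[str, str], float] = {}
        self.alpha, self.gamma = alpha, gamma

    def update_goals(self, state: dict) -> None:
        """Move goals between the active and suspended lists as their
        constraints become invalid or valid again."""
        newly_suspended = [g for g in self.active if not g.constraint(state)]
        revived = [g for g in self.suspended if g.constraint(state)]
        self.active = [g for g in self.active if g.constraint(state)] + revived
        self.suspended = [g for g in self.suspended
                          if not g.constraint(state)] + newly_suspended

    def learn(self, obs: str, action: str, reward: float,
              next_obs: str, next_actions: List[str]) -> None:
        """One Q-learning update on observations: over time, observations
        that precede rewarding outcomes (opportunities) score higher."""
        best_next = max((self.q.get((next_obs, a), 0.0) for a in next_actions),
                        default=0.0)
        old = self.q.get((obs, action), 0.0)
        self.q[(obs, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
```

For example, a robot's "recharge" goal can be suspended while the charging dock is occupied and revived as soon as it frees up, while the Q-table gradually learns which observations signal that acting now is worthwhile.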