dc.contributor.author: Murad, Abdulmajid
dc.contributor.author: Kraemer, Frank Alexander
dc.contributor.author: Bach, Kerstin
dc.contributor.author: Taylor, Gavin
dc.date.accessioned: 2020-10-22T10:24:46Z
dc.date.available: 2020-10-22T10:24:46Z
dc.date.created: 2020-10-07T13:12:07Z
dc.date.issued: 2020
dc.identifier.isbn: 9781450387583
dc.identifier.uri: https://hdl.handle.net/11250/2684427
dc.description.abstract: In order to make better use of deep reinforcement learning in the creation of sensing policies for resource-constrained IoT devices, we present and study a novel reward function based on the Fisher information value. This reward function enables IoT sensor devices to learn to spend available energy on measurements at otherwise unpredictable moments, while conserving energy at times when measurements would provide little new information. This is a highly general approach, which allows for a wide range of use cases without significant human design effort or hyperparameter tuning. We illustrate the approach in a scenario of workplace noise monitoring, where results show that the learned behavior outperforms a uniform sampling strategy and comes close to a near-optimal oracle solution.
dc.language.iso: eng
dc.publisher: Association for Computing Machinery (ACM)
dc.relation.ispartof: IoT '20: Proceedings of the 10th International Conference on the Internet of Things
dc.title: Information-Driven Adaptive Sensing Based on Deep Reinforcement Learning
dc.type: Chapter
dc.description.version: acceptedVersion
dc.identifier.doi: 10.1145/3410992.3411001
dc.identifier.cristin: 1837920
dc.description.localcode: © ACM, 2020. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published at 10.1145/3410992.3411001
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1
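The abstract describes rewarding an agent for measurements that carry high Fisher information while penalizing energy use. As a rough illustration only (not the authors' actual formulation from the paper), the sketch below uses the textbook fact that one observation x ~ N(mu, sigma^2) carries Fisher information 1/sigma^2 about mu; the function names, the `energy_cost` term, and the linear trade-off are all assumptions made for this toy example.

```python
def fisher_information_gaussian(sigma: float) -> float:
    """Fisher information of one observation x ~ N(mu, sigma^2) about mu.

    Standard result: I(mu) = 1 / sigma^2, so noisier observations are
    less informative.
    """
    return 1.0 / sigma**2


def sensing_reward(predicted_std: float, energy_cost: float,
                   did_sample: bool, weight: float = 1.0) -> float:
    """Toy reward: information gained by sampling minus energy spent.

    predicted_std is the agent's predictive uncertainty about the next
    measurement; when it is large, sampling is informative and the
    reward is high. Skipping a measurement costs nothing and gains
    nothing. This trade-off is an illustrative assumption, not the
    published reward function.
    """
    if not did_sample:
        return 0.0
    return weight * fisher_information_gaussian(predicted_std) - energy_cost
```

Under such a reward, a policy learns to sample when its predictive uncertainty is high (unpredictable moments) and to conserve energy when a measurement would add little information, which is the behavior the abstract describes.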