
dc.contributor.advisor: Kraemer, Frank Alexander
dc.contributor.advisor: Bach, Kerstin
dc.contributor.advisor: Taylor, Gavin
dc.contributor.author: Murad, Abdulmajid
dc.date.accessioned: 2023-03-08T12:01:17Z
dc.date.available: 2023-03-08T12:01:17Z
dc.date.issued: 2023
dc.identifier.isbn: 978-82-326-6974-5
dc.identifier.issn: 2703-8084
dc.identifier.uri: https://hdl.handle.net/11250/3057037
dc.description.abstract: The goal of many Internet of Things (IoT) sensing applications, such as environmental monitoring, is to support decision-making by providing valuable information about various phenomena. One approach to achieving this goal is to deploy a network of wireless sensors that collect and transmit measurements of the phenomena. However, typical wireless sensors have limited energy, computation, and communication resources, making it infeasible to measure continuously at a high rate. Efficient sensing techniques can address these resource constraints by selecting only the most informative measurements and avoiding wasting resources on redundant or less informative ones. However, the design of such sensing techniques often involves manually fine-tuned heuristics or algorithms that are task-specific and non-adaptive. This approach does not scale to the IoT setting, where a large number of sensors are deployed in dynamic and non-stationary environments. More autonomous approaches are needed to achieve scalable sensing solutions, in which individual sensors autonomously learn strategies suited to their specific environments and capabilities. Therefore, we investigate the use of deep reinforcement learning to develop autonomous sensing solutions. We first address the challenge of computational complexity by building a general-purpose simulator to train the behavior of sensing agents. We demonstrate the feasibility of using deep reinforcement learning to achieve autonomous sensing through three different use cases. We then explore various reward functions, including utility-based, information-based, and value-based rewards, to align with application goals and guide learning toward desired behaviors. We finally show that explicitly representing our understanding of a phenomenon can guide the autonomous sensing agent in selecting informative measurements that maximally inform our prior knowledge of the phenomenon. Overall, the results of this work show that deep reinforcement learning can lead to increased autonomy and, thus, scalability in IoT sensing applications while achieving performance comparable to conventional methods. (en_US)
dc.language.iso: eng (en_US)
dc.publisher: NTNU (en_US)
dc.relation.ispartofseries: Doctoral theses at NTNU;2023:64
dc.relation.haspart: Paper 1: Murad, Abdulmajid Abdullah Yahya; Kraemer, Frank Alexander; Bach, Kerstin; Taylor, Gavin. Autonomous Management of Energy-Harvesting IoT Nodes Using Deep Reinforcement Learning. In: 2019 IEEE 13th International Conference on Self-Adaptive and Self-Organizing Systems (SASO). IEEE 2019, pp. 43-51. https://doi.org/10.1109/SASO.2019.00015 (en_US)
dc.relation.haspart: Paper 2: Murad, Abdulmajid Abdullah Yahya; Kraemer, Frank Alexander; Bach, Kerstin; Taylor, Gavin. IoT Sensor Gym: Training Autonomous IoT Devices with Deep Reinforcement Learning. In: 9th International Conference on the Internet of Things (IoT 2019), October 22-25, 2019, Bilbao, Spain. Association for Computing Machinery (ACM) 2019. https://doi.org/10.1145/3365871.3365911 (en_US)
dc.relation.haspart: Paper 3: Murad, Abdulmajid; Kraemer, Frank Alexander; Bach, Kerstin; Taylor, Gavin. Information-Driven Adaptive Sensing Based on Deep Reinforcement Learning. In: IoT '20: Proceedings of the 10th International Conference on the Internet of Things. Association for Computing Machinery (ACM) 2020. https://doi.org/10.1145/3410992.3411001 (en_US)
dc.relation.haspart: Paper 4: Murad, Abdulmajid; Kraemer, Frank Alexander; Bach, Kerstin; Taylor, Gavin. Probabilistic Deep Learning to Quantify Uncertainty in Air Quality Forecasting. Sensors 2021; Volume 21(23). https://doi.org/10.3390/s21238009 This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). (en_US)
dc.relation.haspart: Paper 5: Murad, Abdulmajid; Kraemer, Frank Alexander; Bach, Kerstin; Taylor, Gavin. Uncertainty-Aware Autonomous Sensing with Deep Reinforcement Learning. (en_US)
dc.relation.haspart: Paper 6: Klemsdal, Even; Herland, Sverre; Murad, Abdulmajid. Learning Task Agnostic Skills with Data-driven Guidance. ICML 2021 Workshop on Unsupervised Reinforcement Learning. (en_US)
dc.title: Uncertainty-Aware Autonomous Sensing with Deep Reinforcement Learning (en_US)
dc.type: Doctoral thesis (en_US)
dc.subject.nsi: VDP::Technology: 500::Information and communication technology: 550 (en_US)
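
Illustrative note: the abstract above describes training sensing agents in a general-purpose simulator and shaping their behavior with utility-, information-, and value-based rewards. The Python sketch below is only a rough, hypothetical illustration of such a gym-style training environment for an energy-harvesting sensor with a utility-based reward; it is not code from the thesis or its IoT Sensor Gym, and the class name, dynamics, and constants are assumptions made up for the example.

    import numpy as np


    class AdaptiveSensingEnv:
        """Gym-style toy environment: an energy-harvesting sensor node chooses
        how many measurements to take per time slot, trading measurement
        utility against the risk of depleting its battery. All dynamics and
        constants are illustrative, not taken from the thesis."""

        def __init__(self, battery_capacity=1.0, horizon=96, seed=0):
            self.rng = np.random.default_rng(seed)
            self.capacity = battery_capacity
            self.horizon = horizon

        def reset(self):
            self.t = 0
            self.battery = 0.5 * self.capacity
            return self._obs()

        def _obs(self):
            # Observation: normalized battery level and hour of day.
            return np.array(
                [self.battery / self.capacity, (self.t % 24) / 24.0],
                dtype=np.float32,
            )

        def step(self, action):
            # action: number of measurements taken in this slot (0..5).
            sensing_cost = 0.01 * action
            harvested = max(0.0, self.rng.normal(0.02, 0.01))  # toy energy intake
            self.battery = float(
                np.clip(self.battery - sensing_cost + harvested, 0.0, self.capacity)
            )
            # Utility-based reward: diminishing returns for extra measurements,
            # large penalty if the node depletes its energy budget.
            reward = float(np.log1p(action)) - (10.0 if self.battery <= 0.0 else 0.0)
            self.t += 1
            done = self.t >= self.horizon
            return self._obs(), reward, done, {}


    if __name__ == "__main__":
        env = AdaptiveSensingEnv()
        obs, total, done = env.reset(), 0.0, False
        while not done:
            action = np.random.randint(0, 6)  # random policy as a stand-in for an RL agent
            obs, reward, done, _ = env.step(action)
            total += reward
        print(f"episode return: {total:.2f}")

Replacing the utility term with an information-gain or downstream decision-value term would, in the same spirit, correspond to the information-based and value-based reward variants mentioned in the abstract.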

