Drone deep reinforcement learning: A review
Azar, Ahmad Taher; Koubaa, Anis; Ali Mohamed, Nada; Ibrahim, Habiba A.; Ibrahim, Zahra Fathy; Kazim, Muhammad; Ammar, Adel; Benjdira, Bilel; Khamis, Alaa M.; Hameed, Ibrahim A.; Casalino, Gabriella
Peer reviewed, Journal article
Published version

Permanent link: https://hdl.handle.net/11250/3044017
Publication date: 2021
Collections:
- Institutt for IKT og realfag [672]
- Publikasjoner fra CRIStin - NTNU [40774]
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly being used in many challenging and diversified applications, spanning both civilian and military fields. To name a few: infrastructure inspection, traffic patrolling, remote sensing, mapping, surveillance, rescuing humans and animals, environment monitoring, and Intelligence, Surveillance, Target Acquisition, and Reconnaissance (ISTAR) operations. However, the use of UAVs in these applications requires a substantial level of autonomy. In other words, UAVs should be able to accomplish planned missions in unexpected situations without human intervention. To ensure this level of autonomy, many artificial intelligence algorithms have been designed, targeting the guidance, navigation, and control (GNC) of UAVs. In this paper, we describe the state of the art of one subset of these algorithms: deep reinforcement learning (DRL) techniques. We provide a detailed description of these techniques and identify the current limitations in this area. We note that most of these DRL methods are designed to ensure stable and smooth UAV navigation by training in computer-simulated environments. We conclude that further research efforts are needed to address the challenges that restrict their deployment in real-life scenarios.
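To illustrate what "training in computer-simulated environments" typically looks like in practice, the sketch below shows a minimal policy-gradient (REINFORCE) training loop of the kind commonly used in the DRL literature surveyed here. It is not taken from the paper: the ToyUAVEnv class is a hypothetical stand-in for a real drone simulator (e.g. AirSim or Gazebo), and all names and hyperparameters are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's method): a minimal REINFORCE loop
# for a toy 2-D navigation task standing in for a UAV simulator.
import numpy as np
import torch
import torch.nn as nn

class ToyUAVEnv:
    """Hypothetical stand-in for a drone simulator: reach a fixed goal."""
    def __init__(self):
        self.goal = np.array([1.0, 1.0], dtype=np.float32)
    def reset(self):
        self.pos = np.random.uniform(-1.0, 0.0, size=2).astype(np.float32)
        return self.pos.copy()
    def step(self, action):
        # Actions 0..3 move the vehicle a small step in one of four directions.
        moves = np.array([[0.1, 0], [-0.1, 0], [0, 0.1], [0, -0.1]], dtype=np.float32)
        self.pos += moves[action]
        dist = float(np.linalg.norm(self.goal - self.pos))
        done = dist < 0.15
        reward = 10.0 if done else -dist  # dense shaping toward the goal
        return self.pos.copy(), reward, done

policy = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)
env = ToyUAVEnv()

for episode in range(200):
    obs, log_probs, rewards, done = env.reset(), [], [], False
    for _ in range(50):                       # cap episode length
        logits = policy(torch.from_numpy(obs))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, done = env.step(action.item())
        rewards.append(reward)
        if done:
            break
    # Discounted returns, then the REINFORCE gradient estimate.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    loss = -(torch.stack(log_probs) * torch.tensor(returns)).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the works reviewed, the toy environment above is replaced by a high-fidelity simulator and the simple network by deeper architectures; the sim-to-real gap this creates is one of the deployment challenges the abstract highlights.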