Show simple item record

dc.contributor.author: Madsen, Andreas Solnørdal
dc.contributor.author: Brandsæter, Andreas
dc.contributor.author: Aarset, Magne Vollan
dc.date.accessioned: 2024-01-18T09:03:13Z
dc.date.available: 2024-01-18T09:03:13Z
dc.date.created: 2023-09-05T09:26:02Z
dc.date.issued: 2023
dc.identifier.isbn: 978-1-958651-69-8
dc.identifier.uri: https://hdl.handle.net/11250/3112367
dc.description.abstract: Maritime Autonomous Surface Ships (MASS) are quickly emerging as a game-changing technology in various parts of the world. They can be used for a wide range of applications, including cargo transportation, oceanographic research and military operations. One of the main challenges associated with MASS is the need to build trust and confidence in the systems among end-users. While the use of AI and algorithms can lead to more efficient and effective decision-making, humans are often reluctant to rely on systems that they do not fully understand. The lack of transparency and interpretability makes it very difficult for the human operator to know when an intervention is appropriate. This is why it is crucial that the decision-making process of MASS is transparent and easily interpretable for human operators and supervisors. In the emerging field of eXplainable AI (XAI), various techniques are developed and designed to help explain the predictions and decisions made by the AI system. How useful these techniques are in a real-world MASS operation is, however, currently an open question. This calls for research with a holistic approach that takes into account not only the technical aspects of MASS, but also the human factors that are involved in their operation. To address this challenge, this study employs a simulator-based approach where navigators test a mock-up system in a full mission navigation simulator. Enhanced decision support was presented on an Electronic Chart Display & Information System (ECDIS) together with information on approaching ships as AIS (Automatic Identification System) symbols. The decision support provided by the system was a suggested sailing route with waypoints to either make a manoeuvre to avoid collision, or to maintain course and speed according to the Convention on the International Regulations for Preventing Collisions at Sea (COLREG).
After completing the scenarios, the navigators were asked about the system's trustworthiness and interpretability. Further, we explored the need for transparency and explainability. In addition, the navigators gave suggestions on how to improve the decision support based on the mentioned traits. The findings from the assessment can be used to develop a strategic plan for AI decision transparency. Such a plan would help build trust in MASS systems and improve human-machine collaboration in the maritime industry.
dc.language.iso: eng
dc.publisher: AHFE Open Access
dc.relation.ispartof: Human Factors in Robots, Drones and Unmanned Systems
dc.title: Decision Transparency for enhanced human-machine collaboration for autonomous ships
dc.title.alternative: Decision Transparency for enhanced human-machine collaboration for autonomous ships
dc.type: Chapter
dc.description.version: publishedVersion
dc.source.pagenumber: 76-84
dc.identifier.cristin: 2172345
cristin.ispublished: true
cristin.fulltext: original


Associated file(s)


This item appears in the following collection(s)