Show simple item record

dc.contributor.author    Auflem, Marius
dc.contributor.author    Kohtala, Sampsa Matias Ilmari
dc.contributor.author    Jung, Malte
dc.contributor.author    Steinert, Ralf Martin
dc.date.accessioned    2022-12-27T12:27:12Z
dc.date.available    2022-12-27T12:27:12Z
dc.date.created    2022-06-14T12:18:10Z
dc.date.issued    2022
dc.identifier.issn    2296-9144
dc.identifier.uri    https://hdl.handle.net/11250/3039524
dc.description.abstract    This paper presents a new approach for evaluating and controlling expressive humanoid robotic faces using open-source computer vision and machine learning methods. Existing research in Human-Robot Interaction lacks flexible and simple tools that are scalable for evaluating and controlling various robotic faces; thus, our goal is to demonstrate the use of readily available AI-based solutions to support the process. We use a newly developed humanoid robot prototype intended for medical training applications as a case example. The approach automatically captures the robot’s facial action units through a webcam during random motion, which are components traditionally used to describe facial muscle movements in humans. Instead of manipulating the actuators individually or training the robot to express specific emotions, we propose using action units as a means for controlling the robotic face, which enables a multitude of ways to generate dynamic motion, expressions, and behavior. The range of action units achieved by the robot is thus analyzed to discover its expressive capabilities and limitations and to develop a control model by correlating action units to actuation parameters. Because the approach is not dependent on specific facial attributes or actuation capabilities, it can be used for different designs and continuously inform the development process. In healthcare training applications, our goal is to establish a prerequisite of expressive capabilities of humanoid robots bounded by industrial and medical design constraints. Furthermore, to mediate human interpretation and thus enable decision-making based on observed cognitive, emotional, and expressive cues, our approach aims to find the minimum viable expressive capabilities of the robot without having to optimize for realism. The results from our case example demonstrate the flexibility and efficiency of the presented AI-based solutions to support the development of humanoid facial robots.    en_US
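The abstract describes driving the robot face with random actuation, measuring the resulting facial action units (AUs) through a webcam, and correlating the two to obtain a control model. Below is a minimal Python sketch of that idea under stated assumptions: estimate_action_units and set_actuators are hypothetical placeholders (the record names no specific AU library or robot API), and an ordinary linear regression stands in for whatever correlation model the authors used.

# Sketch of the AU-based calibration/control loop described in the abstract.
# Assumed, not from the record: estimate_action_units (stand-in for an
# open-source AU estimator) and set_actuators (stand-in for the robot's API).
import numpy as np
import cv2
from sklearn.linear_model import LinearRegression

def estimate_action_units(frame):
    """Hypothetical placeholder: return a vector of AU intensities for one frame."""
    raise NotImplementedError("plug in an open-source AU estimator here")

def set_actuators(params):
    """Hypothetical placeholder: send actuation parameters to the robot face."""
    raise NotImplementedError("plug in the robot's actuation interface here")

def collect_samples(camera_index=0, n_samples=200, n_actuators=8):
    """Drive the face with random motion and record the AUs seen by the webcam."""
    cap = cv2.VideoCapture(camera_index)
    actuation, action_units = [], []
    for _ in range(n_samples):
        params = np.random.uniform(0.0, 1.0, size=n_actuators)  # random actuation
        set_actuators(params)
        ok, frame = cap.read()
        if not ok:
            continue
        actuation.append(params)
        action_units.append(estimate_action_units(frame))
    cap.release()
    return np.asarray(actuation), np.asarray(action_units)

def fit_control_model(actuation, action_units):
    """Correlate AUs with actuation: learn a mapping from target AUs to actuator values."""
    model = LinearRegression()
    model.fit(action_units, actuation)  # inverse model: AU intensities -> actuation
    return model

# Usage: command an expression by its AU profile rather than per-actuator values,
# e.g. params = model.predict(target_aus.reshape(1, -1))[0]; set_actuators(params)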
dc.language.iso    eng    en_US
dc.publisher    Frontiers    en_US
dc.rights    Attribution 4.0 International
dc.rights.uri    http://creativecommons.org/licenses/by/4.0/deed.no
dc.title    Facing the FACS—Using AI to Evaluate and Control Facial Action Units in Humanoid Robot Face Development    en_US
dc.title.alternative    Facing the FACS—Using AI to Evaluate and Control Facial Action Units in Humanoid Robot Face Development    en_US
dc.type    Peer reviewed    en_US
dc.type    Journal article    en_US
dc.description.version    publishedVersion    en_US
dc.source.journal    Frontiers in Robotics and AI    en_US
dc.identifier.doi    10.3389/frobt.2022.887645
dc.identifier.cristin    2031719
dc.relation.project    Norges forskningsråd: 290404    en_US
cristin.ispublished    true
cristin.fulltext    original
cristin.qualitycode    1



Attribution 4.0 International
Except where otherwise noted, this item's license is described as Attribution 4.0 International