Show simple item record

dc.contributor.author: Gómez-Carmona, Oihane
dc.contributor.author: Casado Mansilla, Diego
dc.contributor.author: Kraemer, Frank Alexander
dc.contributor.author: Lopez-de-Ipina, Diego
dc.contributor.author: Garcia-Zubia, Javier
dc.date.accessioned: 2021-03-01T08:29:47Z
dc.date.available: 2021-03-01T08:29:47Z
dc.date.created: 2020-06-10T22:11:50Z
dc.date.issued: 2020
dc.identifier.citation: Future Generation Computer Systems. 2020, 112, 670-683. [en_US]
dc.identifier.issn: 0167-739X
dc.identifier.uri: https://hdl.handle.net/11250/2730784
dc.description.abstract: In response to users’ demand for privacy, trust and control over their data, executing machine learning tasks at the edge of the system has the potential to make Internet of Things (IoT) applications and services more human-centric. This implies moving complex computation to a local stage, where edge devices must balance the computational cost of the machine learning techniques against the available resources. Thus, in this paper, we analyze all the factors affecting the classification process and empirically evaluate their impact in terms of performance and cost. We focus on Human Activity Recognition (HAR) systems, which represent a standard type of classification problem in human-centered IoT applications. We present a holistic optimization approach through input data reduction and feature engineering that aims to enhance all the stages of the classification pipeline and to integrate both inference and training at the edge. The results of the conducted evaluation show that there is a highly non-linear trade-off between the computational cost, in terms of processing time, and the achieved classification accuracy. In the presented case study, the computational effort can be reduced by 80% while accepting a decline in classification accuracy of only 3%. The potential impact of the optimization strategy highlights the importance of understanding the initial data and studying the most relevant characteristics of the signal to meet the cost–accuracy requirements. This would contribute to bringing embedded machine learning to the edge and, hence, creating spaces where human and machine intelligence could collaborate. [en_US]
dc.language.iso: eng [en_US]
dc.publisher: Elsevier [en_US]
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-nc-nd/4.0/deed.no
dc.title: Exploring the Computational Cost of Machine Learning at the Edge for Human-Centric Internet of Things [en_US]
dc.type: Peer reviewed [en_US]
dc.type: Journal article [en_US]
dc.description.version: acceptedVersion [en_US]
dc.source.pagenumber: 670-683 [en_US]
dc.source.volume: 112 [en_US]
dc.source.journal: Future Generation Computer Systems [en_US]
dc.identifier.doi: 10.1016/j.future.2020.06.013
dc.identifier.cristin: 1814935
dc.description.localcode: "© 2020. This is the authors’ accepted and refereed manuscript of the article. Locked until 15.6.2022 due to copyright restrictions. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/" [en_US]
cristin.ispublished: true
cristin.fulltext: postprint
cristin.qualitycode: 1


Associated file(s)


This item appears in the following Collection(s)


Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International