Automated Quantification of Human Osteoclasts Using Object Detection
Kohtala, Sampsa; Nedal, Tonje Marie Vikene; Kriesi, Carlo; Moen, Siv Helen; Ma, Qianli; Ødegaard, Kristin Sirnes; Standal, Therese; Steinert, Martin
Peer reviewed, Journal article
Published version
Permanent link: https://hdl.handle.net/11250/3003824
Published: 2022
Original version: 10.3389/fcell.2022.941542
Abstract
A balanced skeletal remodeling process is paramount to staying healthy. The remodeling process can be studied by analyzing osteoclasts differentiated in vitro from peripheral blood mononuclear cells (PBMCs) or from buffy coats. Osteoclasts are highly specialized, multinucleated cells that break down bone tissue. Identifying and correctly quantifying osteoclasts in culture are usually done by trained personnel using light microscopy, which is time-consuming and susceptible to operator biases. Using machine learning with 307 different well images from seven human PBMC donors containing a total of 94,974 marked osteoclasts, we present an efficient and reliable method to quantify human osteoclasts from microscopic images. An open-source, deep learning-based object detection framework called Darknet (YOLOv4) was used to train and test several models to analyze the applicability and generalizability of the proposed method. The trained model achieved a mean average precision of 85.26% with a correlation coefficient of 0.99 with human annotators on an independent test set and counted on average 2.1% more osteoclasts per culture than the humans. Additionally, the trained models agreed with each other more closely than two independent human annotators did, supporting a more reliable and less biased approach to quantifying osteoclasts while saving time and resources. We invite interested researchers to test their datasets on our models to further strengthen and validate the results.
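As a minimal sketch of the agreement metrics quoted above (not the authors' code), the following compares per-well osteoclast counts from a detector against human counts, computing the Pearson correlation coefficient and the mean relative count difference. The count values are made up for illustration; in practice each model count would be the number of detections YOLOv4 returns for a well image above a confidence threshold.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length count lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mean_relative_diff(model_counts, human_counts):
    """Average relative difference of model counts vs. human counts
    (positive means the model counts more on average)."""
    return sum((m - h) / h for m, h in zip(model_counts, human_counts)) / len(model_counts)

# Hypothetical per-well counts (model detections vs. one human annotator)
human = [310, 275, 402, 198, 350]
model = [317, 280, 410, 202, 356]

r = pearson_r(model, human)
diff = mean_relative_diff(model, human)
print(f"Pearson r = {r:.3f}, mean count difference = {diff:+.1%}")
```

The same two functions could be applied to counts from two human annotators to compare inter-annotator agreement against model-human agreement, as the abstract describes.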