
dc.contributor.author: Mohammed, Ahmed Kedir
dc.contributor.author: Yildirim Yayilgan, Sule
dc.contributor.author: Farup, Ivar
dc.contributor.author: Pedersen, Marius
dc.contributor.author: Hovde, Øistein
dc.date.accessioned: 2019-11-27T08:38:33Z
dc.date.available: 2019-11-27T08:38:33Z
dc.date.created: 2019-08-21T16:18:05Z
dc.date.issued: 2019
dc.identifier.issn: 1605-7422
dc.identifier.uri: http://hdl.handle.net/11250/2630676
dc.description.abstract: Surgical robot technology has revolutionized surgery toward safer laparoscopic procedures and is ideally suited for surgeries requiring minimal invasiveness. Semantic segmentation from robot-assisted surgery videos is an essential task in many computer-assisted robotic surgical systems. Some of the applications include instrument detection, tracking and pose estimation. Usually, the left and right frames from the stereoscopic surgical system are used for semantic segmentation independently of each other. However, this approach is prone to poor segmentation since the stereo frames are not integrated for accurate estimation of the surgical scene. To cope with this problem, we propose a multi-encoder and single-decoder convolutional neural network named StreoScenNet, which exploits the left and right frames of the stereoscopic surgical system. The proposed architecture consists of multiple ResNet encoder blocks and a stacked convolutional decoder network connected with a novel sum-skip connection. The input to the network is a set of left and right frames, and the output is a mask of the segmented regions for the left frame. It is trained end-to-end, and the segmentation is achieved without the need for any pre- or post-processing. We compare the proposed architectures against state-of-the-art fully convolutional networks. We validate our methods using existing benchmark datasets that include robotic instruments as well as anatomical objects and non-robotic surgical instruments. Compared with previous instrument segmentation methods, our approach achieves a significantly improved Dice similarity coefficient. [nb_NO]
dc.language.iso: eng [nb_NO]
dc.publisher: Society of Photo-optical Instrumentation Engineers (SPIE) [nb_NO]
dc.title: StreoScenNet: Surgical stereo robotic scene segmentation [nb_NO]
dc.type: Journal article [nb_NO]
dc.type: Peer reviewed [nb_NO]
dc.description.version: publishedVersion [nb_NO]
dc.source.volume: 1095:109510P1 [nb_NO]
dc.source.journal: Progress in Biomedical Optics and Imaging [nb_NO]
dc.identifier.doi: 10.1117/12.2512518
dc.identifier.cristin: 1717802
dc.relation.project: Norges forskningsråd: 247689 [nb_NO]
dc.description.localcode: © 2019 Society of Photo Optical Instrumentation Engineers. One print or electronic copy may be made for personal use only. Systematic reproduction and distribution, duplication of any material in this paper for a fee or for commercial purposes, or modification of the content of the paper are prohibited. [nb_NO]
cristin.unitcode: 194,63,10,0
cristin.unitcode: 194,63,30,0
cristin.unitname: Institutt for datateknologi og informatikk
cristin.unitname: Institutt for informasjonssikkerhet og kommunikasjonsteknologi
cristin.ispublished: true
cristin.fulltext: preprint
cristin.qualitycode: 1
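
The abstract above describes a multi-encoder, single-decoder segmentation network in which left and right stereo features are fused through sum-skip connections. The following is a minimal PyTorch sketch of that general design, not the authors' implementation: the ConvBlock and UpBlock modules, channel widths and block counts are illustrative assumptions standing in for the ResNet encoders and stacked convolutional decoder described in the paper.

# Minimal sketch of a two-encoder / one-decoder network with summed ("sum-skip")
# skip connections, loosely following the design described in the abstract.
# Layer sizes and block counts are illustrative assumptions, not the authors'
# ResNet-based configuration.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """Two 3x3 convolutions; returns a 2x-downsampled output plus full-resolution skip features."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        feat = self.body(x)
        return self.pool(feat), feat  # downsampled output, skip tensor

class UpBlock(nn.Module):
    """Upsample by 2x, then fuse with the summed stereo skip features by addition."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, 2, stride=2)
        self.conv = nn.Sequential(nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        return self.conv(self.up(x) + skip)  # sum-skip: add instead of concatenate

class StereoSegNet(nn.Module):
    """One encoder per stereo view, a single decoder, output mask for the left frame."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.enc_left = nn.ModuleList([ConvBlock(3, 32), ConvBlock(32, 64), ConvBlock(64, 128)])
        self.enc_right = nn.ModuleList([ConvBlock(3, 32), ConvBlock(32, 64), ConvBlock(64, 128)])
        self.bottleneck = nn.Sequential(nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(inplace=True))
        self.dec = nn.ModuleList([UpBlock(128, 128), UpBlock(128, 64), UpBlock(64, 32)])
        self.head = nn.Conv2d(32, n_classes, 1)  # per-pixel class logits

    def forward(self, left, right):
        skips = []
        xl, xr = left, right
        for block_l, block_r in zip(self.enc_left, self.enc_right):
            xl, skip_l = block_l(xl)
            xr, skip_r = block_r(xr)
            skips.append(skip_l + skip_r)  # fuse the two views at each scale
        x = self.bottleneck(xl + xr)
        for up, skip in zip(self.dec, reversed(skips)):
            x = up(x, skip)
        return self.head(x)  # segmentation logits aligned with the left frame

net = StereoSegNet(n_classes=2)
left = torch.randn(1, 3, 256, 320)   # left stereo frame
right = torch.randn(1, 3, 256, 320)  # right stereo frame
logits = net(left, right)            # shape: (1, 2, 256, 320)

Passing a left and a right frame of shape (1, 3, 256, 320) yields class logits for the left frame of shape (1, n_classes, 256, 320). The sum-skip idea is captured by adding, rather than concatenating, the per-view encoder features before each decoder stage, which keeps the decoder width independent of the number of encoders.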

