
Files in This Item:
abe.11.37.pdf (2.81 MB, Adobe PDF)
Title: Recognition of Instrument Passing and Group Attention for Understanding Intraoperative State of Surgical Team
Authors: Yokoyama, Koji
Yamamoto, Goshiro (ORCID: https://orcid.org/0000-0002-2014-7195)
Liu, Chang
Sugiyama, Osamu
Santos, Luciano HO
Kuroda, Tomohiro (ORCID: https://orcid.org/0000-0003-1472-7203)
Author's alias: 横山, 晃士
山本, 豪志朗
劉, 暢
杉山, 治
黒田, 知宏
Keywords: group activity
surgical team
intraoperative state
video analysis
pose estimation
Issue Date: 2022
Publisher: Japanese Society for Medical and Biological Engineering
Journal title: Advanced Biomedical Engineering
Volume: 11
Start page: 37
End page: 47
Abstract: Appropriate evaluation of the intraoperative state of a surgical team is essential for improving teamwork and hence maintaining a safe surgical environment. Traditional methods of evaluating intraoperative team states, such as interviews and self-check questionnaires for each surgical team member, often require human effort, are time-consuming, and can be biased by individual recall. One effective solution is to analyze surgical video and track important team activities, such as whether the members are complying with the surgical procedure or are being distracted by unexpected events. However, owing to the complexity of situations in an operating room, identifying team activities without any human effort remains challenging. In this work, we propose a novel approach that automatically recognizes and quantifies intraoperative activities from surgery videos. As a first step, we focus on recognizing two activities that particularly involve multiple individuals: (a) passing of clean-packaged surgical instruments, a representative interaction between surgical technologists such as the circulating nurse and the scrub nurse, and (b) group attention that may be attracted by unexpected events. We record surgical videos as input and apply pose estimation and particle filters to extract each individual's face orientation, body orientation, and arm raise. These results, coupled with individual IDs, are then sent to an estimation model that provides the probability of each target activity. Simultaneously, a person model is generated and bound to each individual, describing all the involved activities along the timeline. We tested our method using videos of simulated activities. The results showed that the system was able to recognize instrument passing and group attention with F1 = 0.95 and F1 = 0.66, respectively. We also implemented a system with an interface that automatically annotates intraoperative activities along the video timeline, and invited feedback from surgical technologists. The results suggest that the quantified and visualized activities can help improve understanding of the intraoperative state of the surgical team.
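
Note: The pipeline described in the abstract (pose estimation, geometric cues such as arm raise and face orientation, and F1-based evaluation) can be illustrated with a minimal Python sketch. This is not the authors' implementation; all function names, keypoint labels, and values below are hypothetical, and the sketch assumes 2D keypoints (nose, ears, shoulders, wrists) have already been extracted by a pose estimator.

# Illustrative sketch only, not the authors' method. Assumes per-person 2D
# keypoints from an upstream pose estimator; shows how simple geometric
# cues like "arm raise" and "face orientation" could be derived, and how
# the F1 metric reported in the abstract is computed.
import math
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates; y grows downward

def arm_raised(kp: Dict[str, Point]) -> bool:
    # A wrist above the corresponding shoulder counts as a raised arm.
    return (kp["right_wrist"][1] < kp["right_shoulder"][1]
            or kp["left_wrist"][1] < kp["left_shoulder"][1])

def face_orientation(kp: Dict[str, Point]) -> float:
    # Approximate in-plane face orientation (degrees) as the angle from the
    # ear midpoint to the nose.
    mid_x = (kp["left_ear"][0] + kp["right_ear"][0]) / 2
    mid_y = (kp["left_ear"][1] + kp["right_ear"][1]) / 2
    return math.degrees(math.atan2(kp["nose"][1] - mid_y,
                                   kp["nose"][0] - mid_x))

def f1_score(tp: int, fp: int, fn: int) -> float:
    # F1 = 2PR / (P + R), the evaluation metric used in the abstract.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

if __name__ == "__main__":
    keypoints = {  # hypothetical keypoints for one person in one frame
        "nose": (105.0, 48.0),
        "left_ear": (95.0, 50.0), "right_ear": (112.0, 50.0),
        "left_shoulder": (90.0, 80.0), "right_shoulder": (120.0, 80.0),
        "left_wrist": (85.0, 60.0), "right_wrist": (125.0, 95.0),
    }
    print("arm raised:", arm_raised(keypoints))
    print("face orientation (deg):", round(face_orientation(keypoints), 1))
    # Hypothetical counts: 19 true positives, 1 false positive, 1 false
    # negative give F1 = 0.95, the value reported for instrument passing.
    print("example F1:", round(f1_score(tp=19, fp=1, fn=1), 2))

In the paper's setting, such per-person cues would be tracked over time (the abstract mentions particle filters) and fed to an estimation model that outputs the probability of each target activity; the sketch above covers only the per-frame feature extraction and the evaluation arithmetic.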
Rights: Copyright: ©2022 The Author(s).
This is an open access article distributed under the terms of the Creative Commons BY 4.0 International (Attribution) License, which permits the unrestricted distribution, reproduction and use of the article provided the original source and authors are credited.
URI: http://hdl.handle.net/2433/278465
DOI (Published Version): 10.14326/abe.11.37
Appears in Collections: Journal Articles
