Okamoto Takahiro (岡本 高宏)
Affiliation  Faculty of Medicine, Department of Medicine (Tokyo Women's Medical University Hospital)
Position  Professor / Department Head
Article type  Original article
Language  English
Peer review  Not peer-reviewed
Title  Automated recognition of objects and types of forceps in surgical images using deep learning.
Journal  Full name: Scientific Reports
         Abbreviation: Sci Rep
         ISSN: 2045-2322/2045-2322
Publication category  International
Volume/Issue/Pages  11(1), Article 22571
Authors  BAMBA Yoshiko†, OGAWA Shimpei, ITABASHI Michio, KAMEOKA Shingo, OKAMOTO Takahiro, YAMAMOTO Masakazu
Author role  Last author
Published  2021/11
Abstract  Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
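The abstract reports per-class recall and precision for the detector, along with an overall identification rate of 1039/1062 (97.8%). As a minimal sketch of how such per-class metrics are computed from detection counts (the true-positive/false-positive/false-negative counts below are hypothetical, not the study's data):

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision and recall for one detection class.

    tp: detections matching a ground-truth box of this class
    fp: detections of this class with no matching ground truth
    fn: ground-truth boxes of this class that were missed
    """
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# Hypothetical counts for one forceps class, for illustration only:
p, r = precision_recall(tp=98, fp=2, fn=2)
print(f"precision {p:.1%}, recall {r:.1%}")  # precision 98.0%, recall 98.0%

# The overall rate reported in the abstract:
print(f"{1039 / 1062:.1%}")  # 97.8%
```

Note that precision and recall answer different questions: precision is the fraction of the model's detections that were correct, while recall is the fraction of actual forceps instances the model found.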
DOI 10.1038/s41598-021-01911-1
PMID 34799625