Itabashi Michio (板橋 道朗, イタバシ ミチオ)
Affiliation: School of Medicine, Department of Medicine | Position: Specially Appointed Professor
Article type | Original article |
Language | English |
Peer review | Not peer-reviewed |
Title | Object and anatomical feature recognition in surgical video images based on a convolutional neural network. |
Journal | Full name: International journal of computer assisted radiology and surgery; Abbreviation: Int J Comput Assist Radiol Surg; ISSN: 1861-6429 / 1861-6410 |
Publication region | International |
Volume/Issue/Pages | 16(11), pp. 2045-2054 |
Authors/co-authors | BAMBA Yoshiko†, OGAWA Shimpei, ITABASHI Michio, SHINDO Hironari, KAMEOKA Shingo, OKAMOTO Takahiro, YAMAMOTO Masakazu |
Publication date | 2021/11 |
Abstract | PURPOSE: Artificial intelligence-enabled techniques can process large amounts of surgical data and may be utilized for clinical decision support to recognize or forecast adverse events in an actual intraoperative scenario. To develop an image-guided navigation technology that will help in surgical education, we explored the performance of a convolutional neural network (CNN)-based computer vision system in detecting intraoperative objects. METHODS: The surgical videos used for annotation were recorded during surgeries conducted in the Department of Surgery of Tokyo Women's Medical University from 2019 to 2020. Abdominal endoscopic images were cut out from manually captured surgical videos. An open-source programming framework for CNNs was used to design a model that could recognize and segment objects in real time through IBM Visual Insights. The model was used to detect the GI tract, blood, vessels, uterus, forceps, ports, gauze and clips in the surgical images. RESULTS: The accuracy, precision and recall of the model were 83%, 80% and 92%, respectively. The mean average precision (mAP), the calculated mean of the precision for each object, was 91%. Among surgical tools, the highest recall and precision of 96.3% and 97.9%, respectively, were achieved for forceps. Among the anatomical structures, the highest recall and precision of 92.9% and 91.3%, respectively, were achieved for the GI tract. CONCLUSION: The proposed model could detect objects in operative images with high accuracy, highlighting the possibility of using AI-based object recognition techniques for intraoperative navigation. Real-time object recognition will play a major role in navigation surgery and surgical education. |
DOI | 10.1007/s11548-021-02434-w |
PMID | 34169465 |
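The abstract reports accuracy, precision, recall, and a mAP defined as the mean of per-class precision. A minimal sketch of how those headline metrics are derived from raw detection counts is shown below; the counts are purely illustrative (they are not the paper's confusion data), and the class names are placeholders drawn from the object list in the abstract.

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of predicted detections that are correct."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of ground-truth objects that were detected."""
    return tp / (tp + fn)

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of all decisions (positive and negative) that are correct."""
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical per-class detection counts for two of the annotated classes.
counts = {
    "forceps":  {"tp": 95, "fp": 2, "fn": 4},
    "gi_tract": {"tp": 91, "fp": 9, "fn": 7},
}

per_class_precision = {
    name: precision(c["tp"], c["fp"]) for name, c in counts.items()
}

# The abstract defines mAP as the simple mean of per-class precision.
# Note: standard object-detection mAP instead averages precision over
# recall thresholds per class, which is a stricter computation.
mean_precision = sum(per_class_precision.values()) / len(per_class_precision)
```

This follows the abstract's own (simplified) definition of mAP; results from a detection toolkit such as COCO-style evaluation would differ because of the precision-recall averaging noted in the comment.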