ムラガキ ヨシヒロ
Muragaki Yoshihiro
村垣 善浩
Affiliation | School of Medicine, Department of Medicine (Tokyo Women's Medical University Hospital)
Position | Visiting Professor
Article type | Original article
Language | French
Peer review | Not peer-reviewed
Title | Classification of Speech Arrests and Speech Impairments during Awake Craniotomy: a multi-databases analysis
Journal name | Official name: Research Square
Publication category | Overseas
Publisher | This work is licensed under a CC BY 4.0 License
Volume/Issue/Pages | Preprint
Authors/co-authors | MAOUDJ Ilias†, KUWANO Atsushi, PANHELEUX Céline, KUBOTA Yuichi, KAWAMATA Takakazu, MURAGAKI Yoshihiro, MASAMUNE Ken, SEIZEUR Romuald, DARDENNE Guillaume, TAMURA Manabu
Publication date | 2024/05/09
Abstract | Purpose: Awake craniotomy presents a unique opportunity to map and preserve critical brain functions, particularly speech, during tumor resection. The ability to accurately assess linguistic functions in real time not only enhances surgical precision but also contributes significantly to improving postoperative outcomes. However, today, its evaluation is subjective as it relies only on a clinician's observations. This paper explores the use of a deep learning-based model for the objective assessment of speech arrest and speech impairments during awake craniotomy. Methods: We extracted 1883 3-second audio clips containing the patient's response following Direct Electrical Stimulation from 23 awake craniotomies recorded in two operating rooms of the Tokyo Women's Medical University Hospital (Japan) and 2 awake craniotomies recorded at the University Hospital of Brest (France). A Wav2Vec2-based model was trained and used to detect speech arrests and speech impairments. Experiments were performed with different dataset settings and preprocessing techniques, and the performance of the model was evaluated using the F1-score. Results: The F1-score was 84.12% when the model was trained and tested on Japanese data only. In a cross-language situation, the F1-score was 74.68% when the model was trained on Japanese data and tested on French data. Conclusion: The results are encouraging even in a cross-language situation, but further evaluation is required. The integration of preprocessing techniques, in particular noise reduction, improved the results significantly.
DOI | 10.21203/rs.3.rs-4359067/v2 |
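The abstract above describes a Wav2Vec2-based classifier applied to 3-second audio clips and evaluated with the F1-score. The snippet below is a minimal illustrative sketch of that kind of pipeline, not the authors' implementation: the checkpoint name, label set, and helper function are assumptions for illustration; the actual architecture, preprocessing, and training procedure are described in the preprint.

```python
# Illustrative sketch only (assumed checkpoint, label set, and file handling);
# classifies a 3-second clip as normal speech, speech impairment, or speech arrest.
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

LABELS = ["normal", "impairment", "arrest"]  # hypothetical label set

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(LABELS)
)  # the classification head is newly initialized and would need fine-tuning

def classify_clip(path: str) -> str:
    """Resample a clip to the 16 kHz expected by Wav2Vec2 and predict a label."""
    waveform, sr = torchaudio.load(path)                    # (channels, samples)
    mono = torchaudio.functional.resample(waveform, sr, 16_000).mean(dim=0)
    inputs = feature_extractor(mono.numpy(), sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits                     # shape (1, num_labels)
    return LABELS[int(logits.argmax(dim=-1))]
```

For evaluation, predictions over a held-out set could be scored with scikit-learn's `f1_score`; the preprint also reports that noise-reduction preprocessing of the clips improved the results.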