
Autonomous sequential surgical skills assessment for the peg transfer task in a laparoscopic box-trainer system with three cameras

Published online by Cambridge University Press: 03 March 2023

Fatemeh Rashidi Fathabadi* (Department of Electrical & Computer Engineering, Western Michigan University, Kalamazoo, MI, USA)
Janos L. Grantner (Department of Electrical & Computer Engineering, Western Michigan University, Kalamazoo, MI, USA)
Saad A. Shebrain (Homer Stryker M.D. School of Medicine, Western Michigan University, Kalamazoo, MI, USA)
Ikhlas Abdel-Qader (Department of Electrical & Computer Engineering, Western Michigan University, Kalamazoo, MI, USA)

*Corresponding author. E-mail: fatemeh.rashidifathabadi@wmich.edu

Abstract

In laparoscopic surgery, surgeons must develop several manual laparoscopic skills, typically using a low-cost box trainer, before carrying out real operative procedures. The Fundamentals of Laparoscopic Surgery (FLS) program was developed to assess the fundamental knowledge and surgical skills required for basic laparoscopic surgery. The peg transfer task is a hands-on exam in the FLS program that helps a trainee learn the minimum grasping force necessary to move the pegs from one place to another without dropping them. In this paper, an autonomous, sequential assessment algorithm based on deep learning, a multi-object detection method, and several sequential If-Then conditional statements has been developed to monitor each step of a surgeon’s performance. Images from three different cameras are used to assess whether the surgeon executes the peg transfer task correctly and to display a notification of any errors on the monitor immediately. This algorithm improves the performance of a laparoscopic box-trainer system using top, side, and front cameras and removes the need for any human monitoring during a peg transfer task. The developed algorithm detects each object and its status during a peg transfer task and notifies the resident of a correct or failed outcome. In addition, the system can correctly determine the peg transfer execution time, as well as the move, carry, and dropped states of each object, from the top-, side-, and front-mounted cameras. Based on the experimental results, the proposed surgical skill assessment system can identify each object with high fidelity, and the train-validation total loss for the single-shot detector (SSD) ResNet50 v1 was about 0.05. The mean average precision (mAP) and Intersection over Union (IoU) of the detection system were 0.741 and 0.75, respectively. This project is a collaborative research effort between the Department of Electrical and Computer Engineering and the Department of Surgery at Western Michigan University.
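The sequential If-Then monitoring described in the abstract can be pictured as a small per-frame state update driven by the detector's output. The sketch below is a minimal illustration under assumed names (PegState, Detection, assess_step, the "triangle" and "grasper_tip" labels, and the 0.5 score threshold are all hypothetical); the paper's actual rule set is more detailed and fuses all three camera views.

```python
# A minimal sketch of a sequential If-Then assessment step applied to
# per-frame detections from one camera. All class, label, and threshold
# names are illustrative assumptions, not the authors' implementation.
from dataclasses import dataclass
from enum import Enum, auto

class PegState(Enum):
    PLACED = auto()   # object resting on a peg
    MOVED = auto()    # grasped and lifted off its peg
    CARRIED = auto()  # in transit between pegs
    DROPPED = auto()  # lost from the grasper outside any peg region

@dataclass
class Detection:
    label: str                              # e.g., "triangle", "grasper_tip"
    box: tuple[float, float, float, float]  # (xmin, ymin, xmax, ymax)
    score: float                            # detector confidence

def overlaps(a, b) -> bool:
    """True if two (xmin, ymin, xmax, ymax) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def assess_step(dets: list[Detection], prev: PegState) -> PegState:
    """One sequential If-Then update for a single tracked object."""
    obj = next((d for d in dets if d.label == "triangle" and d.score > 0.5), None)
    tip = next((d for d in dets if d.label == "grasper_tip" and d.score > 0.5), None)
    if obj is None:
        # Object vanished mid-transfer: flag a drop so the trainee is notified.
        return PegState.DROPPED if prev in (PegState.MOVED, PegState.CARRIED) else prev
    if tip is not None and overlaps(obj.box, tip.box):
        # Grasper holds the object: first contact is a move, then a carry.
        return PegState.MOVED if prev == PegState.PLACED else PegState.CARRIED
    return PegState.PLACED
```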
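For reference, the Intersection over Union figure quoted above (0.75) is the standard box-overlap metric used to score detections against ground-truth boxes; a minimal, self-contained computation:

```python
def iou(box_a, box_b):
    """Intersection over Union of two (xmin, ymin, xmax, ymax) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two unit boxes offset by half their width overlap with IoU = 1/3.
print(iou((0.0, 0.0, 1.0, 1.0), (0.5, 0.0, 1.5, 1.0)))  # 0.333...
```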

Type: Research Article
Copyright: © The Author(s), 2023. Published by Cambridge University Press

