Application of artificial intelligence in surgery
Xiao-Yun Zhou1, Yao Guo1, Mali Shen1, Guang-Zhong Yang2
1. The Hamlyn Centre for Robotic Surgery, Imperial College London, London SW7 2AZ, UK
2. Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai 200240, China
Abstract
Artificial intelligence (AI) is gradually changing the practice of surgery through technological advancements in imaging, navigation, and robotic intervention. In this article, we review recent successful and influential applications of AI in surgery, from preoperative planning and intraoperative guidance to its integration into surgical robots. We conclude by summarizing the current state, emerging trends, and major challenges in the future development of AI in surgery.
Keywords
artificial intelligence
surgical autonomy
medical robotics
deep learning
Corresponding Author(s):
Xiao-Yun Zhou
Just Accepted Date: 11 June 2020
Online First Date: 24 July 2020
Issue Date: 26 August 2020
| 1 |
V Vitiello, SL Lee, TP Cundy, GZ Yang. Emerging robotic platforms for minimally invasive surgery. IEEE Rev Biomed Eng 2013; 6: 111–126
https://doi.org/10.1109/RBME.2012.2236311
pmid: 23288354
|
| 2 |
J Troccaz, G Dagnino, GZ Yang. Frontiers of medical robotics: from concept to systems to clinical translation. Annu Rev Biomed Eng 2019; 21(1): 193–218
https://doi.org/10.1146/annurev-bioeng-060418-052502
pmid: 30822100
|
| 3 |
GZ Yang. Body Sensor Networks. New York: Springer, 2014
|
| 4 |
GZ Yang. Implantable Sensors and Systems: from Theory to Practice. New York: Springer, 2018
|
| 5 |
E Shortliffe. Computer-Based Medical Consultations: MYCIN. Amsterdam: Elsevier, 2012. Vol. 2
|
| 6 |
A Krizhevsky, I Sutskever, GE Hinton. ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (NIPS). Lake Tahoe. 2012: 1097–1105
|
| 7 |
G Litjens, T Kooi, BE Bejnordi, AAA Setio, F Ciompi, M Ghafoorian, JAWM van der Laak, B van Ginneken, CI Sánchez. A survey on deep learning in medical image analysis. Med Image Anal 2017; 42: 60–88
https://doi.org/10.1016/j.media.2017.07.005
pmid: 28778026
|
| 8 |
P Khosravi, E Kazemi, M Imielinski, O Elemento, I Hajirasouliha. Deep convolutional neural networks enable discrimination of heterogeneous digital pathology images. EBioMedicine 2018; 27: 317–328
https://doi.org/10.1016/j.ebiom.2017.12.026
pmid: 29292031
|
| 9 |
S Chilamkurthy, R Ghosh, S Tanamala, M Biviji, NG Campeau, VK Venugopal, V Mahajan, P Rao, P Warier. Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Lancet 2018; 392(10162): 2388–2396
https://doi.org/10.1016/S0140-6736(18)31645-3
pmid: 30318264
|
| 10 |
A Meyer, D Zverinski, B Pfahringer, J Kempfert, T Kuehne, SH Sündermann, C Stamm, T Hofmann, V Falk, C Eickhoff. Machine learning for real-time prediction of complications in critical care: a retrospective study. Lancet Respir Med 2018; 6(12): 905–914
https://doi.org/10.1016/S2213-2600(18)30300-X
pmid: 30274956
|
| 11 |
X Li, S Zhang, Q Zhang, X Wei, Y Pan, J Zhao, X Xin, C Qin, X Wang, J Li, F Yang, Y Zhao, M Yang, Q Wang, Z Zheng, X Zheng, X Yang, CT Whitlow, MN Gurcan, L Zhang, X Wang, BC Pasche, M Gao, W Zhang, K Chen. Diagnosis of thyroid cancer using deep convolutional neural network models applied to sonographic images: a retrospective, multicohort, diagnostic study. Lancet Oncol 2019; 20(2): 193–201
https://doi.org/10.1016/S1470-2045(18)30762-9
pmid: 30583848
|
| 12 |
E Rubinstein, M Salhov, M Nidam-Leshem, V White, S Golan, J Baniel, H Bernstine, D Groshar, A Averbuch. Unsupervised tumor detection in dynamic PET/CT imaging of the prostate. Med Image Anal 2019; 55: 27–40
https://doi.org/10.1016/j.media.2019.04.001
pmid: 31005029
|
| 13 |
M Winkels, TS Cohen. Pulmonary nodule detection in CT scans with equivariant CNNs. Med Image Anal 2019; 55: 15–26
https://doi.org/10.1016/j.media.2019.03.010
pmid: 31003034
|
| 14 |
G Maicas, G Carneiro, AP Bradley, JC Nascimento, I Reid. Deep reinforcement learning for active breast lesion detection from DCE-MRI. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2017: 665–673
|
| 15 |
H Lee, S Yune, M Mansouri, M Kim, SH Tajmir, CE Guerrier, SA Ebert, SR Pomerantz, JM Romero, S Kamalian, RG Gonzalez, MH Lev, S Do. An explainable deep-learning algorithm for the detection of acute intracranial haemorrhage from small datasets. Nat Biomed Eng 2019; 3(3): 173–182
https://doi.org/10.1038/s41551-018-0324-9
pmid: 30948806
|
| 16 |
K Kamnitsas, C Ledig, VFJ Newcombe, JP Simpson, AD Kane, DK Menon, D Rueckert, B Glocker. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal 2017; 36: 61–78
https://doi.org/10.1016/j.media.2016.10.004
pmid: 27865153
|
| 17 |
J Long, E Shelhamer, T Darrell. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Boston. 2015: 3431–3440
|
| 18 |
O Ronneberger, P Fischer, T Brox. U-Net: convolutional networks for biomedical image segmentation. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2015: 234–241
|
| 19 |
Ö Çiçek, A Abdulkadir, SS Lienkamp, T Brox, O Ronneberger. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2016: 424–432
|
| 20 |
XY Zhou, GZ Yang. Normalization in training U-Net for 2D biomedical semantic segmentation. IEEE Robot Autom Lett 2019; 4(2): 1792–1799
https://doi.org/10.1109/LRA.2019.2896518
|
| 21 |
E Gibson, F Giganti, Y Hu, E Bonmati, S Bandula, K Gurusamy, B Davidson, SP Pereira, MJ Clarkson, DC Barratt. Automatic multi-organ segmentation on abdominal CT with dense V-networks. IEEE Trans Med Imaging 2018; 37(8): 1822–1834
https://doi.org/10.1109/TMI.2018.2806309
pmid: 29994628
|
| 22 |
G Wang, W Li, MA Zuluaga, R Pratt, PA Patel, M Aertsen, T Doel, AL David, J Deprest, S Ourselin, T Vercauteren. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans Med Imaging 2018; 37(7): 1562–1573
https://doi.org/10.1109/TMI.2018.2791721
pmid: 29969407
|
| 23 |
I Laina, N Rieke, C Rupprecht, JP Vizcaíno, A Eslami, F Tombari, N Navab. Concurrent segmentation and localization for tracking of surgical instruments. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2017: 664–672
|
| 24 |
X Feng, J Yang, AF Laine, ED Angelini. Discriminative localization in CNNs for weakly-supervised segmentation of pulmonary nodules. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2017: 568–576
|
| 25 |
W Bai, C Chen, G Tarroni, J Duan, F Guitton, SE Petersen, Y Guo, PM Matthews, D Rueckert. Self-supervised learning for cardiac MR image segmentation by anatomical position prediction. In: International Conference on Medical Image Computing and Computer Assisted Intervention. New York: Springer, 2019: 541–549
|
| 26 |
G Balakrishnan, A Zhao, MR Sabuncu, J Guttag, AV Dalca. VoxelMorph: a learning framework for deformable medical image registration. IEEE Trans Med Imaging 2019; 38(8): 1788–1800
pmid: 30716034
|
| 27 |
Z Shen, X Han, Z Xu, M Niethammer. Networks for joint affine and non-parametric image registration. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach. 2019: 4224–4233
|
| 28 |
Y Hu, M Modat, E Gibson, W Li, N Ghavami, E Bonmati, G Wang, S Bandula, CM Moore, M Emberton, S Ourselin, JA Noble, DC Barratt, T Vercauteren. Weakly-supervised convolutional neural networks for multimodal image registration. Med Image Anal 2018; 49: 1–13
https://doi.org/10.1016/j.media.2018.07.002
pmid: 30007253
|
| 29 |
S Miao, S Piat, P Fischer, A Tuysuzoglu, P Mewes, T Mansi, R Liao. Dilated FCN for multi-agent 2D/3D medical image registration. In: Proceedings of AAAI Conference on Artificial Intelligence. New Orleans. 2018
|
| 30 |
H Sokooti, B de Vos, F Berendsen, BP Lelieveldt, I Išgum, M Staring. Nonrigid image registration using multi-scale 3D convolutional neural networks. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2017: 232–239
|
| 31 |
R Liao, S Miao, P de Tournemire, S Grbic, A Kamen, T Mansi, D Comaniciu. An artificial agent for robust image registration. In: Proceedings of AAAI Conference on Artificial Intelligence. San Francisco. 2017
|
| 32 |
D Cool, D Downey, J Izawa, J Chin, A Fenster. 3D prostate model formation from non-parallel 2D ultrasound biopsy images. Med Image Anal 2006; 10(6): 875–887
https://doi.org/10.1016/j.media.2006.09.001
pmid: 17097333
|
| 33 |
X Zhou, G Yang, C Riga, S Lee. Stent graft shape instantiation for fenestrated endovascular aortic repair. In: The Hamlyn Symposium on Medical Robotics. London. 2017
|
| 34 |
XY Zhou, J Lin, C Riga, GZ Yang, SL Lee. Real-time 3D shape instantiation from single fluoroscopy projection for fenestrated stent graft deployment. IEEE Robot Autom Lett 2018; 3(2): 1314–1321
https://doi.org/10.1109/LRA.2018.2798286
|
| 35 |
JQ Zheng, XY Zhou, C Riga, GZ Yang. Real-time 3D shape instantiation for partially deployed stent segments from a single 2D fluoroscopic image in fenestrated endovascular aortic repair. IEEE Robot Autom Lett 2019; 4(4): 3703–3710
https://doi.org/10.1109/LRA.2019.2928213
|
| 36 |
XY Zhou, C Riga, SL Lee, GZ Yang. Towards automatic 3D shape instantiation for deployed stent grafts: 2D multiple-class and class-imbalance marker segmentation with equally-weighted focal U-Net. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid: IEEE, 2018: 1261–1267
|
| 37 |
JQ Zheng, XY Zhou, C Riga, GZ Yang. Towards 3D path planning from a single 2D fluoroscopic image for robot assisted fenestrated endovascular aortic repair. In: 2019 International Conference on Robotics and Automation (ICRA). Montreal: IEEE, 2019: 8747–8753
|
| 38 |
SL Lee, A Chung, M Lerotic, MA Hawkins, D Tait, GZ Yang. Dynamic shape instantiation for intra-operative guidance. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2010: 69–76
|
| 39 |
XY Zhou, GZ Yang, SL Lee. A real-time and registration-free framework for dynamic shape instantiation. Med Image Anal 2018; 44: 86–97
https://doi.org/10.1016/j.media.2017.11.009
pmid: 29197705
|
| 40 |
XY Zhou, ZY Wang, P Li, JQ Zheng, GZ Yang. One stage shape instantiation from a single 2D image to 3D point cloud. In: International Conference on Medical Image Computing and Computer Assisted Intervention. New York: Springer, 2019: 30–38
|
| 41 |
F Mahmood, NJ Durr. Deep learning and conditional random fields-based depth estimation and topographical reconstruction from conventional endoscopy. Med Image Anal 2018; 48: 230–243
https://doi.org/10.1016/j.media.2018.06.005
pmid: 29990688
|
| 42 |
F Mahmood, R Chen, NJ Durr. Unsupervised reverse domain adaptation for synthetic medical images via adversarial training. IEEE Trans Med Imaging 2018; 37(12): 2572–2581
https://doi.org/10.1109/TMI.2018.2842767
pmid: 29993538
|
| 43 |
M Turan, EP Ornek, N Ibrahimli, C Giracoglu, Y Almalioglu, MF Yanik, M Sitti. Unsupervised odometry and depth learning for endoscopic capsule robots. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid: IEEE, 2018: 1801–1807
|
| 44 |
M Shen, Y Gu, N Liu, GZ Yang. Context-aware depth and pose estimation for bronchoscopic navigation. IEEE Robot Autom Lett 2019; 4(2): 732–739
https://doi.org/10.1109/LRA.2019.2893419
|
| 45 |
T Zhou, M Brown, N Snavely, DG Lowe. Unsupervised learning of depth and ego-motion from video. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Hawaii. 2017
|
| 46 |
H Zhan, R Garg, C Saroj Weerasekera, K Li, H Agarwal, I Reid. Unsupervised learning of monocular depth estimation and visual odometry with deep feature reconstruction. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Salt Lake City. 2018
|
| 47 |
M Ye, E Johns, A Handa, L Zhang, P Pratt, GZ Yang. Self-supervised siamese learning on stereo image pairs for depth estimation in robotic surgery. In: The Hamlyn Symposium on Medical Robotics. London. 2017: 27
|
| 48 |
JY Zhu, T Park, P Isola, AA Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV). Venice. 2017: 2223–2232
|
| 49 |
M Turan, Y Almalioglu, H Araujo, E Konukoglu, M Sitti. Deep EndoVO: a recurrent convolutional neural network (RCNN) based visual odometry approach for endoscopic capsule robots. Neurocomputing 2018; 275: 1861–1870
https://doi.org/10.1016/j.neucom.2017.10.014
|
| 50 |
J Sganga, D Eng, C Graetzel, D Camarillo. OffsetNet: deep learning for localization in the lung using rendered images. In: 2019 International Conference on Robotics and Automation (ICRA). Montreal: IEEE, 2019: 5046–5052
|
| 51 |
P Mountney, D Stoyanov, A Davison, GZ Yang. Simultaneous stereoscope localization and soft-tissue mapping for minimal invasive surgery. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2006: 347–354
|
| 52 |
AJ Davison, ID Reid, ND Molton, O Stasse. MonoSLAM: real-time single camera SLAM. IEEE Trans Pattern Anal Mach Intell 2007; 29(6): 1052–1067
https://doi.org/10.1109/TPAMI.2007.1049
pmid: 17431302
|
| 53 |
P Mountney, GZ Yang. Motion compensated SLAM for image guided surgery. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2010: 496–504
|
| 54 |
OG Grasa, E Bernal, S Casado, I Gil, JM Montiel. Visual SLAM for handheld monocular endoscope. IEEE Trans Med Imaging 2014; 33(1): 135–146
https://doi.org/10.1109/TMI.2013.2282997
pmid: 24107925
|
| 55 |
M Turan, Y Almalioglu, H Araujo, E Konukoglu, M Sitti. A non-rigid map fusion-based direct SLAM method for endoscopic capsule robots. Int J Intell Robot Appl 2017; 1(4): 399–409
https://doi.org/10.1007/s41315-017-0036-4
pmid: 29250588
|
| 56 |
J Song, J Wang, L Zhao, S Huang, G Dissanayake. MIS-SLAM: real-time large-scale dense deformable SLAM system in minimal invasive surgery based on heterogeneous computing. IEEE Robot Autom Lett 2018; 3(4): 4068–4075
https://doi.org/10.1109/LRA.2018.2856519
|
| 57 |
XY Zhou, S Ernst, SL Lee. Path planning for robot-enhanced cardiac radiofrequency catheter ablation. In: 2016 IEEE International Conference on Robotics and Automation (ICRA). Stockholm: IEEE, 2016: 4172–4177
|
| 58 |
C Shi, S Giannarou, SL Lee, GZ Yang. Simultaneous catheter and environment modeling for trans-catheter aortic valve implantation. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Chicago: IEEE, 2014: 2024–2029
|
| 59 |
L Zhao, S Giannarou, SL Lee, GZ Yang. SCEM+: real-time robust simultaneous catheter and environment modeling for endovascular navigation. IEEE Robot Autom Lett 2016; 1(2): 961–968
https://doi.org/10.1109/LRA.2016.2524984
|
| 60 |
L Zhao, S Giannarou, SL Lee, GZ Yang. Registration-free simultaneous catheter and environment modelling. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2016: 525–533
|
| 61 |
P Mountney, GZ Yang. Soft tissue tracking for minimally invasive surgery: learning local deformation online. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2008: 364–372
|
| 62 |
M Ye, S Giannarou, A Meining, GZ Yang. Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations. Med Image Anal 2016; 30: 144–157
https://doi.org/10.1016/j.media.2015.10.003
pmid: 26970592
|
| 63 |
R Wang, M Zhang, X Meng, Z Geng, FY Wang. 3D tracking for augmented reality using combined region and dense cues in endoscopic surgery. IEEE J Biomed Health Inform 2018; 22(5): 1540–1551
https://doi.org/10.1109/JBHI.2017.2770214
pmid: 29990163
|
| 64 |
S Bernhardt, SA Nicolau, L Soler, C Doignon. The status of augmented reality in laparoscopic surgery as of 2016. Med Image Anal 2017; 37: 66–90
https://doi.org/10.1016/j.media.2017.01.007
pmid: 28160692
|
| 65 |
J Wang, H Suenaga, K Hoshi, L Yang, E Kobayashi, I Sakuma, H Liao. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery. IEEE Trans Biomed Eng 2014; 61(4): 1295–1304
https://doi.org/10.1109/TBME.2014.2301191
pmid: 24658253
|
| 66 |
P Pratt, M Ives, G Lawton, J Simmons, N Radev, L Spyropoulou, D Amiras. Through the HoloLens™ looking glass: augmented reality for extremity reconstruction surgery using 3D vascular models with perforating vessels. Eur Radiol Exp 2018; 2(1): 2
pmid: 29708204
|
| 67 |
X Zhang, J Wang, T Wang, X Ji, Y Shen, Z Sun, X Zhang. A markerless automatic deformable registration framework for augmented reality navigation of laparoscopy partial nephrectomy. Int J CARS 2019; 14(8): 1285–1294
https://doi.org/10.1007/s11548-019-01974-6
pmid: 31016562
|
| 68 |
EJ Topol. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25(1): 44–56
https://doi.org/10.1038/s41591-018-0300-7
pmid: 30617339
|
| 69 |
R Mirnezami, A Ahmed. Surgery 3.0, artificial intelligence and the next-generation surgeon. Br J Surg 2018; 105(5): 463–465
https://doi.org/10.1002/bjs.10860
pmid: 29603133
|
| 70 |
D Bouget, R Benenson, M Omran, L Riffaud, B Schiele, P Jannin. Detecting surgical tools by modelling local appearance and global shape. IEEE Trans Med Imaging 2015; 34(12): 2603–2617
https://doi.org/10.1109/TMI.2015.2450831
pmid: 26625340
|
| 71 |
AA Shvets, A Rakhlin, AA Kalinin, VI Iglovikov. Automatic instrument segmentation in robot-assisted surgery using deep learning. In: Proceedings of IEEE International Conference on Machine Learning and Applications (ICMLA). Stockholm: IEEE, 2018: 624–628
|
| 72 |
M Islam, DA Atputharuban, R Ramesh, H Ren. Real-time instrument segmentation in robotic surgery using auxiliary supervised deep adversarial learning. IEEE Robot Autom Lett 2019; 4(2): 2188–2195
https://doi.org/10.1109/LRA.2019.2900854
|
| 73 |
R Sznitman, R Richa, RH Taylor, B Jedynak, GD Hager. Unified detection and tracking of instruments during retinal microsurgery. IEEE Trans Pattern Anal Mach Intell 2013; 35(5): 1263–1273
https://doi.org/10.1109/TPAMI.2012.209
pmid: 23520263
|
| 74 |
L Zhang, M Ye, PL Chan, GZ Yang. Real-time surgical tool tracking and pose estimation using a hybrid cylindrical marker. Int J CARS 2017; 12(6): 921–930
https://doi.org/10.1007/s11548-017-1558-9
pmid: 28342105
|
| 75 |
M Ye, L Zhang, S Giannarou, GZ Yang. Real-time 3D tracking of articulated tools for robotic surgery. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2016: 386–394
|
| 76 |
Z Zhao, S Voros, Y Weng, F Chang, R Li. Tracking-by-detection of surgical instruments in minimally invasive surgery via the convolutional neural network deep learning-based method. Comput Assist Surg (Abingdon) 2017; 22(sup1): 26–35
https://doi.org/10.1080/24699322.2017.1378777
|
| 77 |
CI Nwoye, D Mutter, J Marescaux, N Padoy. Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos. Int J CARS 2019; 14(6): 1059–1067
https://doi.org/10.1007/s11548-019-01958-6
pmid: 30968356
|
| 78 |
D Sarikaya, JJ Corso, KA Guru. Detection and localization of robotic tools in robot-assisted surgery videos using deep neural networks for region proposal and detection. IEEE Trans Med Imaging 2017; 36(7): 1542–1549
https://doi.org/10.1109/TMI.2017.2665671
pmid: 28186883
|
| 79 |
T Kurmann, PM Neila, X Du, P Fua, D Stoyanov, S Wolf, R Sznitman. Simultaneous recognition and pose estimation of instruments in minimally invasive surgery. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2017: 505–513
|
| 80 |
N Padoy, GD Hager. 3D thread tracking for robotic assistance in tele-surgery. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). San Francisco: IEEE, 2011: 2102–2107
|
| 81 |
Y Hu, Y Gu, J Yang, GZ Yang. Multi-stage suture detection for robot assisted anastomosis based on deep learning. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Brisbane: IEEE, 2018: 1–8
|
| 82 |
Y Gu, Y Hu, L Zhang, J Yang, GZ Yang. Cross-scene suture thread parsing for robot assisted anastomosis based on joint feature learning. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid: IEEE, 2018: 769–776
|
| 83 |
AI Aviles, SM Alsaleh, JK Hahn, A Casals. Towards retrieving force feedback in robotic-assisted surgery: a supervised neuro-recurrent-vision approach. IEEE Trans Haptics 2017; 10(3): 431–443
https://doi.org/10.1109/TOH.2016.2640289
pmid: 28113330
|
| 84 |
A Marban, V Srinivasan, W Samek, J Fernández, A Casals. Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid: IEEE, 2018: 761–768
|
| 85 |
N Ahmidi, L Tao, S Sefati, Y Gao, C Lea, BB Haro, L Zappella, S Khudanpur, R Vidal, GD Hager. A dataset and benchmarks for segmentation and recognition of gestures in robotic surgery. IEEE Trans Biomed Eng 2017; 64(9): 2025–2041
https://doi.org/10.1109/TBME.2016.2647680
pmid: 28060703
|
| 86 |
MJ Fard, S Ameri, RB Chinnam, RD Ellis. Soft boundary approach for unsupervised gesture segmentation in robotic-assisted surgery. IEEE Robot Autom Lett 2017; 2(1): 171–178
https://doi.org/10.1109/LRA.2016.2585303
|
| 87 |
S Krishnan, A Garg, S Patil, C Lea, G Hager, P Abbeel, K Goldberg. Transition state clustering: unsupervised surgical trajectory segmentation for robot learning. Int J Robot Res 2017; 36(13–14): 1595–1618
https://doi.org/10.1177/0278364917743319
|
| 88 |
A Murali, A Garg, S Krishnan, FT Pokorny, P Abbeel, T Darrell, K Goldberg. TSC-DL: unsupervised trajectory segmentation of multi-modal surgical demonstrations with deep learning. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Stockholm: IEEE, 2016: 4150–4157
|
| 89 |
L Zappella, B Béjar, G Hager, R Vidal. Surgical gesture classification from video and kinematic data. Med Image Anal 2013; 17(7): 732–745
https://doi.org/10.1016/j.media.2013.04.007
pmid: 23706754
|
| 90 |
L Tao, L Zappella, GD Hager, R Vidal. Surgical gesture segmentation and recognition. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2013: 339–346
|
| 91 |
F Despinoy, D Bouget, G Forestier, C Penet, N Zemiti, P Poignet, P Jannin. Unsupervised trajectory segmentation for surgical gesture recognition in robotic training. IEEE Trans Biomed Eng 2016; 63(6): 1280–1291
https://doi.org/10.1109/TBME.2015.2493100
pmid: 26513773
|
| 92 |
R DiPietro, N Ahmidi, A Malpani, M Waldram, GI Lee, MR Lee, SS Vedula, GD Hager. Segmenting and classifying activities in robot-assisted surgery with recurrent neural networks. Int J CARS 2019; 14(11): 2005–2020
https://doi.org/10.1007/s11548-019-01953-x
pmid: 31037493
|
| 93 |
D Liu, T Jiang. Deep reinforcement learning for surgical gesture segmentation and classification. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2018: 247–255
|
| 94 |
N Padoy, GD Hager. Human-machine collaborative surgery using learned models. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Shanghai: IEEE, 2011: 5285–5292
|
| 95 |
S Calinon, D Bruno, MS Malekzadeh, T Nanayakkara, DG Caldwell. Human-robot skills transfer interfaces for a flexible surgical robot. Comput Methods Programs Biomed 2014; 116(2): 81–96
https://doi.org/10.1016/j.cmpb.2013.12.015
pmid: 24491285
|
| 96 |
T Osa, N Sugita, M Mitsuishi. Online trajectory planning in dynamic environments for surgical task automation. In: Robotics: Science and Systems. Berkeley. 2014: 1–9
|
| 97 |
J Van Den Berg, S Miller, D Duckworth, H Hu, A Wan, XY Fu, K Goldberg, P Abbeel. Superhuman performance of surgical tasks by robots using iterative learning from human-guided demonstrations. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Alaska: IEEE, 2010: 2074–2081
|
| 98 |
A Murali, S Sen, B Kehoe, A Garg, S McFarland, S Patil, WD Boyd, S Lim, P Abbeel, K Goldberg. Learning by observation for surgical subtasks: multilateral cutting of 3D viscoelastic and 2D orthotropic tissue phantoms. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Seattle: IEEE, 2015: 1202–1209
|
| 99 |
H Mayer, F Gomez, D Wierstra, I Nagy, A Knoll, J Schmidhuber. A system for robotic heart surgery that learns to tie knots using recurrent neural networks. Adv Robot 2008; 22(13–14): 1521–1537
https://doi.org/10.1163/156855308X360604
|
| 100 |
E De Momi, L Kranendonk, M Valenti, N Enayati, G Ferrigno. A neural network-based approach for trajectory planning in robot–human handover tasks. Front Robot AI 2016; 3: 34
https://doi.org/10.3389/frobt.2016.00034
|
| 101 |
J Kober, JA Bagnell, J Peters. Reinforcement learning in robotics: a survey. Int J Robot Res 2013; 32(11): 1238–1274
https://doi.org/10.1177/0278364913495721
|
| 102 |
P Abbeel, AY Ng. Apprenticeship learning via inverse reinforcement learning. In: Proceedings of International Conference on Machine Learning (ICML). Beijing: ACM, 2004: 1
|
| 103 |
X Tan, CB Chng, Y Su, KB Lim, CK Chui. Robot-assisted training in laparoscopy using deep reinforcement learning. IEEE Robot Autom Lett 2019; 4(2): 485–492
https://doi.org/10.1109/LRA.2019.2891311
|
| 104 |
J Ho, S Ermon. Generative adversarial imitation learning. In: Proceedings of Advances in Neural Information Processing Systems (NIPS). Barcelona. 2016: 4565–4573
|
| 105 |
S Levine, C Finn, T Darrell, P Abbeel. End-to-end training of deep visuomotor policies. J Mach Learn Res 2016; 17(1): 1334–1373
|
| 106 |
B Thananjeyan, A Garg, S Krishnan, C Chen, L Miller, K Goldberg. Multilateral surgical pattern cutting in 2D orthotropic gauze with deep reinforcement learning policies for tensioning. In: Proceedings of IEEE International Conference on Robotics and Automation (ICRA). Singapore: IEEE, 2017: 2371–2378
|
| 107 |
GZ Yang, L Dempere-Marco, XP Hu, A Rowe. Visual search: psychophysical models and practical applications. Image Vis Comput 2002; 20(4): 291–305
https://doi.org/10.1016/S0262-8856(02)00022-7
|
| 108 |
GZ Yang, GP Mylonas, KW Kwok, A Chung. Perceptual docking for robotic control. In: International Workshop on Medical Imaging and Virtual Reality. New York: Springer, 2008: 21–30
|
| 109 |
M Visentini-Scarzanella, GP Mylonas, D Stoyanov, GZ Yang. I-brush: a gaze-contingent virtual paintbrush for dense 3D reconstruction in robotic assisted surgery. In: Proceedings of International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI). New York: Springer, 2009: 353–360
|
| 110 |
K Fujii, G Gras, A Salerno, GZ Yang. Gaze gesture based human robot interaction for laparoscopic surgery. Med Image Anal 2018; 44: 196–214
https://doi.org/10.1016/j.media.2017.11.011
pmid: 29277075
|
| 111 |
A Nishikawa, T Hosoi, K Koara, D Negoro, A Hikita, S Asano, H Kakutani, F Miyazaki, M Sekimoto, M Yasui, Y Miyake, S Takiguchi, M Monden. Face mouse: a novel human-machine interface for controlling the position of a laparoscope. IEEE Trans Robot Autom 2003; 19(5): 825–841
https://doi.org/10.1109/TRA.2003.817093
|
| 112 |
N Hong, M Kim, C Lee, S Kim. Head-mounted interface for intuitive vision control and continuous surgical operation in a surgical robot system. Med Biol Eng Comput 2019; 57(3): 601–614
https://doi.org/10.1007/s11517-018-1902-4
pmid: 30280331
|
| 113 |
A Graves, AR Mohamed, G Hinton. Speech recognition with deep recurrent neural networks. In: Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing. Vancouver: IEEE, 2013: 6645–6649
|
| 114 |
K Zinchenko, CY Wu, KT Song. A study on speech recognition control for a surgical robot. IEEE Trans Industr Inform 2017; 13(2): 607–615
https://doi.org/10.1109/TII.2016.2625818
|
| 115 |
MG Jacob, YT Li, GA Akingba, JP Wachs. Collaboration with a robotic scrub nurse. Commun ACM 2013; 56(5): 68–75
https://doi.org/10.1145/2447976.2447993
|
| 116 |
R Wen, WL Tay, BP Nguyen, CB Chng, CK Chui. Hand gesture guided robot-assisted surgery based on a direct augmented reality interface. Comput Methods Programs Biomed 2014; 116(2): 68–80
https://doi.org/10.1016/j.cmpb.2013.12.018
pmid: 24438993
|
| 117 |
OK Oyedotun, A Khashman. Deep learning in vision-based static hand gesture recognition. Neural Comput Appl 2017; 28(12): 3941–3951
https://doi.org/10.1007/s00521-016-2294-8
|
| 118 |
Y Hu, L Zhang, W Li, GZ Yang. Robotic sewing and knot tying for personalized stent graft manufacturing. In: Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Madrid: IEEE, 2018: 754–760
|
| 119 |
Y Hu, W Li, L Zhang, GZ Yang. Designing, prototyping, and testing a flexible suturing robot for transanal endoscopic microsurgery. IEEE Robot Autom Lett 2019; 4(2): 1669–1675
https://doi.org/10.1109/LRA.2019.2896883
|
| 120 |
GZ Yang, J Bellingham, PE Dupont, P Fischer, L Floridi, R Full, N Jacobstein, V Kumar, M McNutt, R Merrifield, BJ Nelson, B Scassellati, M Taddeo, R Taylor, M Veloso, ZL Wang, R Wood. The grand challenges of Science Robotics. Sci Robot 2018; 3(14): eaar7650
https://doi.org/10.1126/scirobotics.aar7650
|
| 121 |
GZ Yang, J Cambias, K Cleary, E Daimler, J Drake, PE Dupont, N Hata, P Kazanzides, S Martel, RV Patel, VJ Santos, RH Taylor. Medical robotics: regulatory, ethical, and legal considerations for increasing levels of autonomy. Sci Robot 2017; 2(4): eaam8638
https://doi.org/10.1126/scirobotics.aam8638
|