Knit 1, Perl 2
Most hospitals offer fellowships for training in robotic surgery, along with simulators and continuing-education programs that build understanding of the procedures through observation of more experienced users. However, the learning curve depends on the surgeon's skill level and the difficulty of the procedure, and while simulators and video are valuable, they lack the haptic feedback and real-world complications that are essential to successful robotic surgical outcomes. Actual operating time with the tools matters most for gaining expertise, and that is precisely what simulators struggle to provide. That said, with over 10 million robotic surgeries performed through 2021, a large library of video and kinematics data has been recorded during those procedures that can be used for post-operative review and training.
Most surgeons have limited time to review video of such procedures, but now that AI can build multi-dimensional models from video data, researchers at Johns Hopkins and Stanford have been using this library of robotic procedures to train a robotic surgical system to perform without a surgeon's assistance. The training approach is called imitation learning, in which the AI learns to predict actions from observations of past procedures. This type of learning is typically used to train service robots in home settings; surgical procedures, however, demand more precise movements on deformable objects (skin, organs, blood vessels, etc.), at times under poor lighting. And while in theory the videos provide exact mechanical information about every movement, there is a big difference between the accuracy and physical mechanics required of an industrial robotic arm and a surgical one.
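To make the idea concrete, here is a minimal sketch of behavior cloning, the simplest form of imitation learning: a neural network is trained to regress the surgeon's recorded action from what the robot observes. Everything here (the network shape, the 7-dimensional action, the loss) is an illustrative assumption, not the researchers' actual system.

```python
import torch
import torch.nn as nn

class SurgicalPolicy(nn.Module):
    """Hypothetical behavior-cloning policy: maps an endoscope frame and
    the current instrument pose to the next motion command."""

    def __init__(self, action_dim: int = 7):  # e.g. xyz + orientation + gripper (assumed)
        super().__init__()
        # Small CNN encoder for the camera image; sized for 3x224x224 inputs.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 26 * 26, 256), nn.ReLU(),
        )
        # MLP head combines image features with the current kinematic state.
        self.head = nn.Sequential(
            nn.Linear(256 + action_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, frame: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        features = self.encoder(frame)
        return self.head(torch.cat([features, state], dim=-1))

def behavior_cloning_step(policy, optimizer, frames, states, expert_actions):
    """One supervised update: regress the surgeon's recorded action."""
    predicted = policy(frames, states)
    loss = nn.functional.mse_loss(predicted, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a real surgical setting the observation would likely pair camera frames with richer kinematic state, and the policy would be far more sophisticated, but the supervised structure, predicting the expert's action from the expert's observations, is the core of imitation learning.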
Before AI, making a surgical robot perform an autonomous procedure meant laboriously breaking every movement down into three-dimensional mechanical data (x, y, z position, force, movement speed, etc.). That data was particular to one specific procedure, was limited to very simple tasks, and was difficult to adapt to what might be called normal variances. Using machine learning to transform the library of video data into training data, much as large language models transform text and images into referential data used to predict outcomes, the researchers say they have trained a robot to perform complex surgical tasks at the same level as human surgeons, just by watching robotic surgeries performed by other doctors.
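As a rough illustration of that transformation, the sketch below turns one recorded procedure (synchronized video frames plus logged tool poses) into the (observation, action) pairs that imitation learning consumes. The field names and the simple relative-motion action are assumptions made for clarity, not the published pipeline.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Demonstration:
    """Hypothetical container for one recorded procedure: synchronized
    endoscope frames and instrument kinematics logged by the robot."""
    frames: np.ndarray   # (T, H, W, 3) video frames
    poses: np.ndarray    # (T, 7) tool pose per frame: x, y, z + orientation quaternion

def to_training_pairs(demo: Demonstration):
    """Turn a recorded procedure into (observation, action) pairs.

    Here the 'action' at time t is simply the change in pose the surgeon
    commanded between t and t+1, so the hard-coded waypoints of the old
    approach are replaced by examples the model can generalize from."""
    pairs = []
    for t in range(len(demo.frames) - 1):
        observation = (demo.frames[t], demo.poses[t])
        action = demo.poses[t + 1] - demo.poses[t]  # naive delta; real systems treat rotation carefully
        pairs.append((observation, action))
    return pairs
```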
Here is the video of the autonomous surgical robot using the video data for reference:
https://youtu.be/c1E170Xr6BM
[1] Straits Research. Robotic Surgery Market Size & Trends. straitsresearch.com
[2] Sheetz KH, Claflin J, Dimick JB. Trends in the adoption of robotic surgery for common surgical procedures. JAMA Netw Open. 2020;3(1):e1918911.