As open research platforms, the Raven I and Raven II surgical robots have served the robotic surgery research community for ten years. This paper 1) briefly presents the Raven I and Raven II robots, 2) reviews recent publications that build upon the Raven robots, aim to be applied to them, or compare directly against them, and 3) uses the Raven robots as a case study to discuss popular research problems and trends in robotic surgery research. Rather than a thorough literature review, this work covers only works formally published in the past three years and uses these recent publications to analyze research interests, popular open problems, and opportunities in robotic surgery.
Surgical robots have been introduced to operating rooms over the past few decades due to their high sensitivity, small size, and remote controllability. The cable-driven design of many surgical robots allows the systems to be dexterous and lightweight, with instrument diameters as low as 5 mm. However, cable slack and stretch and gear backlash introduce unavoidable uncertainty into the kinematics calculation. Because the reported end-effector position of surgical robots such as the RAVEN-II is computed directly from motor encoder measurements through forward kinematics, it may contain errors as large as 10 mm, whereas the semi-autonomous functions being introduced into abdominal surgery require position errors of at most 1 mm. To address this problem, a cost-effective, real-time, data-driven pipeline for estimating end-effector position error is proposed and tested on the RAVEN-II. Analysis shows that the end-effector position error is reduced to around 1 mm RMS across the entire robot workspace, without relying on a high-resolution motion tracker.
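The abstract does not specify the model used in this pipeline; the following is a minimal sketch of the general idea, assuming a simple ridge-regression correction, hypothetical joint-angle and motor-torque features, and fully synthetic data rather than actual RAVEN-II recordings. The learned term corrects the forward-kinematics estimate toward a tracker-measured ground truth collected once, offline.

```python
# Sketch: learn a correction mapping joint-state features to the residual
# between a tracker-measured "ground truth" position and the position
# reported by forward kinematics from motor encoders.
# All data below is synthetic; feature choices are illustrative assumptions,
# not the pipeline actually used on the RAVEN-II.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_joints = 5000, 7
joint_angles = rng.uniform(-1.0, 1.0, size=(n_samples, n_joints))   # encoder-based
motor_torque = rng.uniform(-0.5, 0.5, size=(n_samples, n_joints))   # proxy for cable load

# Hypothetical FK-reported position and "true" position with cable-induced error.
fk_position = joint_angles[:, :3]                                    # placeholder FK
systematic_error = 0.01 * np.tanh(motor_torque[:, :3])               # up to ~10 mm
true_position = fk_position + systematic_error + rng.normal(0, 1e-4, (n_samples, 3))

# Ridge regression from features to the FK residual (the correction term).
X = np.hstack([joint_angles, motor_torque, np.ones((n_samples, 1))])
Y = true_position - fk_position
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

corrected = fk_position + X @ W
rmse = np.sqrt(np.mean(np.sum((corrected - true_position) ** 2, axis=1)))
print(f"corrected RMS error: {rmse * 1000:.2f} mm")
```

In a real deployment, the right-hand side of the regression would come from a motion-capture session used only for training, so that at run time the correction needs nothing beyond the robot's own encoder and motor signals.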
Datasets are an essential component of training effective machine learning models. In particular, surgical robotic datasets have been key to many advances in semi-autonomous surgery, skill assessment, and training. Simulated surgical environments can enhance data collection by making it faster, simpler, and cheaper than on real systems. In addition, combining data from multiple robotic domains can provide rich and diverse training data for transfer learning algorithms. In this paper, we present the DESK (Dexterous Surgical Skill) dataset. It comprises a set of surgical robotic skills collected during a surgical training task on three robotic platforms: the Taurus II robot, the simulated Taurus II robot, and the YuMi robot. The dataset was used to test the idea of transferring knowledge across domains (e.g., from the Taurus to the YuMi robot) for a surgical gesture classification task with seven gestures. We explored three scenarios: 1) no transfer, 2) transfer from the simulated Taurus to the real Taurus, and 3) transfer from the simulated Taurus to the YuMi robot. We conducted extensive experiments with three supervised learning models and provide baselines for each scenario. The results show that using simulation data during training improves performance on the real robot when limited real data is available. In particular, we obtained an accuracy of 55% on real Taurus data using a model trained only on simulator data, and an accuracy improvement of 34% when 3% of the real data is added to the training process.
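The transfer setup described above can be sketched in a few lines. The sketch below uses a random-forest classifier and synthetic kinematic features as stand-ins; it is not the DESK data or the three models evaluated in the paper, only an illustration of training on simulated-domain data and then adding a small (roughly 3%) slice of real-domain data.

```python
# Sketch of sim-to-real transfer for gesture classification with 7 classes.
# Data and features are synthetic placeholders, not the DESK recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_gestures, n_features = 7, 24
centers = rng.normal(0, 1, size=(n_gestures, n_features))   # shared class structure

def make_split(n, domain_shift):
    """Synthetic kinematic features for n samples with a per-domain offset."""
    y = rng.integers(0, n_gestures, size=n)
    X = centers[y] + rng.normal(0, 0.8, size=(n, n_features)) + domain_shift
    return X, y

X_sim, y_sim = make_split(2000, domain_shift=0.0)
X_real, y_real = make_split(1000, domain_shift=0.5)   # real robot differs from sim
k = int(0.03 * len(X_real))                           # ~3% of real data for training

# Scenario: transfer from simulation only.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_sim, y_sim)
print("sim-only accuracy on real:",
      accuracy_score(y_real[k:], clf.predict(X_real[k:])))

# Scenario: simulation plus a small slice of real data.
X_mix = np.vstack([X_sim, X_real[:k]])
y_mix = np.concatenate([y_sim, y_real[:k]])
clf_mix = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_mix, y_mix)
print("sim + 3% real accuracy:",
      accuracy_score(y_real[k:], clf_mix.predict(X_real[k:])))
```

Both scenarios are evaluated on the same held-out real-domain samples so the comparison isolates the effect of adding the small real-data slice.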
Commercially available surgical-robot technology currently addresses many surgical scenarios for adult patients. The same technology cannot be applied to the benefit of neonatal patients, given their considerably smaller workspace. Medically relevant procedures for neonates include minimally invasive surgery to repair congenital esophageal disorders, which entails suturing the fragile esophagus within the narrow neonatal cavity. In this work, we explore the use of the SmartArm robotic system in a feasibility study using a neonate chest and esophagus model. We show that a medically inexperienced operator can perform two-throw knots inside the neonate chest model using the robotic system.
Many have explored the application of continuum robot manipulators to minimally invasive surgery and have successfully demonstrated the advantages of their flexible design, with some solutions having reached commercialisation and clinical practice. However, the typically high complexity and closed nature of such designs has restricted the shared development of continuum robots across the research community, impeding further progress and the solution of open challenges. To close this gap, this paper introduces ENDO, an open-source three-segment continuum robot manipulator with its control and actuation mechanism, whose focus is on simplicity, affordability, and accessibility. The robot is fabricated from low-cost, off-the-shelf components using rapid prototyping methods, and its implementation details (and those of future iterations), including CAD files and source code, are publicly available in the Open Source Medical Robots initiative's repository on GitHub (https://github.com/OpenSourceMedicalRobots), with the control library also available directly from Arduino. Herein, we present the robot design and control, validate functionality by experimentally evaluating its workspace, and discuss possible paths for future development.
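The workspace evaluation mentioned above can be approximated numerically with the common piecewise-constant-curvature model of a three-segment continuum robot. The sketch below is illustrative only: the segment lengths and bending limits are arbitrary choices, not ENDO's actual parameters, and the sampling is random rather than experimental.

```python
# Sketch: sample reachable tip positions of a 3-segment continuum robot under
# the piecewise-constant-curvature assumption. Parameters are illustrative,
# not taken from the ENDO design.
import numpy as np

def segment_transform(length, theta, phi):
    """Homogeneous transform of one constant-curvature segment.

    theta: total bending angle (rad); phi: bending-plane angle (rad).
    """
    T = np.eye(4)
    if abs(theta) < 1e-9:          # straight segment: pure translation along z
        T[2, 3] = length
        return T
    r = length / theta             # radius of curvature
    c, s = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(phi), np.sin(phi)
    Rz_p = np.array([[cp, -sp, 0], [sp, cp, 0], [0, 0, 1]])
    Ry_t = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    Rz_m = np.array([[cp, sp, 0], [-sp, cp, 0], [0, 0, 1]])
    T[:3, :3] = Rz_p @ Ry_t @ Rz_m                      # bend within rotated plane
    T[:3, 3] = Rz_p @ np.array([r * (1 - c), 0.0, r * s])
    return T

rng = np.random.default_rng(2)
lengths = [0.05, 0.05, 0.05]       # three 50 mm segments (assumed)
tips = []
for _ in range(10000):
    T = np.eye(4)
    for L in lengths:
        theta = rng.uniform(0, np.pi / 2)   # assumed 90-degree maximum bend
        phi = rng.uniform(-np.pi, np.pi)
        T = T @ segment_transform(L, theta, phi)
    tips.append(T[:3, 3])
tips = np.array(tips)
print("tip z range [m]:", tips[:, 2].min(), tips[:, 2].max())
```

Plotting the sampled tip positions gives a point-cloud estimate of the reachable workspace that can be compared against measurements from the physical prototype.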
Autonomous surgical execution relieves surgeons of tedious routines and fatigue. Recent learning-based methods, especially reinforcement learning (RL) methods, achieve promising performance for dexterous manipulation, but usually require simulation to collect data efficiently and reduce hardware cost. Existing learning-based simulation platforms for medical robots suffer from limited scenarios and simplified physical interactions, which degrades the real-world performance of learned policies. In this work, we design SurRoL, an RL-centered simulation platform for surgical robot learning that is compatible with the da Vinci Research Kit (dVRK). SurRoL integrates a user-friendly RL library for algorithm development with a real-time physics engine, enabling more PSM/ECM scenarios and more realistic physical interactions. Ten learning-based surgical tasks that are common in real autonomous surgical execution are built into the platform. We evaluate SurRoL with RL algorithms in simulation, provide in-depth analysis, deploy the trained policies on the real dVRK, and show that SurRoL achieves better transferability to the real world.
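A training run against such a platform can be sketched as follows, assuming a gym-style task interface. The task id, the registration import, and the choice of stable-baselines3 PPO are assumptions made for illustration; refer to the SurRoL repository for its actual task names, observation formats, and recommended training setup.

```python
# Sketch: train and roll out an RL policy on a gym-style surgical task.
# Task id and registration import are placeholders, not verified SurRoL API.
import gym
from stable_baselines3 import PPO

import surrol.gym  # noqa: F401  (assumed to register SurRoL tasks with gym)

env = gym.make("NeedleReach-v0")          # placeholder task id

# Goal-conditioned tasks typically expose dict observations, hence MultiInputPolicy.
model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

# Roll out the learned policy for one episode (classic gym reset/step API assumed).
obs = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```

Policies trained this way in simulation are the artifacts that would then be transferred to, and evaluated on, the real dVRK hardware.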