
SuPer: A Surgical Perception Framework for Endoscopic Tissue Manipulation with Surgical Robotics

Added by Yang Li
Publication date: 2019
Language: English





Traditional control and task automation have been successfully demonstrated in a variety of structured, controlled environments using highly specialized, well-modeled robotic systems together with multiple sensors. Applying autonomy to endoscopic surgery, however, is very challenging, particularly for soft-tissue work, due to the lack of high-quality images and the unpredictable, constantly deforming environment. In this work, we propose a novel surgical perception framework, SuPer, for surgical robotic control. The framework continuously collects 3D geometric information, allowing it to map a deformable surgical field while tracking rigid instruments within that field. To achieve this, a model-based tracker localizes the surgical tool using a kinematic prior, while a model-free tracker reconstructs the deformable environment and provides an estimated point cloud as a map of that environment. The proposed framework was implemented on the da Vinci Surgical System in real time with an end-effector controller in which target configurations are set and regulated through the framework. The framework successfully completed soft-tissue manipulation tasks with high accuracy, a promising demonstration for the future of surgical autonomy. In addition, we provide our dataset for further surgical research.
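To make the dual-tracker idea concrete, here is a minimal Python sketch of how a kinematic-prior tool tracker and a deformable point-cloud map might be combined. This is an illustration of the general technique, not the SuPer implementation: the joint layout, the blending rule in `fuse_observation`, and all names and parameters are assumptions.

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform for a revolute joint rotating about z."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0, 0.0],
                     [s,  c, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def tool_pose(joint_angles, link_offsets):
    """Model-based tracker: chain per-joint transforms (the kinematic
    prior) to localize the rigid instrument's end-effector."""
    pose = np.eye(4)
    for theta, offset in zip(joint_angles, link_offsets):
        step = rot_z(theta)
        step[:3, 3] = offset          # fixed translation along the link
        pose = pose @ step
    return pose                        # 4x4 end-effector pose

def fuse_observation(tissue_map, observed, alpha=0.1):
    """Model-free tracker (toy stand-in): relax each mapped point toward
    its nearest new observation so the map follows tissue deformation."""
    for i, p in enumerate(tissue_map):
        j = np.argmin(np.linalg.norm(observed - p, axis=1))
        tissue_map[i] = (1 - alpha) * p + alpha * observed[j]
    return tissue_map

# Example with invented values: a 3-joint tool and a 4-point tissue map.
pose = tool_pose([0.1, -0.2, 0.3], [np.array([0, 0, 0.1])] * 3)
tissue = fuse_observation(np.zeros((4, 3)), np.random.rand(10, 3))
```

In the actual framework the model-free tracker performs a full non-rigid reconstruction of the tissue; the nearest-neighbor blend above merely stands in for that step to show how the two estimates coexist in one scene representation.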




Related Research

Surgical robots have been in clinical use since the mid-1990s. Robot-assisted surgeries offer many benefits over the conventional approach, including lower risk of infection and blood loss, shorter recovery, and an overall safer procedure for patients. The past few decades have seen many emerging surgical robotic platforms that can work in the complex and confined channels of internal human organs and improve the cognitive and physical skills of surgeons during the operation. Advanced technologies for sensing, actuation, and intelligent control have enabled multiple surgical devices to operate simultaneously within the human body at low cost and with greater efficiency. Despite these advances, current surgical intervention systems cannot execute autonomous tasks or make cognitive decisions analogous to those of humans. This paper overviews the historical development of surgery from conventional open to robot-assisted approaches, discussing the capabilities of the advanced intelligent systems and devices currently implemented in existing surgical robotic systems. It also revisits available autonomous surgical platforms, with comments on the essential technologies, existing challenges, and suggestions for the future development of intelligent robot-assisted surgical systems towards fully autonomous operation.
Many have explored the application of continuum robot manipulators for minimally invasive surgery and have successfully demonstrated the advantages their flexible design provides, with some solutions having reached commercialisation and clinical practice. However, the high complexity and closed nature typical of such designs have traditionally restricted the shared development of continuum robots across the research area, impeding further progress and the solution of open challenges. To close this gap, this paper introduces ENDO, an open-source 3-segment continuum robot manipulator with its control and actuation mechanism, whose focus is on simplicity, affordability, and accessibility. The robotic system is fabricated from low-cost, off-the-shelf components and rapid prototyping methods, and its implementation information (and that of future iterations), including CAD files and source code, is publicly available in the Open Source Medical Robots initiative's repository on GitHub (https://github.com/OpenSourceMedicalRobots), with the control library also available directly from Arduino. Herein, we present details of the robot design and control, validate functionality by experimentally evaluating its workspace, and discuss possible paths for future development.
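As a rough picture of what such a workspace evaluation can look like in simulation, the sketch below samples tip positions of a 3-segment arm under the widely used constant-curvature model. The ENDO paper's own kinematics are not reproduced here, so the model choice, segment length, curvature ranges, and all function names are assumptions made for illustration.

```python
import numpy as np

def rot_z(a):
    """Homogeneous rotation about the base z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0, 0], [s, c, 0, 0],
                     [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)

def segment_transform(kappa, phi, length):
    """Constant-curvature transform for one continuum segment:
    bend angle theta = kappa * length in the plane selected by phi."""
    if abs(kappa) < 1e-9:                 # (near-)straight segment
        T = np.eye(4)
        T[2, 3] = length
        return T
    theta = kappa * length
    c, s = np.cos(theta), np.sin(theta)
    arc = np.array([[c, 0, s, (1 - c) / kappa],
                    [0, 1, 0, 0],
                    [-s, 0, c, s / kappa],
                    [0, 0, 0, 1]])
    return rot_z(phi) @ arc @ rot_z(-phi)

def tip_position(params, seg_len=0.05):
    """Chain the segments; params is a list of (kappa, phi) pairs."""
    T = np.eye(4)
    for kappa, phi in params:
        T = T @ segment_transform(kappa, phi, seg_len)
    return T[:3, 3]

# Crude Monte-Carlo workspace estimate: sample random configurations.
rng = np.random.default_rng(0)
workspace = np.array([
    tip_position(list(zip(rng.uniform(-30, 30, 3),         # curvatures, 1/m
                          rng.uniform(-np.pi, np.pi, 3))))  # bend planes
    for _ in range(1000)])
print(workspace.min(axis=0), workspace.max(axis=0))         # bounding box
```

A physical evaluation would instead record measured tip positions, but the sampling idea, covering the actuation ranges and collecting the reachable set, is the same.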
Robotic-assisted surgery is now well-established in clinical practice and has become the gold standard clinical treatment option for several clinical indications. The field of robotic-assisted surgery is expected to grow substantially in the next decade with a range of new robotic devices emerging to address unmet clinical needs across different specialities. A vibrant surgical robotics research community is pivotal for conceptualizing such new systems as well as for developing and training the engineers and scientists to translate them into practice. The da Vinci Research Kit (dVRK), an academic and industry collaborative effort to re-purpose decommissioned da Vinci surgical systems (Intuitive Surgical Inc, CA, USA) as a research platform for surgical robotics research, has been a key initiative for addressing a barrier to entry for new research groups in surgical robotics. In this paper, we present an extensive review of the publications that have been facilitated by the dVRK over the past decade. We classify research efforts into different categories and outline some of the major challenges and needs for the robotics community to maintain this initiative and build upon it.
Short response time is critical for future military medical operations in austere settings or remote areas, and effective patient care at the point of injury can greatly benefit from the integration of semi-autonomous robotic systems. To achieve autonomy, robots would require massive libraries of maneuvers; while these can be collected in controlled settings, obtaining surgical data in austere settings is difficult. Hence, in this paper, we present the Dexterous Surgical Skill (DESK) database for knowledge transfer between robots. The peg-transfer task was selected because it is one of the six main tasks of laparoscopic training. We also provide a machine-learning framework to evaluate novel transfer learning methodologies on this database. The collected DESK dataset comprises a set of surgical robotic skills recorded on four robotic platforms: Taurus II, simulated Taurus II, YuMi, and the da Vinci Research Kit. We explored two learning scenarios: no-transfer and domain-transfer. In the no-transfer scenario, the training and testing data were obtained from the same domain, whereas in the domain-transfer scenario the training data were a blend of simulated and real robot data that was tested on a real robot. Using simulation data enhances performance on the real robot when limited or no real data is available. The transfer model achieved 81% accuracy for the YuMi robot when the ratio of real to simulated data was 22% to 78%. For the Taurus II and da Vinci robots, the model achieved 97.5% and 93% accuracy, respectively, training only with simulation data. These results indicate that simulation can be used to augment training data and enhance model performance in real scenarios, showing the potential for future use of surgical data from the operating room in deployable surgical robots in remote areas.
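The domain-transfer protocol described above is straightforward to sketch. The following Python example trains on a 78% simulated / 22% real blend and evaluates on held-out real-robot data only; it uses placeholder random arrays and an off-the-shelf classifier rather than the paper's model, and all array shapes, label counts, and names are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder arrays standing in for DESK features and gesture labels;
# a real experiment would load recorded kinematic/video features instead.
rng = np.random.default_rng(0)
X_sim, y_sim = rng.normal(size=(780, 16)), rng.integers(0, 7, 780)
X_real_train, y_real_train = rng.normal(size=(220, 16)), rng.integers(0, 7, 220)
X_real_test, y_real_test = rng.normal(size=(200, 16)), rng.integers(0, 7, 200)

# Domain-transfer scenario: train on a 78% simulated / 22% real blend...
X_train = np.vstack([X_sim, X_real_train])
y_train = np.concatenate([y_sim, y_real_train])
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# ...and evaluate on held-out real-robot data only. With random placeholder
# data the score is near chance; the point is the evaluation protocol.
acc = accuracy_score(y_real_test, clf.predict(X_real_test))
print(f"real-robot accuracy: {acc:.2f}")
```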
Learning meaningful visual representations in an embedding space can facilitate generalization in downstream tasks such as action segmentation and imitation. In this paper, we learn a motion-centric representation of surgical video demonstrations by grouping them into action segments (sub-goals/options) in a semi-supervised manner. We present Motion2Vec, an algorithm that learns a deep embedding feature space from video observations by minimizing a metric-learning loss in a Siamese network: images from the same action segment are pulled together and pushed away from randomly sampled images of other segments, while the temporal ordering of the images is respected. After pre-training the Siamese network, the embeddings are iteratively segmented with a recurrent neural network for a given parametrization of the embedding space. We use only a small set of labeled video segments to semantically align the embedding space, and we assign pseudo-labels to the remaining unlabeled data by inference on the learned model parameters. We demonstrate the use of this representation to imitate surgical suturing motions from publicly available videos of the JIGSAWS dataset. Results give 85.5% average segmentation accuracy, suggesting improved performance over several state-of-the-art baselines, while kinematic pose imitation gives 0.94 cm position error per observation on the test set. Videos, code, and data are available at https://sites.google.com/view/motion2vec
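The pull-together/push-apart training signal can be illustrated with a standard triplet-style loss in PyTorch. This is a sketch of the general Siamese metric-learning objective, not Motion2Vec's exact loss (which additionally enforces temporal ordering); the network architecture, feature dimension, and margin value below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Toy stand-in for one Siamese branch: maps a frame feature
    vector into a unit-norm embedding space."""
    def __init__(self, in_dim=512, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, emb_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def metric_loss(model, anchor, positive, negative, margin=0.2):
    """Triplet-style metric-learning loss: pull same-segment pairs
    together, push other-segment samples at least `margin` farther away."""
    za, zp, zn = model(anchor), model(positive), model(negative)
    d_pos = (za - zp).pow(2).sum(dim=-1)   # same action segment
    d_neg = (za - zn).pow(2).sum(dim=-1)   # random other segment
    return F.relu(d_pos - d_neg + margin).mean()

# Example step on random stand-in features (batch of 8 frames).
model = EmbeddingNet()
a, p, n = (torch.randn(8, 512) for _ in range(3))
loss = metric_loss(model, a, p, n)
loss.backward()                            # ready for an optimizer step
```

Normalizing embeddings to the unit sphere, as done in `forward`, is a common choice in metric learning because it bounds pairwise distances and stabilizes the margin.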
