
Breaking Barriers in Robotic Soft Tissue Surgery: Conditional Autonomous Intestinal Anastomosis

Published by: Hamed Saeidi
Publication date: 2021
Research field: Informatics Engineering
Language: English





Autonomous robotic surgery has the potential to provide efficacy, safety, and consistency independent of individual surgeons' skill and experience. Autonomous soft-tissue surgery in unstructured and deformable environments is especially challenging, as it necessitates intricate imaging, tissue tracking, and surgical planning techniques, as well as precise execution via highly adaptable control strategies. In the laparoscopic setting, soft-tissue surgery is even more challenging due to the need for high maneuverability and repeatability under motion and vision constraints. We demonstrate the first robotic laparoscopic soft-tissue surgery with a level of autonomy of 3 out of 5, which allows the operator to select among autonomously generated surgical plans while the robot executes a wide range of tasks independently. We also demonstrate the first in vivo autonomous robotic laparoscopic surgery via intestinal anastomosis on porcine models. We compared criteria including needle placement corrections, suture spacing, suture bite size, completion time, lumen patency, and leak pressure across the developed system, manual laparoscopic surgery, and robot-assisted surgery (RAS). The ex vivo results indicate that our system outperforms expert surgeons and RAS techniques in terms of consistency and accuracy, and it achieves remarkable anastomosis quality in living pigs. These results demonstrate that surgical robots exhibiting high levels of autonomy have the potential to improve consistency, patient outcomes, and access to a standard surgical technique.
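As a rough illustration of the consistency criteria listed above, suture spacing uniformity can be summarized with simple statistics such as the mean, standard deviation, and coefficient of variation of inter-suture distances. The snippet below is a minimal sketch of such a summary; the function name and data are hypothetical and not taken from the paper's evaluation code.

import numpy as np

def spacing_consistency(suture_positions_mm):
    """Summarize suture spacing along an anastomosis.

    suture_positions_mm: 1-D array of suture positions (mm) along the
    circumference, in placement order (hypothetical data).
    """
    spacings = np.diff(np.sort(suture_positions_mm))
    mean = spacings.mean()
    std = spacings.std(ddof=1)
    cv = std / mean  # lower coefficient of variation = more consistent spacing
    return {"mean_mm": mean, "std_mm": std, "cv": cv}

# Example: compare two hypothetical runs (autonomous vs. manual placement)
autonomous = spacing_consistency(np.array([0.0, 4.9, 10.1, 15.0, 20.1, 24.9]))
manual = spacing_consistency(np.array([0.0, 3.7, 9.8, 16.4, 19.2, 25.5]))
print(autonomous, manual)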




Read also

Deep Reinforcement Learning (DRL) is a viable solution for automating repetitive surgical subtasks due to its ability to learn complex behaviours in a dynamic environment. This task automation could lead to reduced surgeons' cognitive workload, increased precision in critical aspects of the surgery, and fewer patient-related complications. However, current DRL methods do not guarantee any safety criteria as they maximise cumulative rewards without considering the risks associated with the actions performed. Due to this limitation, the application of DRL in the safety-critical paradigm of robot-assisted Minimally Invasive Surgery (MIS) has been constrained. In this work, we introduce a Safe-DRL framework that incorporates safety constraints for the automation of surgical subtasks via DRL training. We validate our approach in a virtual scene that replicates a tissue retraction task commonly occurring in multiple phases of an MIS. Furthermore, to evaluate the safe behaviour of the robotic arms, we formulate a formal verification tool for DRL methods that provides the probability of unsafe configurations. Our results indicate that a formal analysis guarantees safety with high confidence such that the robotic instruments operate within the safe workspace and avoid hazardous interaction with other anatomical structures.
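One common way to impose safety constraints of this kind during DRL training is to filter or penalize actions that would take the instrument outside a predefined safe workspace. The sketch below shows that general idea only; the environment interface, workspace bounds, and penalty value are assumptions for illustration, not the authors' Safe-DRL implementation.

import numpy as np

SAFE_LOW = np.array([-0.05, -0.05, 0.00])   # assumed safe-workspace bounds (m)
SAFE_HIGH = np.array([0.05, 0.05, 0.08])
UNSAFE_PENALTY = -10.0                       # assumed penalty magnitude

def safe_step(env, state, action):
    """Wrap an environment step with a simple safety filter.

    If the predicted next tool-tip position leaves the safe workspace,
    the action is replaced by a zero (hold) action and penalized, so the
    learned policy is discouraged from unsafe configurations.
    """
    predicted_tip = state["tip_pos"] + action          # naive one-step prediction
    unsafe = np.any(predicted_tip < SAFE_LOW) or np.any(predicted_tip > SAFE_HIGH)
    if unsafe:
        action = np.zeros_like(action)                 # block the unsafe motion
    next_state, reward, done, info = env.step(action)
    if unsafe:
        reward += UNSAFE_PENALTY
    return next_state, reward, done, info

A verification-style estimate of the probability of unsafe configurations can then be obtained, for example, by counting how often the filter triggers over many evaluation rollouts.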
Purpose: Surgical simulations play an increasingly important role in surgeon education and in developing algorithms that enable robots to perform surgical subtasks. To model anatomy, Finite Element Method (FEM) simulations have been held as the gold standard for calculating accurate soft-tissue deformation. Unfortunately, their accuracy is highly dependent on the simulation parameters, which can be difficult to obtain. Methods: In this work, we investigate how live data acquired during any robotic endoscopic surgical procedure may be used to correct for inaccurate FEM simulation results. Since FEMs are calculated from initial parameters and cannot directly incorporate observations, we propose to add a correction factor that accounts for the discrepancy between simulation and observations. We train a network to predict this correction factor. Results: To evaluate our method, we use an open-source da Vinci Surgical System to probe a soft-tissue phantom and replay the interaction in simulation. We train the network to correct for the difference between the predicted mesh position and the measured point cloud. This results in a 15-30% improvement in the mean distance, demonstrating the effectiveness of our approach across a large range of simulation parameters. Conclusion: We show a first step towards a framework that synergistically combines the benefits of model-based simulation and real-time observations. It corrects discrepancies between simulation and the scene that result from inaccurate modeling parameters. This can provide a more accurate simulation environment for surgeons and better data with which to train algorithms.
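The correction-factor idea can be pictured as a residual on top of the FEM prediction: a network maps the simulated vertex positions to a per-vertex offset that is added to the simulated mesh, and is trained so the corrected mesh matches the observed point cloud. The following is a minimal PyTorch-style sketch under assumed shapes and layer sizes; it is not the paper's network architecture.

import torch
import torch.nn as nn

class CorrectionNet(nn.Module):
    """Predict a per-vertex correction added to the FEM-simulated mesh."""
    def __init__(self, n_vertices, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_vertices * 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_vertices * 3),
        )

    def forward(self, fem_vertices):                   # (B, n_vertices, 3)
        flat = fem_vertices.flatten(1)
        correction = self.mlp(flat).view_as(fem_vertices)
        return fem_vertices + correction               # corrected mesh positions

# Training target (per the abstract): minimize the distance between the
# corrected mesh and the measured point cloud, e.g. with a chamfer-style loss.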
In contrast to manned missions, the application of autonomous robots to space exploration decreases the safety concerns of exploration missions while extending the exploration distance, since return transportation is not necessary for robotic missions. In addition, the employment of robots in these missions also decreases mission complexity and cost because there is no need for onboard life support systems: robots can withstand and operate in harsh conditions, for instance, extreme temperature, pressure, and radiation, where humans cannot survive. In this article, we introduce the environments on Mars, review the existing autonomous driving techniques deployed on Earth, and explore the technologies required to enable future commercial autonomous space robotic explorers. Last but not least, we also present one of the urgent technical challenges for autonomous space explorers, namely, onboard computing power.
In minimally invasive surgery, it is important to rebuild and visualize the latest deformed shape of soft-tissue surfaces to mitigate tissue damage. This paper proposes an innovative Simultaneous Localization and Mapping (SLAM) algorithm for deformable dense reconstruction of surfaces using a sequence of images from a stereoscope. We introduce a warping field based on Embedded Deformation (ED) nodes with 3D shapes recovered from consecutive pairs of stereo images. The warping field is estimated by deforming the last updated model to the current live model. Our SLAM system can: (1) incrementally build a live model by progressively fusing new observations with vivid, accurate texture; (2) estimate the deformed shape of unobserved regions with the As-Rigid-As-Possible principle; (3) show the consecutive shapes of the models; and (4) estimate the current relative pose between the soft tissue and the scope. In vivo experiments with publicly available datasets demonstrate that 3D models can be incrementally built for different soft tissues with different deformations from sequences of stereo images obtained by laparoscopes. The results show the potential clinical application of our SLAM system for providing the surgeon with useful shape and texture information in minimally invasive surgery.
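Embedded Deformation warps each surface point by blending the rigid transforms attached to nearby deformation nodes. The snippet below is a minimal sketch of that blending step (Sumner-style ED), with node transforms and weights assumed to be given; it is not the authors' SLAM code, where these quantities are estimated from the stereo observations.

import numpy as np

def warp_point(p, node_pos, node_R, node_t, weights):
    """Warp a 3-D point with Embedded Deformation (ED) nodes.

    p         : (3,) point on the last updated model
    node_pos  : (K, 3) ED node positions g_k
    node_R    : (K, 3, 3) per-node rotations R_k
    node_t    : (K, 3) per-node translations t_k
    weights   : (K,) normalized blending weights w_k(p)
    Returns the deformed point sum_k w_k [R_k (p - g_k) + g_k + t_k].
    """
    warped = np.zeros(3)
    for g, R, t, w in zip(node_pos, node_R, node_t, weights):
        warped += w * (R @ (p - g) + g + t)
    return warped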
Vitreoretinal surgery is challenging even for expert surgeons owing to the delicate target tissues and the diminutive 7-mm-diameter workspace in the retina. In addition to improved dexterity and accuracy, robot assistance allows for (partial) task automation. In this work, we propose a strategy to automate the motion of the light guide with respect to the surgical instrument. This automation keeps the instrument's shadow inside the microscopic view at all times, which is an important cue for the accurate positioning of the instrument in the retina. We show simulations and experiments demonstrating that the proposed strategy is effective over a 700-point grid in the retina of a surgical phantom.
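One simple way to realize such light-guide automation is visual servoing: detect the instrument shadow in the microscope image and command small light-guide motions that drive the shadow toward a target region of the view. The sketch below is only an assumed proportional control step for illustration, not the proposed strategy's actual control law.

import numpy as np

def light_guide_command(shadow_px, target_px, gain=0.002, max_step_mm=0.5):
    """Proportional visual-servoing step for the light guide.

    shadow_px : (2,) detected shadow centroid in image coordinates (pixels)
    target_px : (2,) desired shadow location inside the microscopic view
    Returns a small lateral light-guide displacement (mm), clipped for safety.
    """
    error_px = target_px - shadow_px
    step_mm = gain * error_px                 # pixels -> mm via an assumed gain
    return np.clip(step_mm, -max_step_mm, max_step_mm)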