
A Kinematic Bottleneck Approach For Pose Regression of Flexible Surgical Instruments directly from Images

Published by: Luca Sestini
Publication date: 2021
Research field: Informatics engineering
Paper language: English


3-D pose estimation of instruments is a crucial step towards automatic scene understanding in robotic minimally invasive surgery. Although robotic systems can potentially directly provide joint values, this information is not commonly exploited inside the operating room, due to its possible unreliability, limited access and the time-consuming calibration required, especially for continuum robots. For this reason, standard approaches for 3-D pose estimation involve the use of external tracking systems. Recently, image-based methods have emerged as promising, non-invasive alternatives. While many image-based approaches in the literature have shown accurate results, they generally require either a complex iterative optimization for each processed image, making them unsuitable for real-time applications, or a large number of manually-annotated images for efficient learning. In this paper we propose a self-supervised image-based method, exploiting, at training time only, the imprecise kinematic information provided by the robot. In order to avoid introducing time-consuming manual annotations, the problem is formulated as an auto-encoder, smartly bottlenecked by the presence of a physical model of the robotic instruments and surgical camera, forcing a separation between image background and kinematic content. Validation of the method was performed on semi-synthetic, phantom and in-vivo datasets, obtained using a flexible robotized endoscope, showing promising results for real-time image-based 3-D pose estimation of surgical instruments.
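The abstract's core idea — an auto-encoder whose bottleneck is not a learned latent vector but the physical model of the instrument and camera — can be illustrated with a minimal sketch. Everything below is hypothetical and greatly simplified (a linear encoder, a 2-link planar instrument, crude line rasterisation); it only shows the structure: the encoder must compress the image into joint values, because that is the only channel through which the instrument's appearance can be reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, K = 32, 32, 2          # image size and number of joint values

def encoder(image, weights):
    """Toy linear encoder: image -> K predicted joint angles in (-pi, pi)."""
    return np.tanh(image.ravel() @ weights) * np.pi

def render_instrument(joints, link_lengths=(10.0, 8.0)):
    """The 'physical model' half of the bottleneck: forward kinematics of a
    2-link planar tool entering from the bottom edge, rasterised to a mask."""
    base = np.array([W / 2, H - 1.0])                # tool entry point (x, y)
    t1, t2 = joints
    p1 = base + link_lengths[0] * np.array([np.sin(t1), -np.cos(t1)])
    p2 = p1 + link_lengths[1] * np.array([np.sin(t1 + t2), -np.cos(t1 + t2)])
    mask = np.zeros((H, W))
    for a, b in [(base, p1), (p1, p2)]:
        for s in np.linspace(0.0, 1.0, 50):          # crude line rasterisation
            x, y = np.clip(a + s * (b - a), 0, [W - 1, H - 1]).astype(int)
            mask[y, x] = 1.0
    return mask

def reconstruct(image, weights, background):
    """Auto-encoder forward pass: the rendered mask composites the instrument
    over the background, so joints are forced to carry the kinematic content."""
    joints = encoder(image, weights)                 # the kinematic bottleneck
    mask = render_instrument(joints)
    return mask * 1.0 + (1.0 - mask) * background, joints

image = rng.random((H, W))
weights = rng.standard_normal((H * W, K)) * 0.01
background = rng.random((H, W))
recon, joints = reconstruct(image, weights, background)
loss = np.mean((recon - image) ** 2)   # self-supervised reconstruction loss
```

In the paper's setting the encoder is a neural network, the renderer is a differentiable model of the continuum instrument and endoscopic camera, and the imprecise robot kinematics supervise training without manual annotation; here those pieces are stubbed out to expose only the bottleneck structure.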




Read also

Voxel-based structures provide a modular, mechanically flexible periodic lattice which can be used as a soft robot through internal deformations. To engage these structures for robotic tasks, we use a finite element method to characterize the motion caused by deforming single degrees of freedom and develop a reduced kinematic model. We find that node translations propagate periodically along geometric planes within the lattice, and briefly show that translational modes dominate the energy usage of the actuators. The resulting kinematic model frames the structural deformations in terms of user-defined control and end effector nodes, which further reduces the model size. The derived Planes of Motion (POM) model can be equivalently used for forward and inverse kinematics, as demonstrated by the design of a tripod stable gait for a locomotive voxel robot and validation of the quasi-static model through physical experiments.
Deep learning-based object pose estimators are often unreliable and overconfident especially when the input image is outside the training domain, for instance, with sim2real transfer. Efficient and robust uncertainty quantification (UQ) in pose estimators is critically needed in many robotic tasks. In this work, we propose a simple, efficient, and plug-and-play UQ method for 6-DoF object pose estimation. We ensemble 2-3 pre-trained models with different neural network architectures and/or training data sources, and compute their average pairwise disagreement against one another to obtain the uncertainty quantification. We propose four disagreement metrics, including a learned metric, and show that the average distance (ADD) is the best learning-free metric and it is only slightly worse than the learned metric, which requires labeled target data. Our method has several advantages compared to the prior art: 1) our method does not require any modification of the training process or the model inputs; and 2) it needs only one forward pass for each model. We evaluate the proposed UQ method on three tasks where our uncertainty quantification yields much stronger correlations with pose estimation errors than the baselines. Moreover, in a real robot grasping task, our method increases the grasping success rate from 35% to 90%.
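The average-pairwise-disagreement idea with the ADD metric is easy to sketch. The snippet below is a hypothetical illustration, not the paper's implementation: poses are stand-ins generated with controllable noise rather than outputs of real estimators, and the object model is a random point cloud. It shows how ADD between two poses is computed and how averaging it over ensemble pairs yields an uncertainty score that grows with disagreement.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_pose(noise):
    """Stand-in for a pose estimator's output: a rotation about z plus a
    translation, both perturbed by the given noise scale."""
    angle = noise * rng.standard_normal()
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    t = noise * rng.standard_normal(3)
    return R, t

def add_distance(pose_a, pose_b, model_points):
    """Average Distance (ADD): mean L2 distance between the object's model
    points transformed by two candidate poses."""
    (Ra, ta), (Rb, tb) = pose_a, pose_b
    pa = model_points @ Ra.T + ta
    pb = model_points @ Rb.T + tb
    return np.linalg.norm(pa - pb, axis=1).mean()

def ensemble_uncertainty(poses, model_points):
    """Average pairwise ADD disagreement across ensemble members."""
    n = len(poses)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return np.mean([add_distance(poses[i], poses[j], model_points)
                    for i, j in pairs])

points = rng.random((100, 3))                       # sampled object model points
agreeing = [random_pose(noise=0.01) for _ in range(3)]
disagreeing = [random_pose(noise=0.5) for _ in range(3)]
uq_low = ensemble_uncertainty(agreeing, points)
uq_high = ensemble_uncertainty(disagreeing, points)
```

A tightly agreeing ensemble yields a small score and a disagreeing one a large score, which is exactly the correlation with pose error that the abstract reports.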
Xin Zhi, Weibang Bai, 2021
Miniaturized instruments are highly needed for robot assisted medical healthcare and treatment, especially for less invasive surgery as it empowers more flexible access to restricted anatomic intervention. But the robotic design is more challenging due to the contradictory needs of miniaturization and the capability of manipulating with a large dexterous workspace. Thus, kinematic parameter optimization is of great significance in this case. To this end, this paper proposes an approach based on dexterous workspace determination for designing a miniaturized tendon-driven surgical instrument under necessary restraints. The workspace determination is achieved by boundary determination and volume estimation with partition and least-squares polynomial fitting methods. The final robotic configuration with optimized kinematic parameters is proved to be eligible with a large enough dexterous workspace and targeted miniature size.
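The workspace-determination step this abstract relies on can be approximated with a simple Monte-Carlo sketch. This is not the paper's partition/least-squares-fitting method; it is a hypothetical toy with a 2-link planar instrument, showing only the underlying idea: sample the joint space under candidate limits, map samples through forward kinematics, and estimate the reachable area so that candidate kinematic parameters can be compared.

```python
import numpy as np

rng = np.random.default_rng(2)
LINKS = (0.02, 0.015)                  # hypothetical segment lengths in metres
REACH = sum(LINKS)                     # maximum possible reach from the base

def tip_position(q):
    """Tip of a toy 2-segment planar instrument for joint angles q."""
    t1, t2 = q
    l1, l2 = LINKS
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def workspace_area(joint_limits, n_samples=20000, grid=100):
    """Monte-Carlo workspace estimate: sample joint space uniformly,
    rasterise tip positions onto a grid, and count occupied cells."""
    lo, hi = joint_limits
    samples = rng.uniform(lo, hi, size=(n_samples, 2))
    tips = np.array([tip_position(q) for q in samples])
    # Map tip coordinates from [-REACH, REACH]^2 onto grid indices.
    cells = ((tips + REACH) / (2 * REACH) * (grid - 1)).astype(int)
    occupied = np.zeros((grid, grid), dtype=bool)
    occupied[cells[:, 1], cells[:, 0]] = True
    cell_area = (2 * REACH / grid) ** 2
    return occupied.sum() * cell_area

narrow = workspace_area(joint_limits=(-0.5, 0.5))   # restricted joint range
wide = workspace_area(joint_limits=(-1.5, 1.5))     # larger joint range
```

In an optimization loop, such an area (or volume, in 3-D) estimate would be the objective traded off against the instrument's size constraints; the paper instead characterizes the workspace boundary analytically with partition and polynomial-fitting methods.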
We propose a method for object-aware 3D egocentric pose estimation that tightly integrates kinematics modeling, dynamics modeling, and scene object information. Unlike prior kinematics or dynamics-based approaches where the two components are used disjointly, we synergize the two approaches via dynamics-regulated training. At each timestep, a kinematic model is used to provide a target pose using video evidence and simulation state. Then, a prelearned dynamics model attempts to mimic the kinematic pose in a physics simulator. By comparing the pose instructed by the kinematic model against the pose generated by the dynamics model, we can use their misalignment to further improve the kinematic model. By factoring in the 6DoF pose of objects (e.g., chairs, boxes) in the scene, we demonstrate for the first time, the ability to estimate physically-plausible 3D human-object interactions using a single wearable camera. We evaluate our egocentric pose estimation method in both controlled laboratory settings and real-world scenarios.
Xin Ye, Yezhou Yang, 2020
Visual Indoor Navigation (VIN) task has drawn increasing attention from the data-driven machine learning communities especially with the recently reported success from learning-based methods. Due to the innate complexity of this task, researchers have tried approaching the problem from a variety of different angles, the full scope of which has not yet been captured within an overarching report. This survey first summarizes the representative work of learning-based approaches for the VIN task and then identifies and discusses lingering issues impeding the VIN performance, as well as motivates future research in these key areas worth exploring for the community.
