
Autonomous Robotic Screening of Tubular Structures based only on Real-Time Ultrasound Imaging Feedback

Posted by Zhongliang Jiang
Publication date: 2020
Research field: Informatics engineering
Paper language: English





Ultrasound (US) imaging is widely employed for diagnosis and staging of peripheral vascular diseases (PVD), mainly due to its high availability and the fact that it does not emit radiation. However, high inter-operator variability and a lack of repeatability of US image acquisition hinder the implementation of extensive screening programs. To address this challenge, we propose an end-to-end workflow for automatic robotic US screening of tubular structures using only real-time US imaging feedback. We first train a U-Net for real-time segmentation of the vascular structure from cross-sectional US images. Then, we represent the detected vascular structure as a 3D point cloud and use it to estimate the longitudinal axis of the target tubular structure and its mean radius by solving a constrained non-linear optimization problem. By iterating the previous processes, the US probe is automatically aligned normal to the target tubular tissue and adjusted online to center the tracked tissue based on the spatial calibration. The real-time segmentation result is evaluated both on a phantom and in-vivo on brachial arteries of volunteers. In addition, the whole process is validated both in simulation and on physical phantoms. The mean absolute radius error and orientation error ($\pm$ SD) in simulation are $1.16\pm0.1~mm$ and $2.7\pm3.3^{\circ}$, respectively. On a gel phantom, these errors are $1.95\pm2.02~mm$ and $3.3\pm2.4^{\circ}$. This shows that the method is able to automatically screen tubular tissues with an optimal probe orientation (i.e., normal to the vessel) and, at the same time, to accurately estimate the mean radius, both in real-time.
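The axis-and-radius estimation is the numerical core of this pipeline. Below is a minimal sketch of that step, assuming a segmented vessel point cloud in millimetres: it fits a cylinder to the points with a least-squares solver, parametrizing the unit axis by spherical angles so the unit-norm constraint holds by construction. Function names and the use of SciPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def cylinder_residuals(params, points):
    # params = [px, py, pz, theta, phi, r]: a point on the axis, the axis
    # direction in spherical angles, and the mean radius.
    p0, theta, phi, r = params[:3], params[3], params[4], params[5]
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])             # unit axis direction
    v = points - p0
    ortho = v - np.outer(v @ d, d)            # components orthogonal to the axis
    return np.linalg.norm(ortho, axis=1) - r  # distance to the cylinder surface

def fit_vessel_cylinder(points):
    # Initialise the axis from the principal component of the cloud and the
    # radius from the mean orthogonal spread.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    d0 = vt[0]
    v0 = points - centroid
    r0 = np.linalg.norm(v0 - np.outer(v0 @ d0, d0), axis=1).mean()
    x0 = np.concatenate([centroid,
                         [np.arccos(np.clip(d0[2], -1.0, 1.0)),
                          np.arctan2(d0[1], d0[0]), r0]])
    sol = least_squares(cylinder_residuals, x0, args=(points,))
    p0, theta, phi, r = sol.x[:3], sol.x[3], sol.x[4], sol.x[5]
    axis = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi), np.cos(theta)])
    return p0, axis, r                        # axis point, direction, mean radius
```

Given the fitted axis, aligning the probe normal to the vessel reduces to rotating the tool frame so that the imaging plane is perpendicular to the returned direction.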




Read also

A key capability for autonomous underground mining vehicles is real-time accurate localisation. While significant progress has been made, currently deployed systems have several limitations, ranging from dependence on costly additional infrastructure to failure of both visual and range sensor-based techniques in highly aliased or visually challenging environments. In our previous work, we presented a lightweight coarse vision-based localisation system that could map and then localise to within a few metres in an underground mining environment. However, this level of precision is insufficient for providing a cheaper, more reliable vision-based automation alternative to current range sensor-based systems. Here we present a new precision localisation system dubbed LookUP, which learns a neural-network-based pixel sampling strategy for estimating homographies based on ceiling-facing cameras without requiring any manual labelling. This new system runs in real time on limited computational resources and is demonstrated on two different underground mine sites, achieving ~5 frames per second and a much improved average localisation error of ~1.2 metres.
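As a rough illustration of the homography step, the sketch below samples pixels in the current ceiling image (a hand-written stand-in for the learned sampling network), tracks them into a stored keyframe, and estimates a robust homography. All names are assumptions, and LookUP learns where to sample rather than relying on corner detection.

```python
import cv2
import numpy as np

def localise_against_keyframe(query_gray, keyframe_gray, n_samples=200):
    # Stand-in for the learned sampler: pick well-conditioned pixels in the
    # current ceiling view (both images assumed 8-bit grayscale).
    pts = cv2.goodFeaturesToTrack(query_gray, maxCorners=n_samples,
                                  qualityLevel=0.01, minDistance=8)
    if pts is None:
        return None
    # Track the sampled pixels into the keyframe (pyramidal Lucas-Kanade).
    tracked, status, _ = cv2.calcOpticalFlowPyrLK(query_gray, keyframe_gray,
                                                  pts, None)
    good = status.ravel() == 1
    if good.sum() < 4:                       # a homography needs 4+ points
        return None
    # Robust planar mapping between the two ceiling views; the vehicle pose
    # offset can then be decomposed from H.
    H, _ = cv2.findHomography(pts[good], tracked[good], cv2.RANSAC, 3.0)
    return H
```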
Force control is essential for medical robots when touching and contacting the patient's body. To increase the stability and efficiency of force control, an Adaption Module can be used to adjust the parameters for different contact situations. We propose an adaptive controller with an Adaption Module which can produce control parameters based on force feedback and real-time stiffness detection. We develop methods for learning the optimal policies by value iteration and use the data generated from those policies to train the Adaption Module. We test this controller on different zones of a person's arm. All the parameters used in practice are learned from data. The experiments show that the proposed adaptive controller can exert various target forces on different zones of the arm with fast convergence and good stability.
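A minimal sketch of this idea for a single contact axis: a PI force controller whose gains come from an adaption module driven by the stiffness estimated from recent force/indentation data. The gain schedule below is a hand-written placeholder, not the paper's learned mapping, which is trained from value-iteration policies.

```python
import numpy as np

def estimate_stiffness(forces, depths):
    # Least-squares slope of contact force vs. indentation depth (N/mm assumed).
    A = np.vstack([depths, np.ones_like(depths)]).T
    k, _ = np.linalg.lstsq(A, forces, rcond=None)[0]
    return max(k, 1e-3)

class AdaptiveForceController:
    def __init__(self):
        self.integral = 0.0

    def gains(self, stiffness):
        # Placeholder adaption module: soften gains on stiff tissue so the
        # same force error produces a smaller position correction.
        return 0.8 / stiffness, 0.2 / stiffness

    def step(self, target_force, measured_force, stiffness, dt):
        kp, ki = self.gains(stiffness)
        error = target_force - measured_force
        self.integral += error * dt
        # Admittance-style output: a position increment for the robot.
        return kp * error + ki * self.integral
```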
In keyhole interventions, surgeons rely on a colleague to act as a camera assistant when their hands are occupied with surgical instruments. This often leads to reduced image stability, increased task completion times and sometimes errors. Robotic endoscope holders (REHs), controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling increases the cognitive load of the surgeon, hindering their widespread clinical acceptance. We propose that REHs collaborate with the operating surgeon via semantically rich instructions that closely resemble those issued to a human camera assistant, such as "focus on my right-hand instrument". As a proof-of-concept, we present a novel system that paves the way towards a synergistic interaction between surgeons and REHs. The proposed platform allows the surgeon to perform a bi-manual coordination and navigation task, while a robotic arm autonomously performs various endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation, and a novel visual servoing approach that ensures smooth and correct motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. Through successful application in a medically proven bi-manual coordination and navigation task, the framework has been shown to be a promising starting point towards broader clinical adoption of REHs.
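For the camera-motion part, a classical image-based visual-servoing law on a single point feature (the segmented tooltip) gives the flavour of the approach. The interaction-matrix form is the textbook one; the depth value and gain below are assumptions, and the paper's own servoing law additionally shapes the motion for smoothness.

```python
import numpy as np

def endoscope_velocity(tooltip_px, image_size, Z=0.08, gain=0.5):
    # Normalised image coordinates of the tooltip, with the principal point
    # assumed at the image centre and unit focal length for illustration.
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    x = (tooltip_px[0] - cx) / cx
    y = (tooltip_px[1] - cy) / cy
    e = np.array([x, y])   # error: drive the feature to the centre (0, 0)
    # Interaction matrix of a point feature at depth Z, columns ordered as
    # (vx, vy, vz, wx, wy, wz) camera twist components.
    L = np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])
    # Exponential decay of the error: v = -gain * pinv(L) @ e.
    return -gain * np.linalg.pinv(L) @ e
```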
Robotic three-dimensional (3D) ultrasound (US) imaging has been employed to overcome the drawbacks of traditional US examinations, such as high inter-operator variability and lack of repeatability. However, object movement remains a challenge, as unexpected motion decreases the quality of the 3D compounding. Furthermore, conventional robotic US systems do not allow the object to be adjusted during scanning, e.g., repositioning a limb to display the entire limb artery tree. To address this challenge, we propose a vision-based robotic US system that can monitor the object's motion and automatically update the sweep trajectory to provide 3D compounded images of the target anatomy seamlessly. To achieve these functions, a depth camera is employed to extract the manually planned sweep trajectory, after which the normal direction of the object is estimated using the extracted 3D trajectory. Subsequently, to monitor the movement and compensate for this motion so as to accurately follow the trajectory, the position of firmly attached passive markers is tracked in real-time. Finally, step-wise compounding is performed. The experiments on a gel phantom demonstrate that the system can resume a sweep when the object is not stationary during scanning.
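The motion-compensation step can be sketched as follows, assuming at least three rigidly attached markers are tracked: estimate the rigid transform between the marker positions at planning time and now (a standard Kabsch/SVD fit), then map the remaining sweep waypoints through it. Names and the exact update rule are illustrative.

```python
import numpy as np

def rigid_transform(markers_ref, markers_now):
    # Kabsch: best-fit R, t such that markers_now ~ markers_ref @ R.T + t.
    c_ref, c_now = markers_ref.mean(axis=0), markers_now.mean(axis=0)
    H = (markers_ref - c_ref).T @ (markers_now - c_now)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_now - R @ c_ref
    return R, t

def update_sweep(waypoints, markers_ref, markers_now):
    # Re-express the remaining planned waypoints (N x 3) in the moved frame.
    R, t = rigid_transform(markers_ref, markers_now)
    return waypoints @ R.T + t
```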
Yangxin Xu, Keyu Li, Ziqi Zhao (2021)
Active wireless capsule endoscopy (WCE) based on simultaneous magnetic actuation and localization (SMAL) techniques holds great promise for improving diagnostic accuracy, reducing examination time and relieving operator burden. To date, rotating magnetic actuation methods have been constrained to the use of a continuously rotating permanent magnet. In this paper, we first propose the reciprocally rotating magnetic actuation (RRMA) approach for active WCE to enhance patient safety. We show how to generate a desired reciprocally rotating magnetic field for capsule actuation, and provide a theoretical analysis of the potential risk of causing volvulus due to the capsule motion. Then, an RRMA-based SMAL workflow is presented to automatically propel a capsule in an unknown tubular environment. We validate the effectiveness of our method in real-world experiments by automatically propelling a robotic capsule in an ex-vivo pig colon. The experimental results show that our approach can achieve efficient and robust propulsion of the capsule with an average moving speed of $2.48~mm/s$ in the pig colon, and demonstrate the potential of using RRMA to enhance patient safety, reduce the inspection time, and improve the clinical acceptance of this technology.
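To make "reciprocally rotating" concrete, here is a toy sketch of the commanded field: instead of a rotation angle that grows monotonically (continuous rotation), the angle oscillates, so the field sweeps back and forth in the plane normal to the desired heading. Amplitude, frequency and magnitude below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def rrma_field(t, heading, amplitude=np.pi, freq=1.0, B0=10e-3):
    # Rotation angle oscillates in [-amplitude, amplitude] rad instead of
    # increasing without bound as in continuous rotation.
    angle = amplitude * np.sin(2 * np.pi * freq * t)
    # Build an orthonormal pair (u, w) spanning the plane normal to `heading`.
    heading = heading / np.linalg.norm(heading)
    u = np.cross(heading, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:     # heading parallel to z: pick another axis
        u = np.cross(heading, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    w = np.cross(heading, u)
    # Field vector of magnitude B0 (T) sweeping reciprocally in that plane.
    return B0 * (np.cos(angle) * u + np.sin(angle) * w)
```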