
Reconstruction of Backbone Curves for Snake Robots

Posted by Tianyu Wang
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Snake robots composed of alternating single-axis pitch and yaw joints have many internal degrees of freedom, which make them capable of versatile three-dimensional locomotion. In the motion planning process, snake robot motions are often designed kinematically by a chronological sequence of continuous backbone curves that capture desired macroscopic shapes of the robot. However, as the geometric arrangement of single-axis rotary joints creates constraints on the rotations in the robot, it is challenging for the robot to reconstruct an arbitrary 3D curve. When the robot configuration does not accurately achieve the desired shapes defined by these backbone curves, the robot can have unexpected contacts with the environment, such that the robot does not achieve the desired motion. In this work, we propose a method for snake robots to reconstruct desired backbone curves by posing an optimization problem that exploits the robot's geometric structure. We verified that our method enables fast and accurate curve-configuration matching.
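The abstract describes the approach only at a high level, so the following is a minimal sketch of the general idea rather than the authors' formulation: joint angles of a chain of alternating single-axis pitch and yaw joints are optimized so that the chain's forward kinematics tracks sampled points of a desired backbone curve. The joint count, link length, and the example 3D target curve are illustrative assumptions, not values from the paper.

import numpy as np
from scipy.optimize import minimize

N_JOINTS = 16      # alternating pitch (about x) and yaw (about z) joints -- assumed count
LINK_LEN = 0.07    # assumed uniform link length in metres

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def forward_kinematics(angles):
    """Positions of the joint frames along the chain, starting at the origin."""
    R, p, pts = np.eye(3), np.zeros(3), [np.zeros(3)]
    for i, a in enumerate(angles):
        R = R @ (rot_x(a) if i % 2 == 0 else rot_z(a))   # pitch, yaw, pitch, ...
        p = p + R @ np.array([0.0, LINK_LEN, 0.0])       # advance along the local y axis
        pts.append(p.copy())
    return np.array(pts)

def target_curve(n):
    """Assumed example backbone: a gentle 3D curve sampled at n points."""
    s = np.linspace(0.0, 1.0, n)
    return np.column_stack([0.1 * np.sin(4 * s),
                            LINK_LEN * N_JOINTS * s,
                            0.1 * (1 - np.cos(4 * s))])

def shape_error(angles, curve_pts):
    """Sum of squared distances between chain points and curve samples."""
    return np.sum((forward_kinematics(angles) - curve_pts) ** 2)

curve = target_curve(N_JOINTS + 1)
res = minimize(shape_error, np.zeros(N_JOINTS), args=(curve,), method="L-BFGS-B")
print("residual shape error:", res.fun)

A generic quasi-Newton solver over the joint angles is the simplest way to express the curve-matching objective; the paper's optimization additionally exploits the robot's geometric structure, which this sketch does not reproduce.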




Read also

The selection of mobility modes for robot navigation involves various trade-offs. Snake robots are ideal for traversing constrained environments such as pipes and cluttered, rough terrain, whereas bipedal robots are more suited to structured environments such as stairs. Finally, quadruped robots are more stable than bipeds and can carry larger payloads than snakes and bipeds, but struggle to navigate soft soil, sand, ice, and constrained environments. A reconfigurable robot can achieve the best of all worlds. Unfortunately, state-of-the-art reconfigurable robots rely on the rearrangement of modules through complicated mechanisms to disassemble and reassemble at different places, increasing the size, weight, and power (SWaP) requirements. We propose Reconfigurable Quadrupedal-Bipedal Snake Robots (ReQuBiS), which can transform between mobility modes without rearranging modules, hence requiring just a single modification mechanism. Furthermore, our design allows the robot to split into two agents to perform tasks in parallel in biped and snake mobility. Experimental results demonstrate these mobility capabilities in snake, quadruped, and biped modes and the transitions between them.
Reliable and real-time 3D reconstruction and localization functionality is a crucial prerequisite for the navigation of actively controlled capsule endoscopic robots, an emerging, minimally invasive diagnostic and therapeutic technology for use in the gastrointestinal (GI) tract. In this study, we propose a fully dense, non-rigidly deformable, strictly real-time, intraoperative map fusion approach for actively controlled endoscopic capsule robot applications, which combines magnetic and vision-based localization with non-rigid-deformation-based frame-to-model map fusion. The performance of the proposed method is demonstrated using four different ex-vivo porcine stomach models. Across different trajectories of varying speed and complexity, and four different endoscopic cameras, the root mean square surface reconstruction errors range from 1.58 to 2.17 cm.
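For context on the quoted metric, the sketch below shows one common way to compute a root mean square surface reconstruction error between a reconstructed point cloud and a ground-truth model using nearest-neighbour distances. The point clouds are synthetic placeholders rather than the ex-vivo porcine stomach data, and this is not the authors' evaluation code.

import numpy as np
from scipy.spatial import cKDTree

def rms_surface_error(reconstructed_pts, ground_truth_pts):
    """RMS of nearest-neighbour distances, in the same units as the input points."""
    tree = cKDTree(ground_truth_pts)
    dists, _ = tree.query(reconstructed_pts)
    return np.sqrt(np.mean(dists ** 2))

rng = np.random.default_rng(0)
gt = rng.uniform(0, 10, size=(5000, 3))              # placeholder ground-truth surface (cm)
recon = gt[:2000] + rng.normal(0, 0.5, (2000, 3))    # placeholder noisy reconstruction
print(f"RMS surface error: {rms_surface_error(recon, gt):.2f} cm")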
The study of 3D hyperspectral image (HSI) reconstruction refers to the inverse process of snapshot compressive imaging, during which the optical system, e.g., the coded aperture snapshot spectral imaging (CASSI) system, captures the 3D spatial-spectral signal and encodes it into a 2D measurement. While numerous sophisticated neural networks have been elaborated for end-to-end reconstruction, trade-offs still need to be made among performance, efficiency (training and inference time), and feasibility (the ability to restore high-resolution HSI on limited GPU memory). This raises the challenge of designing a new baseline that jointly meets the above requirements. In this paper, we fill this gap by proposing a Spatial/Spectral Invariant Residual U-Net, namely SSI-ResU-Net. It differs from U-Net in three respects: 1) scale/spectral-invariant learning, 2) nested residual learning, and 3) computational efficiency. Benefiting from these three modules, the proposed SSI-ResU-Net outperforms the current state-of-the-art method TSA-Net by over 3 dB in PSNR and 0.036 in SSIM while using only 2.82% of the trainable parameters. Moreover, SSI-ResU-Net achieves competitive performance with an over 77.3% reduction in floating-point operations (FLOPs), which, for the first time, makes high-resolution HSI reconstruction feasible under practical application scenarios. Code and pre-trained models are made available at https://github.com/Jiamian-Wang/HSI_baseline.
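As background for the inverse problem such networks solve, here is a minimal sketch of a CASSI-style forward model: each spectral band of the cube is masked by a coded aperture, sheared by the disperser, and the bands are summed into one 2D snapshot. The shapes, the random binary mask, and the one-pixel-per-band shift are illustrative assumptions, not the setup of this paper or its repository.

import numpy as np

H, W, C = 64, 64, 28     # assumed spatial size and number of spectral bands
SHIFT_STEP = 1           # assumed dispersion: one pixel of shear per band

rng = np.random.default_rng(0)
cube = rng.random((H, W, C))                       # placeholder hyperspectral scene
mask = (rng.random((H, W)) > 0.5).astype(float)    # coded aperture pattern

def cassi_forward(x, mask, step=SHIFT_STEP):
    """Encode an (H, W, C) cube into a 2D snapshot of width W + step * (C - 1)."""
    h, w, c = x.shape
    y = np.zeros((h, w + step * (c - 1)))
    for band in range(c):
        coded = x[:, :, band] * mask                  # apply the coded aperture
        y[:, band * step: band * step + w] += coded   # shear and accumulate
    return y

measurement = cassi_forward(cube, mask)
print(measurement.shape)    # (64, 91): the single 2D measurement a network must invert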
Robotic materials are multi-robot systems formulated to leverage the low-order computation and actuation of the constituents to manipulate the high-order behavior of the entire material. We study the behaviors of ensembles composed of smart active particles, or smarticles. Smarticles are small, low-cost robots equipped with basic actuation and sensing abilities that are individually incapable of rotating or displacing. We demonstrate that a supersmarticle, composed of many smarticles constrained within a bounding membrane, can harness the internal collisions of the robotic material, among the constituents and with the membrane, to achieve diffusive locomotion. The emergent diffusion can be directed by modulating the robotic material properties in response to a light source, analogous to biological phototaxis. The light source introduces asymmetries within the robotic material, resulting in modified populations of interaction modes and dynamics, which ultimately result in biased locomotion of the supersmarticle. We present experimental methods and results for the robotic material, which moves with a directed displacement in response to a light source.
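As a toy illustration only, the sketch below shows how a small per-step asymmetry, standing in for the light-modulated interaction modes described above, turns otherwise diffusive motion into a net directed displacement. The bias magnitude and step statistics are assumptions, not measurements from the supersmarticle experiments.

import numpy as np

rng = np.random.default_rng(2)
STEPS, TRIALS = 2000, 200
BIAS = 0.05    # assumed small per-step drift toward the light source (+x)

steps = rng.normal(0.0, 1.0, size=(TRIALS, STEPS, 2))   # isotropic collision-driven steps
steps[:, :, 0] += BIAS                                   # light-induced asymmetry along +x
paths = np.cumsum(steps, axis=1)

final = paths[:, -1, :]
print("mean displacement (x, y):", final.mean(axis=0).round(1))   # clear drift along +x
print("spread (std dev)        :", final.std(axis=0).round(1))    # diffusive spread remains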
This work creates a model of the value of different external viewpoints of a robot performing tasks. The current state of the practice is to use a teleoperated assistant robot to provide a view of a task being performed by a primary robot; however, the choice of viewpoints is ad hoc and does not always lead to improved performance. This research applies a psychomotor approach to develop a model of the relative quality of external viewpoints using Gibsonian affordances. In this approach, viewpoints for the affordances are rated based on the psychomotor behavior of human operators and clustered into manifolds of viewpoints with equivalent value. The value of 30 viewpoints is quantified in a study with 31 expert robot operators for 4 affordances (Reachability, Passability, Manipulability, and Traversability) using a computer-based simulator of two robots. Adjacent viewpoints with similar values are clustered into ranked manifolds using agglomerative hierarchical clustering. The results show the validity of the affordance-based approach by confirming that there are manifolds of statistically significantly different viewpoint values, that viewpoint values are statistically significantly dependent on the affordances, and that viewpoint values are independent of the robot. Furthermore, the best manifold for each affordance provides a statistically significant improvement, with a large Cohen's d effect size (1.1-2.3), in performance (improving time by 14%-59% and reducing errors by 87%-100%) and in performance variation over the worst manifold. This model will enable autonomous selection of the best possible viewpoint and path planning for the assistant robot.
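The following is a hedged sketch of the kind of analysis the abstract names: agglomerative hierarchical clustering of per-viewpoint value scores, followed by a Cohen's d comparison of the best and worst clusters. The 30 viewpoint scores are synthetic, and unlike the study this sketch clusters on value alone rather than on viewpoint adjacency.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Placeholder: mean task-performance score per viewpoint for one affordance.
viewpoint_scores = np.concatenate([rng.normal(0.9, 0.05, 10),
                                   rng.normal(0.6, 0.05, 10),
                                   rng.normal(0.3, 0.05, 10)])

# Agglomerative (Ward) clustering of the scores, cut into three "manifolds".
Z = linkage(viewpoint_scores.reshape(-1, 1), method="ward")
labels = fcluster(Z, t=3, criterion="maxclust")

def cohens_d(a, b):
    """Cohen's d with a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

means = {k: viewpoint_scores[labels == k].mean() for k in np.unique(labels)}
best, worst = max(means, key=means.get), min(means, key=means.get)
d = cohens_d(viewpoint_scores[labels == best], viewpoint_scores[labels == worst])
print(f"best vs worst manifold, Cohen's d = {d:.1f}")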