Voxel-based structures provide a modular, mechanically flexible periodic lattice that can act as a soft robot through internal deformations. To engage these structures for robotic tasks, we use a finite element method to characterize the motion caused by deforming individual degrees of freedom and develop a reduced kinematic model. We find that node translations propagate periodically along geometric planes within the lattice, and briefly show that translational modes dominate the energy usage of the actuators. The resulting kinematic model frames the structural deformations in terms of user-defined control and end-effector nodes, which further reduces the model size. The derived Planes of Motion (POM) model can be used equivalently for forward and inverse kinematics, as demonstrated by the design of a stable tripod gait for a locomotive voxel robot and by validation of the quasi-static model through physical experiments.
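To make the forward and inverse use of such a reduced model concrete, here is a minimal Python sketch, assuming the FEM-derived relationship between control-node and end-effector-node displacements can be collapsed into a single linear map J (a random placeholder below); it is an illustration only, not the authors' POM implementation.

import numpy as np

# Hypothetical reduced kinematic map: n control-node displacement components in,
# m end-effector-node displacement components out. In practice J would be
# assembled from FEM responses to unit actuations, not random numbers.
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 9))

def forward(u_control):
    # Forward kinematics: end-effector node motion for given control-node motion.
    return J @ u_control

def inverse(x_desired):
    # Inverse kinematics: least-squares control-node motion reaching x_desired.
    return np.linalg.pinv(J) @ x_desired

u = np.zeros(9)
u[0] = 0.01            # actuate a single degree of freedom by 10 mm
print(forward(u))      # resulting end-effector node displacements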
Soft modular robots enable more flexibility and safer interaction with a changing environment than traditional robots. However, it has remained challenging to create deformable connectors that can be integrated into soft machines. In this work, we propose a flexible connector for soft modular robots based on micropatterned intersurface jamming. The connector is composed of micropatterned dry adhesives made of silicone rubber and a flexible main body with inflatable chambers for active engagement and disengagement. Through connection force tests, we evaluate the characteristics of the connector both in the linear direction and under rotational disruptions. The connector can stably support an average maximum load of 22 N (83 times the connector's body weight) linearly and 10.86 N under planar rotation. The proposed connector demonstrates the potential to create a robust connection between soft modular robots without raising the system's overall stiffness, thus guaranteeing high flexibility of the robotic system.
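As a rough consistency check of the reported load ratio (assuming the 83x figure compares the 22 N load to the connector's weight in newtons), the implied connector mass is about 27 g:

# Hypothetical back-of-the-envelope check of the reported 83x load-to-weight ratio.
max_load_N = 22.0
weight_N = max_load_N / 83.0        # ~0.27 N connector weight
mass_g = weight_N / 9.81 * 1000.0   # ~27 g connector mass
print(round(mass_g, 1))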
This paper tackles a friction compensation problem without using a friction model. The unique feature of the proposed friction observer is that the nominal motor-side signal, rather than the measured signal, is fed back into the controller. By doing so, asymptotic stability and passivity of the controller are maintained. Another advantage of the proposed observer is that it provides a clear understanding of stiction compensation, which is hard to capture in model-free approaches. This allows the design of observers that do not overcompensate for stiction. The proposed scheme is validated through simulations and experiments.
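The following discrete-time sketch illustrates the core idea of feeding the nominal (friction-free) motor state back to the controller while an observer attributes the velocity mismatch between the nominal and measured motor to friction; the PI observer structure, gains, and friction parameters below are assumptions for illustration, not the paper's exact scheme.

import numpy as np

dt, J = 1e-3, 0.01                  # sample time [s], motor inertia [kg m^2]
kp, kd = 25.0, 1.0                  # PD gains acting on the *nominal* motor state
kP_obs, kI_obs = 1.0, 50.0          # assumed PI observer gains

q = dq = 0.0                        # real motor state (subject to friction)
q_n = dq_n = 0.0                    # nominal, friction-free motor model
z = 0.0                             # integral part of the friction estimate

for k in range(4000):               # 4 s of ramp tracking
    t = k * dt
    q_des, dq_des = 0.5 * t, 0.5
    # Key point of the abstract: the controller uses the nominal signal, not the measurement.
    tau_ctrl = kp * (q_des - q_n) + kd * (dq_des - dq_n)
    tau_f_hat = kP_obs * (dq_n - dq) + z
    tau_cmd = tau_ctrl + tau_f_hat  # friction compensation added to the command
    # Real motor with (unknown) Coulomb + viscous friction.
    tau_fric = 0.2 * np.sign(dq) + 0.05 * dq
    dq += (tau_cmd - tau_fric) / J * dt
    q += dq * dt
    # Nominal motor sees the controller torque only, with no friction.
    dq_n += tau_ctrl / J * dt
    q_n += dq_n * dt
    z += kI_obs * (dq_n - dq) * dt

print(f"friction estimate {tau_f_hat:.3f} N*m vs true {0.2 + 0.05 * dq:.3f} N*m")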
3-D pose estimation of instruments is a crucial step towards automatic scene understanding in robotic minimally invasive surgery. Although robotic systems can potentially directly provide joint values, this information is not commonly exploited inside the operating room, due to its possible unreliability, limited access, and the time-consuming calibration required, especially for continuum robots. For this reason, standard approaches for 3-D pose estimation involve the use of external tracking systems. Recently, image-based methods have emerged as promising, non-invasive alternatives. While many image-based approaches in the literature have shown accurate results, they generally require either a complex iterative optimization for each processed image, making them unsuitable for real-time applications, or a large number of manually annotated images for efficient learning. In this paper, we propose a self-supervised image-based method that exploits, at training time only, the imprecise kinematic information provided by the robot. To avoid introducing time-consuming manual annotations, the problem is formulated as an auto-encoder bottlenecked by a physical model of the robotic instruments and surgical camera, forcing a separation between image background and kinematic content. Validation of the method was performed on semi-synthetic, phantom, and in-vivo datasets obtained using a flexible robotized endoscope, showing promising results for real-time image-based 3-D pose estimation of surgical instruments.
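A hypothetical PyTorch-style sketch of such a physically bottlenecked auto-encoder is given below; project_instrument is a stand-in for the paper's differentiable camera and instrument model, and the encoder architecture, background handling, and loss terms are assumptions for illustration only.

import torch
import torch.nn as nn

class PoseBottleneckAE(nn.Module):
    def __init__(self, n_joints=4):
        super().__init__()
        # Encoder compresses the image to joint values only (the kinematic bottleneck).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_joints),
        )
        # Learned static background, kept separate from the kinematic content.
        self.background = nn.Parameter(torch.zeros(3, 64, 64))

    def forward(self, img):
        q = self.encoder(img)                      # predicted joint values
        instrument = project_instrument(q)         # physical model renders the tool
        recon = torch.where(instrument > 0, instrument, self.background)
        return recon, q

def project_instrument(q):
    # Placeholder for a differentiable physical model mapping joint values to an
    # instrument silhouette; the real model would use known tool and camera geometry.
    return torch.zeros(q.shape[0], 3, 64, 64)

model = PoseBottleneckAE()
recon, q_hat = model(torch.rand(2, 3, 64, 64))
# Training would combine an image reconstruction loss with a weak term penalizing
# the distance between q_hat and the imprecise robot kinematics (training time only).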
Continuum and soft robots can leverage complex actuator geometries to take on useful shapes while actuating only a few of their many degrees of freedom. Continuum robots that also grow increase the range of potential shapes that can be actuated and enable easier access to constrained environments. Existing models for describing the complex kinematics involved in general actuation of continuum robots rely on simulation or well-behaved stress-strain relationships, but the non-linear behavior of the thin-walled inflated beams used in growing robots makes these techniques difficult to apply. Here we derive kinematic models of single, generally routed tendon paths on a soft pneumatic backbone of inextensible but flexible material from geometric relationships alone. This allows forward modeling of the resulting shapes with knowledge of the system geometry only. We show that this model can accurately predict the shape of the whole robot body and how the model changes with actuation type. We also demonstrate the use of this kinematic model for inverse design, where actuator designs are found from desired final robot shapes. We deploy these designed actuators on soft pneumatic growing robots to show the benefits of simultaneous growth and shape change.
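For the simplest special case, a straight tendon routed parallel to an inextensible backbone at a fixed offset bends the backbone into a circular arc whose angle follows directly from geometry (bend angle = tendon contraction / offset). The sketch below implements only this textbook constant-curvature case, not the generally routed tendons treated in the paper.

import numpy as np

def constant_curvature_shape(L, d, dL, n=50):
    # Backbone points for an inextensible beam of length L whose tendon,
    # offset by d from the centerline, is shortened by dL.
    theta = dL / d                         # total bend angle from pure geometry
    kappa = theta / L                      # constant curvature along the backbone
    s = np.linspace(0.0, L, n)             # arc-length samples
    if np.isclose(kappa, 0.0):
        return np.column_stack([np.zeros(n), s])
    # Planar arc starting at the origin and initially pointing along +y.
    x = (1.0 - np.cos(kappa * s)) / kappa
    y = np.sin(kappa * s) / kappa
    return np.column_stack([x, y])

tip = constant_curvature_shape(L=0.5, d=0.02, dL=0.05)[-1]
print(tip)   # tip position for 5 cm of tendon contraction on a 50 cm backbone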
Highway driving invariably combines high speeds with the need to interact closely with other drivers. Prediction methods enable autonomous vehicles (AVs) to anticipate drivers' future trajectories and plan accordingly. Kinematic methods for prediction have traditionally ignored the presence of other drivers, or made predictions only for a limited set of scenarios. Data-driven approaches fill this gap by learning from large datasets to predict trajectories in general scenarios. While they achieve high accuracy, they also lose the interpretability and tools for model validation enjoyed by kinematic methods. This letter proposes a novel kinematic model to describe car-following and lane-change behavior, and extends it to predict trajectories in general scenarios. Experiments on highway datasets under varied sensing conditions demonstrate that the proposed method outperforms state-of-the-art methods.
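For reference, a standard interpretable car-following law such as the Intelligent Driver Model (IDM) illustrates the class of kinematic models being extended here; the sketch below is the generic IDM with textbook parameters and is not the model proposed in this letter.

import numpy as np

def idm_acceleration(v, dv, gap, v0=30.0, T=1.5, a_max=1.5, b=2.0, s0=2.0, delta=4):
    # Follower acceleration [m/s^2] given its speed v [m/s], approach rate
    # dv = v - v_lead [m/s], and bumper-to-bumper gap [m].
    s_star = s0 + max(0.0, v * T + v * dv / (2.0 * np.sqrt(a_max * b)))
    return a_max * (1.0 - (v / v0) ** delta - (s_star / gap) ** 2)

# One-second rollout at 10 Hz behind a lead car travelling at a constant 25 m/s.
v, gap, dt = 28.0, 40.0, 0.1
for _ in range(10):
    a = idm_acceleration(v, v - 25.0, gap)
    v = v + a * dt
    gap = gap - (v - 25.0) * dt
print(round(v, 2), round(gap, 2))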