This paper presents a passive control method for multiple degrees of freedom in a soft pneumatic robot, obtained by combining flow resistor tubes with inflatable actuators connected in series. We designed and 3D printed these resistors based on the pressure drop across multiple capillary orifices, which enables passive control of the actuators' sequential activation from a single pressure source. The design fits standard tube connectors, making it easy to adopt on any other type of actuator with a pneumatic inlet. We characterize the pressure drop of the resistors and evaluate the activation sequence for series and parallel circuits of actuators. We also present an application to assisting the postural transition from lying to sitting: the system is embedded in a wearable garment robot-suit designed for infants with cerebral palsy and tested with a dummy baby to emulate upper-body motion control. The results show sequential motion control of the sitting and lying transitions, validating the proposed flow control system and its application in the robot-suit.
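As a rough illustration of the pressure-drop principle behind the sequential activation described above, the sketch below estimates the laminar (Hagen-Poiseuille) drop across hypothetical resistors built from parallel capillary orifices. The geometry, flow rate, and the assumption of fully developed laminar flow are ours for illustration, not values from the paper.

```python
import math

def capillary_dp(q_m3s, radius_m, length_m, mu=1.8e-5, n_parallel=1):
    """Hagen-Poiseuille pressure drop across n identical parallel capillaries.

    q_m3s      -- total volumetric flow through the resistor [m^3/s]
    radius_m   -- capillary radius [m]
    length_m   -- capillary length [m]
    mu         -- dynamic viscosity of air [Pa*s], ~1.8e-5 at room temperature
    n_parallel -- number of capillary orifices sharing the flow
    """
    q_per_capillary = q_m3s / n_parallel
    return 8.0 * mu * length_m * q_per_capillary / (math.pi * radius_m**4)

# Hypothetical resistors feeding actuators from one pressure source:
# larger or more numerous orifices give a smaller drop, so that actuator
# reaches its activation pressure first, producing the sequence.
flow = 2.0e-5  # ~1.2 L/min, illustrative
resistors = {
    "actuator_1": dict(radius_m=0.4e-3, length_m=10e-3, n_parallel=4),
    "actuator_2": dict(radius_m=0.3e-3, length_m=10e-3, n_parallel=2),
    "actuator_3": dict(radius_m=0.2e-3, length_m=10e-3, n_parallel=1),
}
for name, geometry in resistors.items():
    print(f"{name}: ~{capillary_dp(flow, **geometry):.0f} Pa drop")
```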
This paper presents a vision-based sensing approach for a soft linear actuator equipped with an integrated camera. The proposed sensing pipeline predicts the three-dimensional position of a point of interest on the actuator. To train and evaluate the algorithm, predictions are compared to ground-truth data from an external motion capture system. An off-the-shelf distance sensor integrated into a similar actuator serves as a baseline for comparison. The resulting pipeline runs in real time at 40 Hz on a standard laptop and is additionally used for closed-loop elongation control of the actuator. The approach is shown to achieve accuracy comparable to that of the distance sensor.
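A minimal sketch of the kind of real-time loop described above, assuming an OpenCV camera capture, a placeholder in place of the learned vision model, and a simple proportional law for the closed-loop elongation control; none of these implementation details are taken from the paper.

```python
import time
import cv2
import numpy as np

TARGET_ELONGATION = 0.05   # desired elongation of the actuator [m] (assumed)
LOOP_HZ = 40               # the pipeline in the paper runs at 40 Hz

def predict_point_3d(frame):
    """Placeholder for the learned vision model that maps a frame from the
    integrated camera to the 3D position [m] of the point of interest.
    A real implementation would run the trained regressor here."""
    return np.array([0.0, 0.0, 0.04])

cap = cv2.VideoCapture(0)   # integrated camera (device index assumed)
kp = 5.0                    # proportional gain on the elongation error (assumed)
try:
    while True:
        t_start = time.time()
        ok, frame = cap.read()
        if not ok:
            break
        point = predict_point_3d(frame)
        elongation = point[2]                    # assume z is the actuator axis
        command = kp * (TARGET_ELONGATION - elongation)
        # here `command` would be sent to the pressure/flow regulator
        time.sleep(max(0.0, 1.0 / LOOP_HZ - (time.time() - t_start)))
finally:
    cap.release()
```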
We present a high-bandwidth, lightweight, nonlinear output tracking technique for soft actuators that combines parsimonious recursive layers for forward output prediction with online Newton-Raphson optimization. This technique allows for reduced model sizes and increased control loop frequencies compared with conventional RNN models. Experimental results with this controller prototype on a single soft actuator with soft positional sensors indicate effective tracking of referenced spatial trajectories and rejection of mechanical and electromagnetic disturbances. These are evidenced by root-mean-square path tracking errors (RMSE) of 1.8 mm using a fully connected (FC) substructure, 1.62 mm using a gated recurrent unit (GRU), and 2.11 mm using a long short-term memory (LSTM) unit, all averaged over three tasks. Among these models, the highest flash memory requirement is 2.22 kB, enabling co-location of the controller and actuator.
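The control idea in the preceding abstract, a learned forward model queried by online Newton-Raphson optimization at each step, can be sketched as follows. The toy dynamics standing in for the recurrent forward model and the finite-difference Jacobian are our assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def forward_model(u, h):
    """Stand-in for the learned recurrent forward model: given the actuator
    command u and hidden state h, return the predicted output y and the next
    hidden state. A real implementation would evaluate the small FC/GRU/LSTM
    network; this toy first-order dynamics is assumed for illustration."""
    h_next = 0.9 * h + 0.1 * u
    return np.tanh(h_next), h_next

def newton_raphson_command(u, h, y_ref, eps=1e-4, iters=3):
    """Online optimization step: drive the predicted output toward the
    reference by Newton-Raphson on r(u) = f(u, h) - y_ref, using a
    finite-difference estimate of dy/du."""
    for _ in range(iters):
        y, _ = forward_model(u, h)
        y_perturbed, _ = forward_model(u + eps, h)
        jacobian = (y_perturbed - y) / eps
        if abs(jacobian) < 1e-9:
            break
        u -= (y - y_ref) / jacobian
    return u

# Track a slow sinusoidal reference with the optimized command sequence.
h, u = 0.0, 0.0
for k in range(200):
    y_ref = 0.5 * np.sin(0.05 * k)
    u = newton_raphson_command(u, h, y_ref)
    y, h = forward_model(u, h)   # "apply" the command and roll the state forward
```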
Modular soft robots combine the strengths of two traditionally separate areas of robotics. As modular robots, they can show robustness to individual failure and reconfigurability; as soft robots, they can deform and undergo large shape changes to adapt to their environment, and they are inherently safe around humans. However, for sensing and communication these robots also combine the challenges of both: they require solutions that are scalable (low cost and complexity) and efficient (low power) to enable collectives of large numbers of robots, and these solutions must also interface with the high-extension-ratio elastic bodies of soft robots. In this work, we seek to address these challenges using acoustic signals produced by piezoelectric surface transducers that are cheap, simple, and low power, and that not only integrate with but also leverage the elastic robot skins for signal transmission. Importantly, to further increase scalability, the transducers are multi-functional, enabled by a relatively flat frequency response across the audible and ultrasonic ranges. With minimal hardware, they enable directional contact-based communication, audible-range communication at a distance, and exteroceptive sensing. We demonstrate a subset of the decentralized collective behaviors these functions make possible with multi-robot hardware implementations. The use of acoustic waves in this domain is shown to provide distinct advantages over existing solutions.
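One way to picture tone-based communication through such transducers is a simple frequency-shift-keying scheme, sketched below. The specific frequencies, symbol length, and FSK encoding are illustrative assumptions, not the signaling scheme used in the paper.

```python
import numpy as np

FS = 44_100             # sample rate [Hz]
F0, F1 = 8_000, 12_000  # tone frequencies for bits 0 and 1 (illustrative)
SYMBOL_S = 0.05         # symbol duration [s]

def encode(bits):
    """Binary FSK: one tone per bit, to be played through the piezoelectric
    surface transducer (returned here simply as samples)."""
    t = np.arange(int(FS * SYMBOL_S)) / FS
    return np.concatenate([np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits])

def decode(signal):
    """Recover bits by comparing spectral magnitude near the two tones."""
    n = int(FS * SYMBOL_S)
    freqs = np.fft.rfftfreq(n, 1.0 / FS)
    i0, i1 = np.argmin(np.abs(freqs - F0)), np.argmin(np.abs(freqs - F1))
    bits = []
    for start in range(0, len(signal) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(signal[start:start + n]))
        bits.append(int(spectrum[i1] > spectrum[i0]))
    return bits

message = [1, 0, 1, 1, 0]
tx = encode(message)
rx = tx + 0.3 * np.random.randn(tx.size)   # crude channel noise
print(decode(rx) == message)
```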
We present a user-friendly interface for teleoperating a soft robot manipulator in a complex environment. Key components of the system include a manipulator with a grasping end-effector that grows via tip eversion, gesture-based control, and a haptic display that provides feedback and guidance to the operator. In this initial work, the operator uses the soft robot to build a tower of blocks; future work will extend this to shared-autonomy scenarios in which both the human operator and robot intelligence are necessary for task completion.
Semi-autonomous telerobotic systems allow both humans and robots to exploit their strengths, while enabling personalized execution of a task. However, for new soft robots with degrees of freedom dissimilar to those of human operators, it is unknown how control of a task should be divided between the human and the robot. This work presents a set of interaction paradigms between a human and a soft growing robot manipulator, and demonstrates them in both real and simulated scenarios. The robot can grow and retract by eversion and inversion of its tubular body, a property we exploit to implement the interaction paradigms. We implemented and tested six paradigms of human-robot interaction, beginning with full teleoperation and gradually adding automation to various aspects of the task execution. All paradigms were demonstrated by two expert and two naive operators. Results show that humans and the soft robot manipulator can split control along degrees of freedom while acting simultaneously. In the simple pick-and-place task studied in this work, performance improves as control is gradually handed to the robot, because the robot can correct certain human errors. However, human engagement and enjoyment may be maximized when the task is at least partially shared. Finally, when the human operator is assisted by haptic feedback based on soft robot position errors, the improvement in performance depends strongly on the operator's expertise.
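A minimal sketch of splitting control along degrees of freedom while both agents act simultaneously, with a haptic guidance force proportional to the robot's position error. The DOF assignment, gains, and force clipping are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Assumed split of the task's degrees of freedom: the human steers the tip,
# the robot handles growth/retraction of the everting body.
HUMAN_DOFS = np.array([1.0, 1.0, 0.0])   # [bend_x, bend_y, growth]
ROBOT_DOFS = 1.0 - HUMAN_DOFS

def blend_commands(human_cmd, robot_cmd):
    """Simultaneous shared control: each agent's command acts only on the
    degrees of freedom assigned to it."""
    return HUMAN_DOFS * human_cmd + ROBOT_DOFS * robot_cmd

def haptic_guidance(tip_pos, target_pos, gain=20.0, max_force=3.0):
    """Guidance force rendered to the operator, proportional to the soft
    robot's position error and clipped to a safe magnitude (values assumed)."""
    force = gain * (np.asarray(target_pos) - np.asarray(tip_pos))
    magnitude = np.linalg.norm(force)
    return force if magnitude <= max_force else force * (max_force / magnitude)

command = blend_commands(np.array([0.2, -0.1, 0.0]), np.array([0.0, 0.0, 0.5]))
force = haptic_guidance([0.10, 0.05, 0.30], [0.12, 0.05, 0.35])
```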