
Vision-Based Proprioceptive Sensing for Soft Inflatable Actuators

Added by Peter Werner
Publication date: 2019
Language: English





This paper presents a vision-based sensing approach for a soft linear actuator equipped with an integrated camera. The proposed vision-based sensing pipeline predicts the three-dimensional position of a point of interest on the actuator. To train and evaluate the algorithm, predictions are compared to ground-truth data from an external motion capture system. An off-the-shelf distance sensor is integrated into a similar actuator, and its performance serves as a baseline for comparison. The resulting sensing pipeline runs in real time at 40 Hz on a standard laptop and is additionally used for closed-loop elongation control of the actuator. It is shown that the approach achieves accuracy comparable to the distance sensor.
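The closed-loop elongation control described above can be sketched as a simple feedback loop around the camera-based position estimate. The names below (`predict_tip_position`, `ElongationController`, the gain value) are illustrative assumptions, not the paper's actual implementation:

```python
def predict_tip_position(frame):
    # Placeholder for the learned vision pipeline: maps a camera frame
    # to the 3-D position (x, y, z) of the point of interest (hypothetical).
    ...

class ElongationController:
    """Minimal proportional controller on the actuator's elongation axis
    (a sketch; the paper does not specify the control law)."""

    def __init__(self, kp=2.0):
        self.kp = kp  # proportional gain (assumed value)

    def step(self, target_z, measured_z):
        # Error along the elongation axis, from the vision-based estimate
        error = target_z - measured_z
        # Actuation command, e.g. a pressure adjustment sent to the pump
        return self.kp * error
```

At 40 Hz, `step` would be called once per camera frame with the z-coordinate returned by the vision pipeline.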



rate research

Read More

D. S. Drew, M. Devlin, E. Hawkes (2021)
Modular soft robots combine the strengths of two traditionally separate areas of robotics. As modular robots, they can show robustness to individual failure and reconfigurability; as soft robots, they can deform and undergo large shape changes in order to adapt to their environment, and have inherent human safety. However, for sensing and communication these robots also combine the challenges of both: they require solutions that are scalable (low cost and complexity) and efficient (low power) to enable collectives of large numbers of robots, and these solutions must also be able to interface with the high extension ratio elastic bodies of soft robots. In this work, we seek to address these challenges using acoustic signals produced by piezoelectric surface transducers that are cheap, simple, and low power, and that not only integrate with but also leverage the elastic robot skins for signal transmission. Importantly, to further increase scalability, the transducers exhibit multi-functionality made possible by a relatively flat frequency response across the audible and ultrasonic ranges. With minimal hardware, they enable directional contact-based communication, audible-range communication at a distance, and exteroceptive sensing. We demonstrate a subset of the decentralized collective behaviors these functions make possible with multi-robot hardware implementations. The use of acoustic waves in this domain is shown to provide distinct advantages over existing solutions.
This paper presents a passive control method for multiple degrees of freedom in a soft pneumatic robot through the combination of flow resistor tubes with series inflatable actuators. We designed and developed these 3D-printed resistors based on the pressure-drop principle of multiple capillary orifices, which allows passive control of sequential activation from a single pressure source. Our design fits standard tube connectors, making it easy to adopt on any other type of actuator with pneumatic inlets. We present its pressure-drop characterization and an evaluation of the activation sequence for series and parallel circuits of actuators. Moreover, we present an application for assisting the postural transition from lying to sitting. We embedded the system in a wearable garment robot-suit designed for infants with cerebral palsy, then performed tests with a dummy baby to emulate upper-body motion control. The results show sequential motion control of the sitting and lying transitions, validating the proposed flow-control system and its application in the robot-suit.
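The sequential activation above follows from each resistor's flow resistance: a narrower capillary produces a larger pressure drop, so the actuator behind it inflates later. As a rough illustration only, assuming laminar flow so the Hagen-Poiseuille relation applies (the paper characterizes its orifices experimentally):

```python
import math

def capillary_resistance(mu, length, radius):
    # Hagen-Poiseuille flow resistance of one capillary:
    #   dP = R * Q, with R = 8 * mu * L / (pi * r^4)
    # mu: dynamic viscosity (Pa*s), length and radius in metres.
    return 8.0 * mu * length / (math.pi * radius ** 4)

# Two resistors fed from the same pressure source: the actuator behind
# the wider capillary sees less resistance and activates first.
r_wide = capillary_resistance(mu=1.8e-5, length=0.01, radius=0.5e-3)
r_narrow = capillary_resistance(mu=1.8e-5, length=0.01, radius=0.25e-3)
```

The r^4 dependence is why small radius differences are enough to stage the activation order.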
We present a high-bandwidth, lightweight, and nonlinear output tracking technique for soft actuators that combines parsimonious recursive layers for forward output prediction with online optimization using Newton-Raphson. This technique allows for reduced model sizes and increased control-loop frequencies compared with conventional RNN models. Experimental results of this controller prototype on a single soft actuator with soft positional sensors indicate effective tracking of referenced spatial trajectories and rejection of mechanical and electromagnetic disturbances. These are evidenced by root mean squared path tracking errors (RMSE) of 1.8 mm using a fully connected (FC) substructure, 1.62 mm using a gated recurrent unit (GRU), and 2.11 mm using a long short-term memory (LSTM) unit, all averaged over three tasks. Among these models, the highest flash memory requirement is 2.22 kB, enabling co-location of controller and actuator.
Decentralized deployment of drone swarms usually relies on inter-agent communication or visual markers that are mounted on the vehicles to simplify their mutual detection. This letter proposes a vision-based detection and tracking algorithm that enables groups of drones to navigate without communication or visual markers. We employ a convolutional neural network to detect and localize nearby agents onboard the quadcopters in real-time. Rather than manually labeling a dataset, we automatically annotate images to train the neural network using background subtraction by systematically flying a quadcopter in front of a static camera. We use a multi-agent state tracker to estimate the relative positions and velocities of nearby agents, which are subsequently fed to a flocking algorithm for high-level control. The drones are equipped with multiple cameras to provide omnidirectional visual inputs. The camera setup ensures the safety of the flock by avoiding blind spots regardless of the agent configuration. We evaluate the approach with a group of three real quadcopters that are controlled using the proposed vision-based flocking algorithm. The results show that the drones can safely navigate in an outdoor environment despite substantial background clutter and difficult lighting conditions. The source code, image dataset, and trained detection model are available at https://github.com/lis-epfl/vswarm.
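The automatic annotation step described above — flying a quadcopter in front of a static camera and labeling it by background subtraction — can be sketched with simple per-pixel differencing. This is a minimal grayscale illustration, not the authors' pipeline; thresholds and the image representation are assumptions:

```python
def auto_label(background, frame, thresh=30):
    # Label a frame by background subtraction (sketch): pixels that
    # differ from the static background beyond `thresh` are assumed to
    # belong to the quadcopter, and their bounding box becomes the
    # training annotation. Images are grayscale lists of pixel rows.
    ys, xs = [], []
    for y, (brow, frow) in enumerate(zip(background, frame)):
        for x, (b, f) in enumerate(zip(brow, frow)):
            if abs(f - b) > thresh:
                ys.append(y)
                xs.append(x)
    if not xs:
        return None  # no moving object detected in this frame
    return (min(xs), min(ys), max(xs), max(ys))  # (x0, y0, x1, y1)
```

A real pipeline would add morphological filtering to suppress noise before taking the bounding box.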
Purpose: Surgical simulations play an increasingly important role in surgeon education and in developing algorithms that enable robots to perform surgical subtasks. To model anatomy, Finite Element Method (FEM) simulations have been held as the gold standard for calculating accurate soft-tissue deformation. Unfortunately, their accuracy is highly dependent on the simulation parameters, which can be difficult to obtain.
Methods: In this work, we investigate how live data acquired during any robotic endoscopic surgical procedure may be used to correct inaccurate FEM simulation results. Since FEMs are calculated from initial parameters and cannot directly incorporate observations, we propose to add a correction factor that accounts for the discrepancy between simulation and observations. We train a network to predict this correction factor.
Results: To evaluate our method, we use an open-source da Vinci Surgical System to probe a soft-tissue phantom and replay the interaction in simulation. We train the network to correct for the difference between the predicted mesh position and the measured point cloud. This results in a 15-30% improvement in the mean distance, demonstrating the effectiveness of our approach across a large range of simulation parameters.
Conclusion: We show a first step towards a framework that synergistically combines the benefits of model-based simulation and real-time observations. It corrects discrepancies between simulation and the scene that result from inaccurate modeling parameters. This can provide a more accurate simulation environment for surgeons and better data with which to train algorithms.
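The correction-factor idea above amounts to learning a residual on top of the FEM prediction. A minimal sketch, assuming the network outputs one additive correction per mesh node (the interface and names are hypothetical):

```python
def corrected_mesh(fem_positions, correction_net):
    # Add the learned per-node correction to the FEM-predicted node
    # positions (sketch). `correction_net` stands in for the trained
    # network that predicts the residual between the simulated mesh
    # and the observed point cloud.
    corrections = correction_net(fem_positions)
    return [p + c for p, c in zip(fem_positions, corrections)]
```

The key design choice is that the FEM result is kept as the base prediction, so the network only has to learn the (smaller) discrepancy rather than the full deformation.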
