Controlling a soft robot is challenging, and reinforcement learning methods have been applied to the problem with promising results. However, due to their poor sample efficiency, reinforcement learning methods require large amounts of training data, which limits their application. In this paper, we propose a Q-learning controller for a physical soft robot, in which models pre-trained on data from a rough simulator are used to improve the controller's performance. We implement the method on our soft robot, a Honeycomb Pneumatic Network (HPN) arm. Experiments show that using pre-trained models not only reduces the amount of real-world training data required, but also greatly improves the controller's accuracy and convergence rate.
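To make the sim-to-real idea in this abstract concrete, the following is a minimal sketch of tabular Q-learning in which the Q-table is pre-trained in a rough simulator and then fine-tuned with a small number of real-world episodes. The environment class, discretization sizes, and hyperparameters below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

N_STATES, N_ACTIONS = 100, 4  # placeholder discretization sizes (assumption)

class DiscreteEnv:
    """Stand-in for the HPN arm or its simulator: a Gym-style reset/step
    interface over a discretized state-action space. Replace with real dynamics."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)
    def reset(self):
        self.s, self.t = int(self.rng.integers(N_STATES)), 0
        return self.s
    def step(self, a):
        # Toy noisy transition and reward; a real setup would command the arm here.
        self.s = int((self.s + a + self.rng.integers(-1, 2)) % N_STATES)
        self.t += 1
        r = -abs(self.s - N_STATES // 2) / N_STATES
        return self.s, r, self.t >= 50

def q_learning(env, q, episodes, alpha=0.1, gamma=0.99, eps=0.1):
    """Epsilon-greedy Q-learning, updating the table q in place."""
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = np.random.randint(N_ACTIONS) if np.random.rand() < eps else int(np.argmax(q[s]))
            s2, r, done = env.step(a)
            q[s, a] += alpha * (r + gamma * q[s2].max() - q[s, a])  # one-step TD update
            s = s2
    return q

q = np.zeros((N_STATES, N_ACTIONS))
q = q_learning(DiscreteEnv(seed=0), q, episodes=10_000)  # cheap pre-training in the rough simulator
q = q_learning(DiscreteEnv(seed=1), q, episodes=200)     # few real-world episodes to fine-tune
```

The same warm-start pattern carries over to function approximation: initialize a Q-network from the simulator run, then continue training on the physical arm.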
This paper presents an offset-free model predictive controller for fast and accurate control of a spherical soft robotic arm. In this control scheme, a linear model is combined with an online disturbance estimation technique to systematically compensate …
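For context, "offset-free" MPC of this kind typically augments the nominal linear model with a constant disturbance state that is estimated online; the generic form is sketched below (the paper's specific matrices and estimator are not reproduced here):

x_{k+1} = A x_k + B u_k + B_d d_k,   d_{k+1} = d_k,   y_k = C x_k + C_d d_k

An observer produces estimates (x̂_k, d̂_k), and at each step the steady-state target (x_s, u_s) is recomputed from

C x_s + C_d d̂_k = r,   (I − A) x_s − B u_s = B_d d̂_k,

so that the predictive controller tracks the reference r without steady-state offset despite model mismatch.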
Soft robots promise improved safety and capability over rigid robots when deployed in complex, delicate, and dynamic environments. However, the infinite degrees of freedom and highly nonlinear dynamics of these systems severely complicate their modeling …
Standardized evaluation measures have aided the progress of machine learning approaches in disciplines such as computer vision and machine translation. In this paper, we make the case that robotic learning would also benefit from benchmarking, and …
The compliance of soft robotic arms makes the development of accurate kinematic and dynamic models especially challenging. The most widely used model in soft robot kinematics assumes Piecewise Constant Curvature (PCC). However, PCC fails to effectively …
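For reference, the PCC assumption that this abstract critiques reduces each segment to three arc parameters: curvature, bending-plane angle, and arc length. The sketch below maps them to a rigid-body transform using the standard arc-geometry form; the function and variable names are illustrative, not this paper's code.

```python
import numpy as np

def pcc_transform(kappa, phi, ell):
    """Homogeneous transform of one constant-curvature segment:
    curvature kappa [1/m], bending-plane angle phi [rad], arc length ell [m]."""
    if abs(kappa) < 1e-9:                      # straight-segment limit
        T = np.eye(4)
        T[2, 3] = ell
        return T
    theta = kappa * ell                        # total bending angle of the arc
    Rz = lambda a: np.array([[np.cos(a), -np.sin(a), 0, 0],
                             [np.sin(a),  np.cos(a), 0, 0],
                             [0,          0,         1, 0],
                             [0,          0,         0, 1]])
    # Bend by theta about the y-axis while translating along the circular arc.
    arc = np.array([[np.cos(theta),  0, np.sin(theta), (1 - np.cos(theta)) / kappa],
                    [0,              1, 0,             0],
                    [-np.sin(theta), 0, np.cos(theta), np.sin(theta) / kappa],
                    [0,              0, 0,             1]])
    # Rotate into the bending plane, apply the in-plane arc, rotate back.
    return Rz(phi) @ arc @ Rz(-phi)

# Example: tip pose of a 0.1 m segment bent with curvature 5 /m in plane phi = 0.3 rad
T_tip = pcc_transform(5.0, 0.3, 0.1)
```

Chaining pcc_transform over all segments yields the arm's forward kinematics; the abstract's criticism is that physical segments deviate from these ideal circular arcs.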
A technological revolution is occurring in the field of robotics, driven by data-driven deep learning. However, building a dataset for each local robot is laborious. Meanwhile, data islands between local robots prevent their data from being utilized …