
Variable impedance control and learning -- A review

Posted by: Dr. Fares Abu-Dakka
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Robots that physically interact with their surroundings, in order to accomplish tasks or assist humans in their activities, need to exploit contact forces in a safe and proficient manner. Impedance control is a prominent approach in robotics for avoiding large impact forces while operating in unstructured environments. In such environments, the conditions under which the interaction occurs may vary significantly during task execution. This demands that robots be endowed with on-line adaptation capabilities to cope with sudden and unexpected changes in the environment. In this context, variable impedance control arises as a powerful tool to modulate the robot's behavior in response to variations in its surroundings. In this survey, we present the state of the art of approaches to variable impedance control from control and learning perspectives (separately and jointly). Moreover, we propose a new taxonomy for mechanical impedance based on variability, learning, and control. The objective of this survey is to bring together the concepts and efforts developed so far in this field, and to describe the advantages and disadvantages of each approach. The survey concludes with open issues in the field and an envisioned framework that may potentially solve them.
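
To make the notion of variable impedance concrete, the sketch below implements a generic Cartesian impedance law with time-varying stiffness and damping gains, softening one axis during a contact phase. The function name, gain values, and the critically-damped gain coupling are illustrative assumptions, not taken from the survey.

import numpy as np

def variable_impedance_force(x, x_dot, x_des, x_des_dot, K_t, D_t):
    # Cartesian impedance law: F = K(t) (x_d - x) + D(t) (xd_d - xd),
    # where stiffness K_t and damping D_t may change over time.
    return K_t @ (x_des - x) + D_t @ (x_des_dot - x_dot)

# Example: keep full stiffness in free space, but soften the z-axis during
# contact so that unexpected impacts generate smaller interaction forces.
K_free    = np.diag([500.0, 500.0, 500.0])   # N/m, free-space motion
K_contact = np.diag([500.0, 500.0, 100.0])   # N/m, compliant along z
D_contact = 2.0 * np.sqrt(K_contact)         # critical damping for unit mass (assumption)

F = variable_impedance_force(
    x=np.array([0.40, 0.00, 0.10]),
    x_dot=np.zeros(3),
    x_des=np.array([0.40, 0.00, 0.08]),
    x_des_dot=np.zeros(3),
    K_t=K_contact,
    D_t=D_contact,
)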


Read also

Many manipulation tasks require robots to interact with unknown environments. In such applications, the ability to adapt the impedance according to different task phases and environment constraints is crucial for safety and performance. Although many approaches based on deep reinforcement learning (RL) and learning from demonstration (LfD) have been proposed to obtain variable impedance skills for contact-rich manipulation tasks, these skills are typically task-specific and can be sensitive to changes in task settings. This paper proposes an inverse reinforcement learning (IRL) based approach to recover both the variable impedance policy and the reward function from expert demonstrations. We explore different action spaces for the reward function to achieve a more general representation of expert variable impedance skills. Experiments on two variable impedance tasks (Peg-in-Hole and Cup-on-Plate) were conducted both in simulation and on a real FANUC LR Mate 200iD/7L industrial robot. Comparisons with behavior cloning and force-based IRL show that the reward function learned in the gain action space transfers better than the one learned in the force space. Experiment videos are available at https://msc.berkeley.edu/research/impedance-irl.html.
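
For illustration, the snippet below contrasts the two action-space choices compared above: a "force space" policy outputs the interaction force directly, while a "gain space" policy outputs per-axis stiffness that an impedance law converts into a force. The function and the stiffness-to-damping coupling are hypothetical, not the paper's exact formulation.

import numpy as np

def action_to_force(action, mode, x, x_des, x_dot):
    # Map a policy action to a commanded force under two action-space choices.
    if mode == "force":
        return np.asarray(action)              # policy outputs the force directly
    if mode == "gain":
        K = np.diag(action)                    # policy outputs per-axis stiffness
        D = 2.0 * np.sqrt(K)                   # damping coupled to stiffness (assumption)
        return K @ (x_des - x) - D @ x_dot     # impedance law yields the force
    raise ValueError(f"unknown action-space mode: {mode}")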
Reinforcement Learning (RL) of contact-rich manipulation tasks has yielded impressive results in recent years. While many studies in RL focus on varying the observation space or reward model, few have focused on the choice of action space (e.g. joint or end-effector space, position, velocity, etc.). However, studies in robot motion control indicate that choosing an action space that conforms to the characteristics of the task can simplify exploration and improve robustness to disturbances. This paper studies the effect of different action spaces in deep RL and advocates for Variable Impedance Control in End-effector Space (VICES) as an advantageous action space for constrained and contact-rich tasks. We evaluate multiple action spaces on three prototypical manipulation tasks: Path Following (a task with no contact), Door Opening (a task with kinematic constraints), and Surface Wiping (a task with continuous contact). We show that VICES improves sample efficiency, maintains low energy consumption, and ensures safety across all three experimental setups. Furthermore, RL policies learned with VICES can transfer across different robot models in simulation, and from simulation to the real robot. Further information is available at https://stanfordvl.github.io/vices.
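
As a rough sketch of end-effector-space variable impedance used as an action interface, the snippet below computes a wrench from variable gains and maps it to joint torques through the Jacobian transpose. This is a generic operational-space formulation under assumed function names, not necessarily the exact VICES controller.

import numpy as np

def vices_style_torques(jacobian, x, x_dot, x_des, K, D):
    # End-effector-space impedance: desired wrench from the pose error and
    # variable gains, mapped to joint torques via tau = J^T F.
    wrench = K @ (x_des - x) - D @ x_dot
    return jacobian.T @ wrench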
Fan Wu, Matthew Howard, 2020
Compliant robots have seen successful applications in energy-efficient locomotion and cyclic manipulation. However, the exploitation of variable physical impedance for energy-efficient sequential movements has not been extensively addressed. This work employs a hierarchical approach that encapsulates low-level optimal control for sub-movement generation within an outer loop of iterative policy improvement, thereby leveraging the benefits of both optimal control and reinforcement learning. The framework optimizes the efficiency trade-off for minimal energy expenditure in a model-free manner, by taking into account cost-function weighting, variable impedance exploitation, and transition timing, which together constitute the skill of compliance. The effectiveness of the proposed method is evaluated on two consecutive reaching tasks with a variable impedance actuator. The results demonstrate significant energy savings from improving the skill of compliance, with a reduction in electrical consumption of about 30% measured in a physical robot experiment.
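
A minimal sketch of the hierarchical idea described above: an outer, model-free search over high-level parameters (cost weights, impedance settings, transition timing) wrapped around an inner-loop optimal controller. The random-perturbation update and the function signatures are assumptions standing in for the paper's iterative policy-improvement method.

import numpy as np

def hierarchical_policy_improvement(inner_optimal_control, measure_energy,
                                    theta, n_iters=50, n_rollouts=10, sigma=0.1):
    # Outer loop: perturb high-level parameters, run the low-level optimal
    # controller for each candidate, and keep whatever reduces measured energy.
    best_cost = np.inf
    for _ in range(n_iters):
        candidates = theta + sigma * np.random.randn(n_rollouts, theta.size)
        for cand in candidates:
            trajectory = inner_optimal_control(cand)   # sub-movement generation
            cost = measure_energy(trajectory)          # e.g. electrical consumption
            if cost < best_cost:
                best_cost, theta = cost, cand
    return theta, best_cost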
The natural impedance, or dynamic relationship between force and motion, of a human operator can determine the stability of exoskeletons that use interaction-torque feedback to amplify human strength. While human impedance is typically modelled as a linear system, our experiments on a single-joint exoskeleton testbed involving 10 human subjects show evidence of nonlinear behavior: a low-frequency asymptotic phase of the human's dynamic stiffness that differs from the expected zero, and an unexpectedly consistent damping ratio as the stiffness and inertia vary. To explain these observations, this paper considers a new frequency-domain model of the human joint dynamics featuring a complex-valued stiffness comprising a real stiffness term and a hysteretic damping term. Using a statistical F-test, we show that the hysteretic damping term is not only significant but is even more significant than the linear damping term. Further analysis reveals a linear trend linking hysteretic damping and the real part of the stiffness, which allows us to simplify the complex stiffness model down to a one-parameter system. We then introduce and demonstrate a customizable fractional-order controller that exploits this hysteretic damping behavior to improve strength-amplification bandwidth while maintaining stability, and explore a tuning approach which ensures that this stability property is robust to muscle co-contraction for each individual.
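
To illustrate the complex-stiffness model, the snippet below evaluates a joint's dynamic stiffness with a hysteretic damping term that, unlike viscous damping, does not vanish at low frequency, which is what produces the nonzero low-frequency phase reported above. The parameter values are illustrative assumptions, not the identified subject data.

import numpy as np

def joint_dynamic_stiffness(omega, inertia, k, b, eta):
    # K*(jw) = -I w^2 + j b w + k (1 + j eta): real stiffness k, viscous damping b,
    # and a frequency-independent hysteretic damping term k*eta.
    return -inertia * omega**2 + 1j * b * omega + k * (1.0 + 1j * eta)

w = np.logspace(-1, 2, 200)                           # rad/s
K = joint_dynamic_stiffness(w, inertia=0.1, k=50.0, b=0.5, eta=0.2)
low_freq_phase_deg = np.angle(K[0], deg=True)         # nonzero due to eta, not b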
Robust dynamic interactions are required to move robots in daily environments alongside humans. Optimisation and learning methods have been used to mimic and reproduce human movements. However, they are often not robust and their generalisation is limited. This work proposes a hierarchical control architecture for robot manipulators that can reproduce human-like motions under unknown interaction dynamics. Our results show that the reproduced end-effector trajectories preserve the main characteristics of the original human motion recorded via a motion capture system, and are robust against external perturbations. The data indicate that some detailed movements are hard to reproduce because the hardware cannot reach the velocities recorded in human movements. Nevertheless, these technical limitations can be addressed with better hardware, and the proposed algorithms can still be applied to produce imitated motions.