
Optimization Algorithm for Feedback and Feedforward Policies towards Robot Control Robust to Sensing Failures

 Added by Taisuke Kobayashi
Publication date: 2021
Research language: English





Model-free or learning-based control, in particular reinforcement learning (RL), is expected to be applied to complex robotic tasks. Traditional RL requires the policy to be optimized to be state-dependent; that is, the policy is a kind of feedback (FB) controller. Because such an FB controller needs correct state observation, it is sensitive to sensing failures. To alleviate this drawback of FB controllers, feedback error learning integrates an FB controller with a feedforward (FF) controller. RL could likewise benefit from handling FB/FF policies together, but to the best of our knowledge, a methodology for learning them in a unified manner has not been developed. In this paper, we propose a new optimization problem for optimizing the FB and FF policies simultaneously. Inspired by control as inference, the optimization problem minimizes/maximizes the divergences between the trajectory predicted by the composed policy with a stochastic dynamics model and the optimal/non-optimal trajectories. By approximating the stochastic dynamics model with a variational method, we naturally derive a regularization between the FB and FF policies. In numerical simulations and a robot experiment, we verified that the proposed method can stably optimize the composed policy even with a learning law different from that of traditional RL. In addition, we demonstrated that the FF policy is robust to sensing failures and can maintain the optimal motion. The accompanying video is available on YouTube: https://youtu.be/zLL4uXIRmrE
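The abstract describes a policy composed of a state-dependent FB part and a state-independent FF part, where the FF part alone can keep producing the nominal motion when state observation fails. The sketch below is only an illustration of that idea, not the paper's formulation: it assumes linear FB gains and a time-indexed FF action sequence, and the names (ComposedPolicy, K, u_ff) are made up for the example.

```python
import numpy as np

# Minimal sketch of a composed FB/FF policy (illustrative, not the paper's method).
# The FB term depends on the observed state; the FF term depends only on the time
# index, so it still produces the nominal motion when sensing fails.
class ComposedPolicy:
    def __init__(self, n_state, n_action, horizon):
        self.K = np.zeros((n_action, n_state))      # learnable linear feedback gains
        self.u_ff = np.zeros((horizon, n_action))   # learnable time-indexed feedforward actions

    def action(self, x, t, sensing_ok=True):
        fb = self.K @ x if sensing_ok else np.zeros(self.K.shape[0])
        return self.u_ff[t] + fb                    # composed action u = u_ff(t) + K x

    def coupling_penalty(self, x):
        # One possible regularizer between the FB/FF parts: penalize how far the
        # composed action drifts from the FF action alone, i.e. ||K x||^2.
        return float(np.sum((self.K @ x) ** 2))
```

Under such a decomposition, dropping the FB term on a sensing failure leaves the FF sequence to replay the learned motion, which is the robustness property demonstrated in the paper's experiments.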

Related research

Domain randomization (DR) is a successful technique for learning robust policies for robot systems when the dynamics of the target robot system are unknown. The success of policies trained with domain randomization, however, is highly dependent on the correct selection of the randomization distribution. The majority of success stories typically use real-world data to carefully select the DR distribution, or incorporate real-world trajectories to better estimate appropriate randomization distributions. In this paper, we consider the problem of finding good domain randomization parameters for simulation, without prior access to data from the target system. We explore the use of gradient-based search methods to learn a domain randomization distribution with the following properties: 1) the trained policy should be successful in environments sampled from the domain randomization distribution; 2) the domain randomization distribution should be wide enough that experience similar to the target robot system is observed during training, while respecting the practicality of training finite-capacity models. These two properties aim to ensure that the trajectories encountered in the target system are close to those observed during training, as existing methods in machine learning are better suited for interpolation than extrapolation. We show how adapting the domain randomization distribution while training context-conditioned policies yields improvements in jump-start and asymptotic performance when transferring a learned policy to the target environment.
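As an illustration only (not the authors' algorithm), the toy sketch below adapts a Gaussian DR distribution over a single dynamics parameter so that sampled environments keep the policy successful while a width bonus keeps the distribution broad. Finite differences stand in for the gradient-based search, and all names (dr_objective, adapt_dr, policy_success) are hypothetical.

```python
import numpy as np

def dr_objective(mu, sigma, policy_success, eps_samples, width_weight=0.05):
    # Expected success of the fixed policy on environments drawn from N(mu, sigma),
    # plus a bonus for keeping the randomization distribution wide.
    params = mu + sigma * eps_samples
    return np.mean([policy_success(p) for p in params]) + width_weight * np.log(sigma)

def adapt_dr(mu, sigma, policy_success, lr=0.05, iters=100, h=1e-2, n=64):
    for _ in range(iters):
        eps = np.random.randn(n)  # common random numbers for both finite-difference evaluations
        g_mu = (dr_objective(mu + h, sigma, policy_success, eps)
                - dr_objective(mu - h, sigma, policy_success, eps)) / (2 * h)
        g_sigma = (dr_objective(mu, sigma + h, policy_success, eps)
                   - dr_objective(mu, sigma - h, policy_success, eps)) / (2 * h)
        mu, sigma = mu + lr * g_mu, max(sigma + lr * g_sigma, 1e-2)
    return mu, sigma
```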
Yulong Dong, Xiang Meng, Lin Lin (2019)
Quantum variational algorithms have garnered significant interest recently, due to their feasibility of being implemented and tested on noisy intermediate scale quantum (NISQ) devices. We examine the robustness of the quantum approximate optimization algorithm (QAOA), which can be used to solve certain quantum control problems, state preparation problems, and combinatorial optimization problems. We demonstrate that the error of QAOA simulation can be significantly reduced by robust control optimization techniques, specifically, by sequential convex programming (SCP), to ensure error suppression in situations where the source of the error is known but not necessarily its magnitude. We show that robust optimization improves both the objective landscape of QAOA as well as overall circuit fidelity in the presence of coherent errors and errors in initial state preparation.
Bayesian optimization is an efficient nonlinear optimization method where the queries are carefully selected to gather information about the optimum location. Thus, in the context of policy search, it has been called active policy search. The main ingredients of Bayesian optimization for sample efficiency are the probabilistic surrogate model and the optimal decision heuristics. In this work, we exploit these ingredients to make policy search algorithms robust to several issues. We combine several methods and show how their interaction works better than the sum of the parts. First, to deal with input noise and provide a safe and repeatable policy, we use an improved version of unscented Bayesian optimization. Then, to deal with mismodeling errors and improve exploration, we use stochastic meta-policies for query selection and an adaptive kernel. We compare the proposed algorithm with previous results in several optimization benchmarks and robot tasks, such as pushing objects with a robot arm or path finding with a rover.
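A minimal active-policy-search loop, under assumed simplifications (a 1-D policy parameter, a plain GP surrogate from scikit-learn, and a UCB acquisition rule), might look like the following. It illustrates the surrogate-plus-acquisition structure only, not the unscented or meta-policy variants the abstract proposes.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def ucb_policy_search(evaluate_policy, bounds=(-1.0, 1.0), n_init=5, n_iter=20, kappa=2.0):
    # Initial random evaluations of the policy parameter on the (simulated) robot task.
    X = np.random.uniform(bounds[0], bounds[1], size=(n_init, 1))
    y = np.array([evaluate_policy(x[0]) for x in X])
    gp = GaussianProcessRegressor(normalize_y=True)   # probabilistic surrogate model
    for _ in range(n_iter):
        gp.fit(X, y)
        cand = np.linspace(bounds[0], bounds[1], 200).reshape(-1, 1)
        mu, std = gp.predict(cand, return_std=True)
        x_next = cand[np.argmax(mu + kappa * std)]    # UCB decision heuristic
        X = np.vstack([X, x_next])
        y = np.append(y, evaluate_policy(x_next[0]))
    return X[np.argmax(y), 0]                         # best policy parameter found
```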
A new generation of automated bin-picking systems using deep learning is evolving to support the increasing demand for e-commerce. To accommodate a wide variety of products, many automated systems include multiple gripper types and/or tool changers. However, for some objects, sequential grasp failures are common: when a computed grasp fails to lift and remove the object, the bin is often left unchanged; as the sensor input is consistent, the system retries the same grasp over and over, resulting in a significant reduction in mean successful picks per hour (MPPH). Based on an empirical study of sequential failures, we characterize a class of sequential failure objects (SFOs) -- objects prone to sequential failures -- based on a novel taxonomy. We then propose three non-Markov picking policies that incorporate memory of past failures to modify subsequent actions. Simulation experiments on SFO models and the EGAD dataset suggest that the non-Markov policies significantly outperform the Markov policy in terms of sequential failure rate and MPPH. In physical experiments on 50 heaps of 12 SFOs, the most effective non-Markov policy increased MPPH over the Dex-Net Markov policy by 107%.
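The key idea, remembering grasps that failed on an unchanged bin so the next attempt differs, could be sketched roughly as follows. This is illustrative only, not Dex-Net or the paper's policies; the exclusion-radius heuristic and names are assumptions.

```python
import numpy as np

class NonMarkovPicker:
    """Grasp selector that remembers failed grasps on an unchanged bin."""

    def __init__(self, min_dist=0.02):
        self.failed_grasps = []   # centers of grasps that failed without changing the bin
        self.min_dist = min_dist  # exclusion radius (m) around remembered failures

    def select(self, candidate_grasps, scores):
        # Try candidates from best to worst score, skipping those too close to a
        # previously failed grasp, instead of blindly retrying the top-scored grasp.
        order = np.argsort(scores)[::-1]
        for i in order:
            g = np.asarray(candidate_grasps[i])
            if all(np.linalg.norm(g - f) > self.min_dist for f in self.failed_grasps):
                return g
        return np.asarray(candidate_grasps[order[0]])  # fall back to the best-scored grasp

    def report_failure(self, grasp):
        self.failed_grasps.append(np.asarray(grasp))
```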
This paper presents an impedance control architecture for an electroacoustic absorber combining both a feedforward and feedback microphone-based system on a current driven loudspeaker. Feedforward systems enable good performance for direct impedance control. However, inaccuracies in the required actuator model can lead to a loss of passivity, which can cause unstable behaviors. The feedback contribution allows the absorber to better handle model errors and still achieve an accurate impedance. Numerical and experimental studies were conducted to compare this new architecture against a state-of-the-art feedforward control method.
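In generic terms (a simplified scalar sketch with placeholder gains, not the paper's electroacoustic design), the combination amounts to a model-based feedforward term plus a feedback correction on the measured error:

```python
# Simplified scalar sketch of a hybrid feedforward/feedback control law (assumed form).
def hybrid_control(reference, measurement, model_gain, k_fb):
    u_ff = model_gain * reference             # feedforward: relies on the nominal actuator model
    u_fb = k_fb * (reference - measurement)   # feedback: corrects residual error from model mismatch
    return u_ff + u_fb
```

When the actuator model is accurate, the feedforward term does most of the work; the feedback term mainly absorbs model errors, which is the role the abstract assigns to the feedback contribution.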
