This paper studies the modeling of the probe-drogue aerial refueling system under aerodynamic disturbances and proposes a docking control method based on terminal iterative learning control to compensate for the docking errors that these disturbances cause. The designed controller works as an add-on to the trajectory-generation function of the original autopilot system. Simulations based on our previously published simulation environment show that the proposed method learns fast enough to achieve successful docking under aerodynamic disturbances, including the bow wave effect.
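As a concrete illustration of the trial-to-trial update at the heart of such a terminal ILC scheme, the sketch below learns a correction to the trajectory generator's aim point using only the terminal docking error of each attempt. The toy "plant", learning gain, and capture radius are illustrative assumptions, not the paper's aerial refueling model.

```python
import numpy as np

# Minimal sketch of a terminal iterative learning control (TILC) update:
# only the terminal docking error of each attempt is measured, and an
# aim-point correction is refined across trials. All values are toy stand-ins.

rng = np.random.default_rng(0)
d = np.array([0.8, -0.5])                # terminal offset caused by the disturbance (toy value)

def terminal_error(r):
    """One docking attempt: terminal probe-drogue offset given aim-point correction r."""
    return r + d + 0.02 * rng.standard_normal(2)   # small trial-to-trial variation

L = 0.6 * np.eye(2)                      # learning gain matrix
r = np.zeros(2)                          # correction fed to the trajectory generator

for trial in range(20):
    e = terminal_error(r)                # terminal docking error of this attempt
    print(f"trial {trial}: |e| = {np.linalg.norm(e):.3f}")
    if np.linalg.norm(e) < 0.05:         # within the drogue capture radius: docking succeeds
        break
    r = r - L @ e                        # TILC update: shift next trial's aim point against the error
```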
Optimal and Learning Control for Autonomous Robots has been taught in the Robotics, Systems and Control Master's program at ETH Zurich with the aim of teaching optimal control and reinforcement learning for closed-loop control problems from a unified point of view. The starting point is the formulation of an optimal control problem, from which the different types of solutions and algorithms are derived. These lecture notes aim to support this unified view with a unified notation wherever possible, and to offer some help in translating between the terminology and notation of the different fields. The course assumes basic knowledge of control theory, linear algebra, and stochastic calculus.
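As a concrete anchor for that unified view, one generic finite-horizon stochastic optimal control problem (the notation here is generic, not necessarily that of the notes) reads:

```latex
\min_{u_0,\dots,u_{T-1}} \; \mathbb{E}\!\left[\Phi(x_T) + \sum_{t=0}^{T-1} \ell(x_t,u_t)\right]
\quad \text{s.t.} \quad x_{t+1} = f(x_t,u_t,w_t), \qquad x_0 \ \text{given}.
```

Dynamic programming, LQR, model predictive control, and reinforcement learning can then be viewed as different ways of (approximately) solving instances of this problem, differing mainly in what is assumed known about the dynamics f and the cost ℓ.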
Enforcing safety while tracking trajectories precisely is critical for aerial robots subject to wind disturbances. In this paper, we present a learning-based safety-preserving cascaded quadratic programming control (SPQC) scheme for safe trajectory tracking under wind disturbances. The SPQC controller consists of a position-level controller and an attitude-level controller. Gaussian Processes (GPs) are used to estimate the uncertainties caused by wind disturbances, and a nominal Lyapunov-based cascaded quadratic program (QP) controller is designed to track the reference trajectory. To avoid unexpected obstacles during tracking, safety constraints represented by control barrier functions (CBFs) are enforced on each nominal QP controller with minimal modification. The performance of the proposed SPQC controller is illustrated through numerical validations of (a) trajectory tracking under different wind disturbances, and (b) trajectory tracking in a cluttered environment with a dense, time-varying obstacle field under wind disturbances.
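The "minimal modification" step can be illustrated on a single-integrator toy model with one circular obstacle, where the CBF-constrained QP admits a closed-form solution. Everything below (dynamics, gains, obstacle, nominal input) is a simplified stand-in for the paper's cascaded quadrotor controller, not its actual formulation.

```python
import numpy as np

# Toy CBF-QP safety filter for x_dot = u with one circular obstacle.
# h(x) = ||x - x_obs||^2 - r^2 is a control barrier function; the QP
#   min ||u - u_nom||^2  s.t.  dh/dx . u >= -alpha * h(x)
# has a closed-form solution for this single constraint.

x_obs, r_safe, alpha = np.array([1.0, 0.0]), 0.5, 2.0

def cbf_qp_filter(x, u_nom):
    h = np.dot(x - x_obs, x - x_obs) - r_safe**2     # barrier value
    grad_h = 2.0 * (x - x_obs)                       # dh/dx
    if grad_h @ u_nom >= -alpha * h:                 # nominal input already safe
        return u_nom
    # Otherwise project u_nom onto the constraint boundary (closed-form QP solution)
    lam = (-alpha * h - grad_h @ u_nom) / (grad_h @ grad_h)
    return u_nom + lam * grad_h

x = np.array([0.0, 0.05])                            # current position
u_nom = np.array([1.0, 0.0])                         # nominal tracking input (toy value)
print(cbf_qp_filter(x, u_nom))                       # safe input, slowed near the obstacle
```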
In this paper, a method is presented for lowering the energy consumption and/or increasing the speed of a standard manipulator spray painting a surface. The approach is based on the observation that a small angle between the spray direction and the surface normal does not affect the quality of the paint job. Recent results in set-based kinematic control are utilized to develop a switched control system, where this angle is defined as a set-based task with a maximum allowed limit. Four different set-based methods are implemented and tested on a UR5 manipulator from Universal Robots. Experimental results verify the correctness of the method, and demonstrate that the set-based approaches can substantially lower the paint time and energy consumption compared to the current standard solution.
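A minimal sketch of the set-based switching logic follows: the spray angle sigma(q) is treated as a set-based task that is ignored while it stays below its limit, and frozen at the limit (sigma_dot = 0 as the top-priority task) once the boundary is reached. The Jacobians and values are toy placeholders, not the UR5 implementation or the paper's exact methods.

```python
import numpy as np

SIGMA_MAX = np.deg2rad(20.0)                         # maximum allowed spray angle

def qdot_command(J_pos, v_des, J_sigma, sigma):
    """Joint velocity command with a set-based limit on the spray angle sigma."""
    qdot_nom = np.linalg.pinv(J_pos) @ v_des         # nominal: track the paint path
    if sigma < SIGMA_MAX or float(J_sigma @ qdot_nom) <= 0.0:
        return qdot_nom                              # inside the set, or moving back into it
    Js = J_sigma.reshape(1, -1)
    N = np.eye(Js.shape[1]) - np.linalg.pinv(Js) @ Js    # null space of the angle task
    return N @ np.linalg.pinv(J_pos @ N) @ v_des     # track the path while holding sigma

# Toy usage: 6-DOF arm, 3-D tool-velocity task, angle already at its limit.
rng = np.random.default_rng(1)
J_pos, J_sig = rng.standard_normal((3, 6)), rng.standard_normal(6)
print(qdot_command(J_pos, np.array([0.1, 0.0, 0.0]), J_sig, SIGMA_MAX))
```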
Model predictive control (MPC) is an effective method for controlling robotic systems, particularly autonomous aerial vehicles such as quadcopters. However, MPC can be computationally demanding and typically requires estimating the state of the system, which can be challenging in complex, unstructured environments. Reinforcement learning can in principle forgo the need for explicit state estimation and acquire a policy that directly maps sensor readings to actions, but it is difficult to apply to unstable systems that are liable to fail catastrophically during training, before an effective policy has been found. We propose to combine MPC with reinforcement learning in the framework of guided policy search, where MPC is used to generate data at training time under full state observations provided by an instrumented training environment. This data is used to train a deep neural network policy, which is allowed to access only the raw observations from the vehicle's onboard sensors. After training, the neural network policy can successfully control the robot without knowledge of the full state, and at a fraction of the computational cost of MPC. We evaluate our method by learning obstacle avoidance policies for a simulated quadrotor, using simulated onboard sensors and no explicit state estimation at test time.
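At a high level, this training scheme reduces to supervised learning on MPC-labeled data; the sketch below makes that concrete with a toy plant, a linear "teacher" standing in for MPC, and ridge regression standing in for the deep network. None of these stand-ins are the paper's actual components.

```python
import numpy as np

# A state-aware teacher labels actions; a policy seeing only raw sensor
# observations is fit to imitate it, so no state estimator is needed at test time.

rng = np.random.default_rng(0)

def mpc_teacher(x):           # stand-in for MPC: a known linear state-feedback law
    return -np.array([[1.2, 0.8]]) @ x

def sensors(x):               # raw observation: noisy, indirect view of the state
    return np.array([x[0] + 0.01 * rng.standard_normal(), x[0] - x[1]])

# Collect (observation, teacher action) pairs in the instrumented environment.
obs, acts = [], []
for _ in range(500):
    x = rng.standard_normal(2)
    obs.append(sensors(x))
    acts.append(mpc_teacher(x).ravel())
O, A = np.array(obs), np.array(acts)

# Fit the observation-to-action policy (ridge regression in place of a deep net).
W = np.linalg.solve(O.T @ O + 1e-3 * np.eye(2), O.T @ A)
policy = lambda o: o @ W      # test time: action from raw sensors only
print("test action:", policy(sensors(rng.standard_normal(2))))
```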
This paper presents a stochastic model predictive control (MPC) algorithm that leverages short-term probabilistic forecasts for dispatching and rebalancing Autonomous Mobility-on-Demand systems (AMoD, i.e., fleets of self-driving vehicles). We first present the core stochastic optimization problem in terms of a time-expanded network flow model. Then, to improve its tractability, we introduce two key relaxations. First, we replace the original stochastic problem with a Sample Average Approximation (SAA) and characterize its performance guarantees. Second, we split the controller into two parts, addressing the task of assigning vehicles to outstanding customers separately from that of rebalancing. This allows the problem to be solved as two totally unimodular linear programs, which scale easily to large problem sizes. Finally, we test the proposed algorithm in two scenarios based on real data and show that it outperforms prior state-of-the-art algorithms. In particular, in a simulation using customer data from DiDi Chuxing, the algorithm presented here achieves a 62.3 percent reduction in customer waiting time compared to state-of-the-art non-stochastic algorithms.
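The two-part decomposition and the sample-average idea can be sketched as follows; the positions, demand scenarios, and matching cost are toy assumptions rather than the paper's network-flow formulation or DiDi data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
vehicles = rng.uniform(0, 10, size=(5, 2))            # idle vehicle positions
customers = rng.uniform(0, 10, size=(3, 2))           # outstanding requests

# Part 1: assign vehicles to outstanding customers (min total travel distance,
# a totally unimodular problem solved here as a min-cost matching).
cost = np.linalg.norm(vehicles[:, None, :] - customers[None, :, :], axis=2)
veh_idx, cust_idx = linear_sum_assignment(cost)
print("assignment:", list(zip(veh_idx, cust_idx)))

# Part 2 (sketched): rebalance remaining vehicles toward forecast demand, where
# the forecast is a sample average over K sampled demand scenarios (SAA).
K, n_stations = 50, 4
scenarios = rng.poisson(lam=[1.0, 3.0, 0.5, 2.0], size=(K, n_stations))
forecast = scenarios.mean(axis=0)                     # SAA of expected demand per station
print("rebalancing targets (expected demand):", forecast)
```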
Xunhua Dai, Quan Quan, Jinrui Ren (2018). "Terminal Iterative Learning Control for Autonomous Aerial Refueling under Aerodynamic Disturbances."