
In-flight range optimization of multicopters using multivariable extremum seeking with adaptive step size

 Added by Xiangyu Wu
 Publication date 2020
Language: English





Limited flight range is a common problem for multicopters. To alleviate this problem, we propose a method for finding the optimal speed and heading of a multicopter when flying a given path to achieve the longest flight range. Based on a novel multivariable extremum seeking controller with adaptive step size, the method (a) does not require any power consumption model of the vehicle, (b) can adapt to unknown disturbances, (c) can be executed online, and (d) converges faster than the standard extremum seeking controller with constant step size. We conducted indoor experiments to validate the effectiveness of this method under different payloads and initial conditions, and showed that it is able to converge more than 30% faster than the standard extremum seeking controller. This method is especially useful for applications such as package delivery, where the size and weight of the payload differ for different deliveries and the power consumption of the vehicle is hard to model.
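
As a rough illustration of the idea, the sketch below runs a standard perturbation-based extremum seeking loop over speed and heading with a simple step-size adaptation heuristic. The wind, the quadratic power model, and all gains, dither frequencies and the adaptation rule are hypothetical placeholders for measured power data; this is not the controller or tuning used in the paper.

```python
import numpy as np

def measured_cost(speed, heading):
    # Stand-in for the measured energy per unit distance (power / ground speed).
    # On the real vehicle this would come from onboard power and velocity sensors;
    # the quadratic air-drag power model and the wind below are hypothetical.
    wind = np.array([1.0, 0.5])                                    # assumed steady wind (m/s)
    air_vel = speed * np.array([np.cos(heading), np.sin(heading)]) - wind
    power = 120.0 + 4.0 * np.dot(air_vel, air_vel)                 # toy power model (W)
    return power / max(speed, 0.1)                                 # energy per metre travelled

def extremum_seeking(theta0, steps=20000, dt=0.02):
    # Two-variable extremum seeking with a crude adaptive step size.
    # theta = [speed (m/s), heading (rad)]; dither amplitudes/frequencies, filter
    # constants and the adaptation heuristic are illustrative, not the paper's tuning.
    theta = np.array(theta0, dtype=float)
    amp = np.array([0.3, 0.05])              # dither amplitudes
    omega = np.array([6.0, 9.5])             # distinct dither frequencies (rad/s)
    gain = np.array([0.3, 0.1])              # base update gains
    J_avg = measured_cost(*theta)            # washout state, initialized at the first measurement
    grad, prev_grad = np.zeros(2), np.zeros(2)
    scale = 1.0                              # adaptive step-size factor
    for i in range(steps):
        t = i * dt
        J = measured_cost(*(theta + amp * np.sin(omega * t)))
        J_avg += dt * 1.0 * (J - J_avg)                        # washout: track the mean cost
        demod = (2.0 / amp) * (J - J_avg) * np.sin(omega * t)  # demodulate the perturbation
        grad += dt * 1.0 * (demod - grad)                      # low-pass gradient estimate
        # grow the step while the gradient estimate keeps its direction, shrink on a flip
        scale = min(scale * 1.001, 4.0) if np.dot(grad, prev_grad) >= 0 else max(scale * 0.5, 0.25)
        prev_grad = grad.copy()
        theta -= dt * scale * gain * grad                      # descend energy-per-distance
    return theta

print(extremum_seeking([5.0, 0.0]))          # speed and heading found for the toy model
```
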

Related research

In an active power distribution system, Volt-VAR optimization (VVO) methods are employed to achieve network-level objectives such as minimization of network power losses. The commonly used model-based centralized and distributed VVO algorithms perform poorly in the absence of a communication system and with model and measurement uncertainties. In this paper, we propose a model-free local Volt-VAR control approach for network-level optimization that does not require communication with other decision-making agents. The proposed algorithm is based on an extremum-seeking approach that uses only local measurements to minimize the network power losses. To prove that the proposed extremum-seeking controller converges to the optimum solution, we also derive mathematical conditions under which the loss minimization problem is convex with respect to the control variables. Local controllers pose stability concerns during highly variable scenarios; thus, the proposed extremum-seeking controller is integrated with an adaptive-droop control module to provide a stable local control response. The proposed approach is validated on the IEEE 4-bus and IEEE 123-bus systems and achieves the loss minimization objective while maintaining the voltage within the pre-specified limits even during highly variable DER generation scenarios.
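
A minimal sketch of the kind of local extremum-seeking loop described above: a single inverter perturbs its reactive power setpoint and adjusts it from a locally measured loss proxy. The toy cost function, dither parameters and gains are assumptions for illustration; the paper's controller, its droop integration and its convexity conditions are not reproduced here.

```python
import numpy as np

def local_cost(q):
    # Hypothetical local loss proxy as a function of the inverter's reactive
    # power output q (kvar); in the real scheme this value would come from
    # local measurements, not from a known expression.
    return 5.0 + 0.02 * (q - 30.0) ** 2

def volt_var_extremum_seeking(q0=0.0, steps=10000, dt=0.01):
    # Single-node extremum seeking on the Volt-VAR setpoint: perturb q with a
    # sinusoid, demodulate the measured cost to estimate its gradient, and
    # integrate downhill.  Amplitude, frequency and gains are illustrative.
    q = q0
    a, omega, k = 2.0, 20.0, 3.0          # dither amplitude, frequency, integrator gain
    grad_filt = 0.0
    for i in range(steps):
        t = i * dt
        J = local_cost(q + a * np.sin(omega * t))
        grad_est = (2.0 / a) * J * np.sin(omega * t)       # demodulated gradient estimate
        grad_filt += dt * 2.0 * (grad_est - grad_filt)     # low-pass filter
        q -= dt * k * grad_filt                            # descend the local loss proxy
    return q

print(volt_var_extremum_seeking())   # settles near the minimizer q = 30, up to a small ripple
```
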
The autoencoder model uses an encoder to map data samples to a lower-dimensional latent space and then a decoder to map the latent space representations back to the data space. Implicitly, it relies on the encoder to approximate the inverse of the decoder network, so that samples can be mapped to and back from the latent space faithfully. This approximation may lead to sub-optimal latent space representations. In this work, we investigate a decoder-only method that uses gradient flow to encode data samples in the latent space. The gradient flow is defined based on a given decoder and aims to find the optimal latent space representation for any given sample through optimisation, eliminating the need for an approximate inversion through an encoder. Implementing gradient flow through ordinary differential equations (ODEs), we leverage the adjoint method to train a given decoder. We further show empirically that the costly integrals in the adjoint method may not be entirely necessary. Additionally, we propose a $2^{nd}$ order ODE variant of the method, which approximates Nesterov's accelerated gradient descent, with faster convergence per iteration. Commonly used ODE solvers can be quite sensitive to the integration step size depending on the stiffness of the ODE. To overcome this sensitivity for gradient flow encoding (GFE), we use an adaptive solver that prioritises minimising loss at each integration step. We assess the proposed method in comparison to the autoencoding model. In our experiments, GFE showed a much higher data efficiency than the autoencoding model, which can be crucial for data-scarce applications.
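
The sketch below illustrates decoder-only encoding by gradient flow in its simplest form: explicit Euler integration of dz/dt = -∇_z ||decoder(z) - x||^2 for a linear decoder, so the gradient is available in closed form. The linear decoder, fixed step size and iteration count are assumptions made to keep the example self-contained; the paper trains a neural decoder with the adjoint method and uses an adaptive solver.

```python
import numpy as np

def decoder(z, W, b):
    # Hypothetical linear decoder mapping a latent vector z to data space.
    # A real model would be a neural network; a linear map keeps the sketch
    # self-contained and gives the gradient in closed form.
    return W @ z + b

def encode_by_gradient_flow(x, W, b, z_dim=4, steps=1000, dt=0.01):
    # Decoder-only encoding: integrate dz/dt = -d/dz ||decoder(z) - x||^2 with
    # explicit Euler steps.  For the linear decoder the gradient is
    # 2 W^T (W z + b - x); a neural decoder would use automatic differentiation.
    # The paper integrates this flow with an adaptive ODE solver instead.
    z = np.zeros(z_dim)
    for _ in range(steps):
        residual = decoder(z, W, b) - x
        z -= dt * 2.0 * W.T @ residual
    return z

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))                   # data dimension 8, latent dimension 4
b = rng.standard_normal(8)
x = decoder(rng.standard_normal(4), W, b)         # a sample the decoder can represent exactly
z_hat = encode_by_gradient_flow(x, W, b)
print(np.linalg.norm(decoder(z_hat, W, b) - x))   # reconstruction error should be small
```
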
We introduce a new class of extremum seeking controllers able to achieve fixed-time convergence to the solution of optimization problems defined by static and dynamical systems. Unlike existing approaches in the literature, the convergence time of the proposed algorithms does not depend on the initial conditions and can be prescribed a priori by tuning the parameters of the controller. Specifically, our first contribution is a novel gradient-based extremum seeking algorithm for cost functions that satisfy the Polyak-Łojasiewicz (PL) inequality with some coefficient $\kappa > 0$, for which the extremum seeking controller guarantees a fixed upper bound on the convergence time that is independent of the initial conditions but dependent on the coefficient $\kappa$. Second, in order to remove the dependence on $\kappa$, we introduce a novel Newton-based extremum seeking algorithm that guarantees a fully assignable fixed upper bound on the convergence time, thus paralleling existing asymptotic results in Newton-based extremum seeking where the rate of convergence is fully assignable. Finally, we study the problem of optimizing dynamical systems, where the cost function corresponds to the steady-state input-to-output map of a stable but unknown dynamical system. In this case, after a time-scale transformation is performed, the proposed extremum seeking controllers achieve the same fixed upper bound on the convergence time as in the static case. Our results exploit recent gradient flow structures proposed by Garg and Panagou in [3], and are established by using averaging theory and singular perturbation theory for dynamical systems that are not necessarily Lipschitz continuous. We confirm the validity of our results via numerical simulations that illustrate the key advantages of the extremum seeking controllers presented in this paper.
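
The sketch below only illustrates the structural idea behind fixed-time convergence: a gradient flow combining one term that dominates far from the optimum with one that dominates near it, so the total travel time is bounded independently of the starting point. The exponents, gains and convergence conditions here are illustrative and differ from the flows used in the paper and in Garg and Panagou [3].

```python
import numpy as np

def fixed_time_gradient_flow(grad_f, x0, c1=2.0, c2=2.0, dt=1e-3, steps=20000):
    # Euler integration of a gradient flow with two exponent-rescaled terms:
    # the c2 term dominates far from the optimum and the c1 term dominates near
    # it, which is the structural idea that bounds the convergence time
    # independently of the initial condition.  Exponents, gains and the precise
    # conditions differ from the flows used in the paper and in [3]; this is a
    # structural illustration only.
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        g = grad_f(x)
        n = np.linalg.norm(g)
        if n < 1e-12:
            break
        x -= dt * (c1 * g / np.sqrt(n) + c2 * g * np.sqrt(n))
    return x

grad = lambda x: 2.0 * x                              # gradient of f(x) = ||x||^2 (satisfies PL)
print(fixed_time_gradient_flow(grad, [50.0, -80.0]))  # ends up very close to the origin
```
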
In this paper, a combined formation acquisition and cooperative extremum seeking control scheme is proposed for a team of three robots moving on a plane. The extremum seeking task is to find the maximizer of an unknown two-dimensional function on the plane. The function represents the signal strength field due to a source located at the maximizer, and is assumed to be locally concave around the maximizer and monotonically decreasing in distance to the source location. Taylor expansions of the field function at the location of a particular lead robot and at the maximizer are used, together with a gradient estimator based on the robots' signal strength measurements, to design and analyze the proposed control scheme. The proposed scheme is proven to exponentially and simultaneously (i) acquire the specified geometric formation and (ii) drive the lead robot to a specified neighborhood disk around the maximizer, whose radius depends on the specified desired formation size as well as the norm bounds of the Hessian of the field function. The performance of the proposed control scheme is evaluated using a set of simulation experiments.
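
A small sketch of the measurement-based idea: fit a local affine model to the three robots' signal-strength readings to estimate the field gradient at the lead robot, then ascend it while the formation is held rigid. The concave toy field, formation offsets and step size are assumptions; the paper's estimator, formation controller and stability analysis are not reproduced. In this toy run the leader settles a small, formation-size-dependent distance from the true source, which mirrors the neighborhood-disk result stated above.

```python
import numpy as np

def estimate_gradient(positions, readings):
    # Estimate the planar field gradient at the lead robot (index 0) from the
    # three robots' readings by fitting a local affine model
    # f(p) ~ f(p0) + g . (p - p0).  This is one simple measurement-based
    # gradient estimator; the paper's estimator differs in detail.
    p0, f0 = positions[0], readings[0]
    A = np.vstack([positions[1] - p0, positions[2] - p0])   # 2x2 relative positions
    b = np.array([readings[1] - f0, readings[2] - f0])
    return np.linalg.solve(A, b)                            # gradient estimate g

# Toy concave field with its maximum (the "source") at s = (3, -2).
source = np.array([3.0, -2.0])
field = lambda p: 10.0 - np.linalg.norm(p - source) ** 2

# Lead robot plus two followers in a fixed triangular formation.
offsets = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5]])
lead = np.array([-4.0, 4.0])
for _ in range(200):
    positions = lead + offsets
    readings = np.array([field(p) for p in positions])
    lead = lead + 0.05 * estimate_gradient(positions, readings)  # gradient ascent toward the source
print(lead)   # ends up near the source (3, -2), offset by a formation-size-dependent bias
```
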
In this paper, we present a novel Newton-based extremum seeking controller for the solution of multivariable model-free optimization problems in static maps. Unlike existing asymptotic and fixed-time results in the literature, we present a scheme that achieves (practical) fixed-time convergence to a neighborhood of the optimal point, with a convergence time that is independent of the initial conditions and the Hessian of the cost function, and can therefore be arbitrarily assigned a priori by the designer via an appropriate choice of parameters in the algorithm. The extremum seeking dynamics exploit a class of fixed-time convergence properties recently established in the literature for a family of Newton flows, as well as averaging results for perturbed dynamical systems that are not necessarily Lipschitz continuous. The proposed extremum seeking algorithm is model-free and does not require any explicit knowledge of the gradient and Hessian of the cost function. Instead, real-time optimization with fixed-time convergence is achieved by using real-time measurements of the cost, which is perturbed by a suitable class of periodic excitation signals generated by a dynamic oscillator. Numerical examples illustrate the performance of the algorithm.
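
For intuition, the sketch below shows a single-variable Newton-based extremum seeking loop: the perturbed cost is demodulated with sin(wt) and cos(2wt) to estimate the first and second derivatives, and the update divides the gradient estimate by a safeguarded Hessian estimate instead of using a fixed gain. The toy map, gains, filters and saturation are illustrative assumptions; the paper's multivariable fixed-time Newton flow and its dynamic inverse-Hessian estimator are more elaborate.

```python
import numpy as np

def cost(theta):
    # Hypothetical static map to be minimized; the controller never uses this
    # expression, only the measured value of the perturbed cost.
    return 3.0 + 0.5 * (theta - 2.0) ** 2

def newton_extremum_seeking(theta0=-5.0, steps=20000, dt=1e-3):
    # Single-variable Newton-based extremum seeking.  The perturbed cost is
    # demodulated with sin(wt) for the gradient and with cos(2wt) for the second
    # derivative; the update divides the gradient estimate by a safeguarded
    # Hessian estimate.  All gains, filters and the saturation are illustrative.
    theta = theta0
    a, w, k = 0.2, 20.0, 0.5             # dither amplitude, dither frequency, update gain
    J_avg = cost(theta0)                 # washout state, initialized at the first measurement
    g_est, h_est = 0.0, 1.0              # gradient and Hessian estimates
    for i in range(steps):
        t = i * dt
        J = cost(theta + a * np.sin(w * t))
        J_avg += dt * 2.0 * (J - J_avg)                     # track the slowly varying mean
        Jt = J - J_avg                                      # washed-out cost
        g_est += dt * 2.0 * ((2.0 / a) * Jt * np.sin(w * t) - g_est)
        # the raw Hessian estimate is noisy while the gradient is large; the
        # paper's dynamic estimator is much better behaved in that regime
        h_est += dt * 0.5 * (-(8.0 / a ** 2) * Jt * np.cos(2.0 * w * t) - h_est)
        h_safe = max(h_est, 0.25)                           # keep the Hessian estimate positive
        theta -= dt * k * g_est / h_safe                    # Newton-like update
    return theta

print(newton_extremum_seeking())   # approaches the minimizer theta = 2 of the toy map
```
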