The proportional-integral-derivative (PID) controller is widely used in industry for process control applications. Fractional-order PID controllers are known to outperform their integer-order counterparts. In this paper, we propose a new technique for synthesizing fractional-order PID controllers based on peak-overshoot and rise-time specifications. Our approach is to construct an objective function whose optimization yields a possible solution to the design problem. This objective function is optimized using two popular bio-inspired stochastic search algorithms, namely Particle Swarm Optimization and Differential Evolution. With the help of a suitable example, the superiority of the designed fractional-order PID controller over an integer-order PID controller is demonstrated, and a comparative study of the efficacy of the two algorithms in solving the optimization problem is also presented.
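For illustration, the sketch below shows one plausible form of such an overshoot/rise-time objective, minimized here with SciPy's differential evolution as a stand-in for the PSO and DE solvers used in the paper. The plant 1/(s(s+1)), the 5% overshoot and 1 s rise-time targets, and the use of an integer-order PID loop (a fractional-order controller C(s) = Kp + Ki/s^lam + Kd*s^mu would additionally require a rational approximation of the fractional operators, e.g. Oustaloup's) are all illustrative assumptions, not details taken from the paper.

```python
# Sketch of an overshoot / rise-time objective, minimized with SciPy's
# differential evolution.  The plant, specifications, and the integer-order
# PID stand-in are assumptions for the demo, not the paper's setup.
import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

MP_SPEC, TR_SPEC = 0.05, 1.0            # assumed specs: 5% overshoot, 1 s rise time
t = np.linspace(0, 20, 2000)

def objective(gains):
    Kp, Ki, Kd = gains
    # closed loop of C(s) = Kp + Ki/s + Kd*s with plant G(s) = 1/(s(s+1))
    num = [Kd, Kp, Ki]
    den = [1, 1 + Kd, Kp, Ki]
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    Mp = max(y.max() - 1.0, 0.0)                          # peak overshoot
    tr = t[np.argmax(y >= 0.9)] - t[np.argmax(y >= 0.1)]  # 10-90% rise time
    return (Mp - MP_SPEC) ** 2 + (tr - TR_SPEC) ** 2      # deviation from the specs

result = differential_evolution(objective, bounds=[(0.1, 10)] * 3, seed=0, maxiter=50)
print(result.x, result.fun)
```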
This contribution deals with the identification of fractional-order dynamical systems. System identification, which refers to the estimation of process parameters, is a necessity in control theory. Real processes are usually of fractional order, as opposed to ideal integer-order models. A simple and elegant scheme for estimating the parameters of such a fractional-order process is proposed. The method employs fractional calculus theory to derive equations relating the parameters to be estimated, and then estimates the process parameters by solving these simultaneous equations. The simultaneous equations are generated and updated using the particle swarm optimization (PSO) technique, the fitness function being the sum of squared deviations from the actual set of observations. The data used for the calculations are intentionally corrupted to simulate real-life conditions. Results show that the proposed scheme offers a very high degree of accuracy even for erroneous data.
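A minimal sketch of the estimation idea is given below: particles are candidate parameter vectors, the fitness is the sum of squared deviations between noisy observations and the model's prediction, and a plain global-best PSO drives the search. The simple exponential "process" used here is only a stand-in for the paper's fractional-order model, and the PSO gains, swarm size, and noise level are assumed values.

```python
# PSO-based parameter estimation from intentionally corrupted data.
# The exponential step-response model is a stand-in, NOT the paper's
# fractional-order process.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 100)
true_params = np.array([2.0, 0.8])                       # assumed gain and pole for the demo
model = lambda p, t: p[0] * (1.0 - np.exp(-p[1] * t))    # stand-in process response
data = model(true_params, t) + 0.05 * rng.standard_normal(t.size)  # corrupted observations

def fitness(p):
    return np.sum((model(p, t) - data) ** 2)             # sum of squared deviations

# plain global-best PSO
n, dim, w, c1, c2 = 30, 2, 0.7, 1.5, 1.5
x = rng.uniform(0.1, 5.0, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pbest_f = np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmin()]
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = np.array([fitness(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[pbest_f.argmin()]
print(gbest)   # should be close to true_params despite the noise
```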
Of the many definitions of the fractional-order differintegral, the Grunwald-Letnikov definition is arguably the most important one. The necessity of this definition for the description and analysis of fractional-order systems cannot be overstated. Unfortunately, the fractional-order differential equation (FODE) describing such a system is, in its original form, highly sensitive to the effects of the random noise components that are inevitable in a natural environment. Thus, direct application of the definition to a real-life problem can yield erroneous results. In this article, we perform an in-depth mathematical analysis of the Grunwald-Letnikov definition and, to the best of our knowledge, we are the first to do so. Based on our analysis, we present a transformation scheme that allows generalized fractional-order systems to be analyzed accurately in the presence of significant random errors. Finally, through a simple experiment, we demonstrate the high degree of robustness to noise offered by this transformation and thus validate our scheme.
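For reference, the sketch below evaluates the standard Grunwald-Letnikov sum numerically (the original definition, not the transformed scheme proposed in the article) and compares clean and noise-corrupted samples of f(t) = t^2, for which D^{0.5} f at t = 1 equals 2/Gamma(2.5); the step size and noise level are assumed for the demonstration.

```python
# Grunwald-Letnikov estimate D^alpha f(t) ~ h^(-alpha) * sum_k w_k f(t - k h),
# with w_k = (-1)^k * binom(alpha, k) generated recursively.  The comparison
# below illustrates the noise sensitivity discussed in the abstract.
import math
import numpy as np

def gl_at_end(samples, alpha, h):
    """Grunwald-Letnikov estimate of D^alpha f at the last sample time."""
    n = len(samples)
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)    # w_k = (-1)^k * binom(alpha, k)
    return h ** (-alpha) * np.dot(w, samples[::-1])

h = 0.001
t = np.arange(0.0, 1.0 + h, h)
f = t ** 2
exact = 2.0 / math.gamma(2.5)                          # known value of D^0.5 t^2 at t = 1
noisy = f + 1e-3 * np.random.default_rng(1).standard_normal(t.size)
print(abs(gl_at_end(f, 0.5, h) - exact))               # small discretization error
print(abs(gl_at_end(noisy, 0.5, h) - exact))           # typically much larger: noise is amplified
```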
In this paper, we consider a stochastic distributed nonconvex optimization problem with the cost function being distributed over $n$ agents having access only to zeroth-order (ZO) information of the cost. This problem has various machine learning applications. As a solution, we propose two distributed ZO algorithms, in which at each iteration each agent samples the local stochastic ZO oracle at two points with an adaptive smoothing parameter. We show that the proposed algorithms achieve the linear speedup convergence rate $\mathcal{O}(\sqrt{p/(nT)})$ for smooth cost functions and an $\mathcal{O}(p/(nT))$ convergence rate when the global cost function additionally satisfies the Polyak--Lojasiewicz (P--L) condition, where $p$ and $T$ are the dimension of the decision variable and the total number of iterations, respectively. To the best of our knowledge, this is the first linear speedup result for distributed ZO algorithms, which enables performance to be improved systematically by adding more agents. We also show that the proposed algorithms converge linearly when considering deterministic centralized optimization problems under the P--L condition. We demonstrate through numerical experiments the efficiency of our algorithms on generating adversarial examples from deep neural networks in comparison with baseline and recently proposed centralized and distributed ZO algorithms.
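As background, the sketch below shows the standard single-agent two-point ZO gradient estimator that such algorithms build on; the distributed averaging over $n$ agents, the stochastic oracle, and the adaptive smoothing parameter of the proposed algorithms are omitted, and the quadratic cost, step size, and fixed smoothing parameter delta are assumed purely for illustration.

```python
# Single-agent two-point zeroth-order gradient descent: the estimator
# (p / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u, with u uniform on
# the unit sphere, approximates the gradient of a smoothed version of f.
import numpy as np

rng = np.random.default_rng(0)
p = 10                                   # dimension of the decision variable

def f(x):                                # any black-box cost; quadratic for the demo
    return 0.5 * np.dot(x, x)

def zo_gradient(f, x, delta):
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)               # direction uniform on the unit sphere
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

x = rng.standard_normal(p)
eta, delta = 0.1, 1e-3                   # assumed step size and smoothing parameter
for _ in range(2000):
    x -= eta * zo_gradient(f, x, delta)  # ZO gradient descent step
print(f(x))                              # should be close to the minimum value 0
```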
To further understand the underlying mechanisms of various reinforcement learning (RL) algorithms, and to better use optimization theory to make further progress in RL, many researchers have begun to revisit the linear-quadratic regulator (LQR) problem, whose setting is simple yet captures the key characteristics of RL. Inspired by this, this work is concerned with the model-free design of a stochastic LQR controller for linear systems subject to Gaussian noise, from the perspectives of both RL and primal-dual optimization. From the RL perspective, we first develop a new model-free off-policy policy iteration (MF-OPPI) algorithm, in which the sampled data are repeatedly reused for policy updates, alleviating the data-hungry problem to some extent. We then provide a rigorous convergence analysis by showing that the involved iterations are equivalent to those of the classical policy iteration (PI) algorithm. From the optimization perspective, we first reformulate the stochastic LQR problem at hand as a constrained non-convex optimization problem, which is shown to have strong duality. Then, to solve this non-convex optimization problem, we propose a model-based primal-dual (MB-PD) algorithm based on the properties of the resulting Karush-Kuhn-Tucker (KKT) conditions. We also give a model-free implementation of the MB-PD algorithm by solving a transformed dual feasibility condition. More importantly, we show that the dual and primal update steps in the MB-PD algorithm can be interpreted as the policy evaluation and policy improvement steps of the PI algorithm, respectively. Finally, we provide a simulation example to illustrate the performance of the proposed algorithms.
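To make the policy-evaluation / policy-improvement pattern referred to above concrete, the sketch below runs classical model-based policy iteration for a discrete-time LQR problem; the system matrices, weights, and initial gain are illustrative assumptions, and the paper's MF-OPPI and MB-PD algorithms are model-free and primal-dual counterparts of this basic iteration, not the code shown here.

```python
# Classical policy iteration for discrete-time LQR with u = -K x:
#   evaluation:  P_K solves  P = Q + K^T R K + (A - B K)^T P (A - B K)
#   improvement: K <- (R + B^T P B)^{-1} B^T P A
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # illustrative double-integrator-like system
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)

K = np.array([[1.0, 1.0]])               # any stabilizing initial gain
for _ in range(20):
    A_K = A - B @ K
    # policy evaluation via a discrete Lyapunov equation
    P = solve_discrete_lyapunov(A_K.T, Q + K.T @ R @ K)
    # policy improvement
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
print(K)                                 # converges toward the optimal LQR gain
```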