
Convex Optimization In Identification Of Stable Non-Linear State Space Models

Published by: Mark Tobenkin
Publication date: 2010
Language: English





A new framework for nonlinear system identification is presented in terms of optimal fitting of stable nonlinear state space equations to input/output/state data, with a performance objective defined as a measure of robustness of the simulation error with respect to equation errors. Basic definitions and analytical results are presented. The utility of the method is illustrated on a simple simulation example as well as experimental recordings from a live neuron.
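As a rough illustration of the underlying idea (not the paper's exact robust identification error formulation), the following sketch fits a stable linear state-space model to input/state data by minimizing equation error under a convex stability constraint; the model class, synthetic data, and the spectral-norm constraint are illustrative assumptions.

    # Illustrative sketch only: fit a stable linear model x[t+1] = A x[t] + B u[t]
    # to measured input/state data by minimizing equation error subject to a
    # convex stability (contraction) constraint.  A simplified stand-in for the
    # robust identification error framework, not the paper's formulation.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(0)
    n, m, T = 3, 1, 200

    # Synthetic data from an unknown stable system (states assumed measured).
    A_true = np.diag([0.9, 0.7, 0.5])
    B_true = rng.standard_normal((n, m))
    u = rng.standard_normal((T, m))
    x = np.zeros((T + 1, n))
    for t in range(T):
        x[t + 1] = A_true @ x[t] + B_true @ u[t] + 0.01 * rng.standard_normal(n)

    A = cp.Variable((n, n))
    B = cp.Variable((n, m))
    eq_err = cp.sum_squares(x[1:].T - A @ x[:-1].T - B @ u.T)   # equation error
    stability = [cp.sigma_max(A) <= 0.99]   # convex sufficient condition for stability
    cp.Problem(cp.Minimize(eq_err), stability).solve()
    print("estimated A:\n", A.value)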




Read also

This paper introduces new techniques for using convex optimization to fit input-output data to a class of stable nonlinear dynamical models. We present an algorithm that guarantees consistent estimates of models in this class when a small set of repeated experiments with suitably independent measurement noise is available. Stability of the estimated models is guaranteed without any assumptions on the input-output data. We first present a convex optimization scheme for identifying stable state-space models from empirical moments. Next, we provide a method for using repeated experiments to remove the effect of noise on these moment and model estimates. The technique is demonstrated on a simple simulated example.
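To make the role of repeated experiments concrete, here is a small hedged sketch (the signal, noise model, and moment chosen are assumptions, not the paper's construction): a cross-moment taken between two independently noisy repetitions of the same experiment avoids the noise-variance bias that a single-record moment carries.

    # Illustrative sketch: measurement noise that is independent across
    # repetitions averages out of cross-moments, while a single noisy record
    # multiplied by itself picks up the noise variance as bias.
    import numpy as np

    rng = np.random.default_rng(1)
    T = 100_000
    signal = np.sin(0.01 * np.arange(T))                  # underlying noise-free signal
    rep1 = signal + 0.5 * rng.standard_normal(T)          # repetition 1 (noisy)
    rep2 = signal + 0.5 * rng.standard_normal(T)          # repetition 2, independent noise

    print("true second moment  :", np.mean(signal * signal))
    print("single-record moment:", np.mean(rep1 * rep1))  # biased upward by ~0.25
    print("cross-repetition    :", np.mean(rep1 * rep2))  # noise terms cancel on average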
We propose a convex optimization procedure for black-box identification of nonlinear state-space models for systems that exhibit stable limit cycles (unforced periodic solutions). It extends the robust identification error framework in which a convex upper bound on simulation error is optimized to fit rational polynomial models with a strong stability guarantee. In this work, we relax the stability constraint using the concepts of transverse dynamics and orbital stability, thus allowing systems with autonomous oscillations to be identified. The resulting optimization problem is convex, and can be formulated as a semidefinite program. A simulation-error bound is proved without assuming that the true system is in the model class, or that the number of measurements goes to infinity. Conditions which guarantee existence of a unique limit cycle of the model are proved and related to the model class that we search over. The method is illustrated by identifying a high-fidelity model from experimental recordings of a live rat hippocampal neuron in culture.
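For readers unfamiliar with the model class, a stable limit cycle is an isolated periodic orbit that nearby trajectories converge to. The Van der Pol oscillator below is a standard textbook example of such behaviour; it is an assumed stand-in for illustration, not one of the paper's systems.

    # Illustrative only: simulate the Van der Pol oscillator, a classic system
    # with a stable limit cycle (an unforced periodic solution that nearby
    # trajectories converge to).  Not taken from the paper.
    import numpy as np
    from scipy.integrate import solve_ivp

    mu = 1.0
    def vdp(t, z):
        x, y = z
        return [y, mu * (1.0 - x**2) * y - x]

    sol = solve_ivp(vdp, (0.0, 50.0), [0.1, 0.0], max_step=0.01)
    # After the initial transient the trajectory settles onto the limit cycle.
    print("approximate limit-cycle amplitude:", np.max(np.abs(sol.y[0, -2000:])))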
The identification of structured state-space models has been intensively studied for a long time but has still not been adequately addressed. The main challenge is that the underlying estimation problem is a non-convex (bilinear) optimization problem. This paper develops an identification method that aims to find the globally optimal solution at mild computational cost. The key idea is to transform the bilinear estimation problem into a rank-constrained optimization problem and, further, into a difference-of-convex programming (DCP) problem. The initial condition for the DCP problem is obtained by solving its convex part, which happens to be a nuclear-norm regularized optimization problem. Since nuclear-norm regularization is the tightest convex relaxation of the low-rank constrained estimation problem, the resulting initial condition is of high quality and gives the DCP problem a good starting point. The DCP problem is then solved by sequential convex programming. Finally, numerical examples illustrate the effectiveness of the developed identification algorithm.
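As a toy illustration of the convex initialization step only (dimensions, data, and the regularization weight are assumptions), a nuclear-norm regularized least-squares problem serves as the convex surrogate of a rank-constrained estimate:

    # Illustrative sketch of the initialization: nuclear-norm regularized
    # least squares as a convex surrogate for a low-rank constrained estimate.
    # Dimensions, data, and the weight lam are assumptions.
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(2)
    p, q, r = 8, 8, 2
    M_true = rng.standard_normal((p, r)) @ rng.standard_normal((r, q))   # rank-2 target
    M_noisy = M_true + 0.1 * rng.standard_normal((p, q))

    M = cp.Variable((p, q))
    lam = 1.0
    objective = cp.Minimize(cp.sum_squares(M - M_noisy) + lam * cp.norm(M, "nuc"))
    cp.Problem(objective).solve()
    print("rank of regularized estimate:", np.linalg.matrix_rank(M.value, tol=1e-3))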
Gray-box identification is prevalent in modeling physical and networked systems. However, due to the non-convex nature of the gray-box identification problem, good initial parameter estimates are crucial for successful application. In this paper, a new identification method is proposed that exploits the low-rank, structured Hankel matrix of the impulse response. The identification problem is recast as a difference-of-convex programming problem, which is then solved by the sequential convex programming approach, with the initialization obtained by nuclear-norm optimization. The presented method aims to achieve the best possible impulse-response fit while not requiring additional (non-convex) conditions to ensure non-singularity of the similarity transformation relating the given state-space matrices to the gray-box parameterized ones. This overcomes a persistent shortcoming in a number of recent contributions on this topic, and the new method can be applied to structured state-space realization even if the involved system parameters are unidentifiable. The method can be used both for directly estimating the gray-box parameters and for providing initial parameter estimates for further iterative search in a conventional gray-box identification setup.
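The low-rank Hankel structure mentioned above can be checked in a few lines; the example system below is an assumption chosen only for illustration, and the well-known fact being demonstrated is that the Hankel matrix built from the impulse-response (Markov) parameters of a minimal order-n linear system has rank n.

    # Illustrative check: the Hankel matrix of the Markov parameters C A^k B
    # of a minimal order-2 state-space system has rank 2.  The system matrices
    # here are assumptions chosen for illustration.
    import numpy as np
    from scipy.linalg import hankel

    A = np.array([[0.8, 0.2],
                  [0.0, 0.5]])
    B = np.array([[1.0], [1.0]])
    C = np.array([[1.0, 0.0]])

    h = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(20)]
    H = hankel(h[:10], h[9:])            # 10 x 11 Hankel matrix, H[i, j] = h[i + j]
    print("Hankel rank:", np.linalg.matrix_rank(H, tol=1e-9))   # -> 2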
In this paper we propose several adaptive gradient methods for stochastic optimization. Unlike AdaGrad-type methods, our algorithms are based on Armijo-type line search and simultaneously adapt to the unknown Lipschitz constant of the gradient and to the variance of the stochastic gradient approximation. We consider accelerated and non-accelerated gradient descent for convex problems and gradient descent for non-convex problems. In the experiments we demonstrate the superiority of our methods over existing adaptive methods such as AdaGrad and Adam.
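The sketch below shows the core mechanism in its simplest deterministic form (the test function, starting point, and constants are assumptions; the paper's stochastic and accelerated variants are more involved): an Armijo-type backtracking test adapts a running estimate of the unknown Lipschitz constant.

    # Illustrative sketch: gradient descent with an Armijo-type backtracking
    # test that adapts an estimate L of the unknown Lipschitz constant of the
    # gradient.  Deterministic toy version, not the paper's stochastic or
    # accelerated algorithms.
    import numpy as np

    def armijo_gd(f, grad, x0, L0=1.0, max_iter=200, tol=1e-8):
        x, L = np.asarray(x0, dtype=float), L0
        for _ in range(max_iter):
            g = grad(x)
            if np.linalg.norm(g) < tol:
                break
            L = max(L / 2.0, 1e-12)                       # optimistically shrink the estimate
            while f(x - g / L) > f(x) - g @ g / (2.0 * L):
                L *= 2.0                                  # backtrack until sufficient decrease holds
            x = x - g / L
        return x

    # Usage on a simple quadratic f(x) = 0.5 * x^T Q x with Lipschitz constant 10
    Q = np.diag([1.0, 10.0])
    f = lambda x: 0.5 * x @ Q @ x
    grad = lambda x: Q @ x
    print(armijo_gd(f, grad, np.array([5.0, -3.0])))      # converges to ~[0, 0]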