
Minimum Structural Sensor Placement for Switched Linear Time-Invariant Systems and Unknown Inputs

Published by Guilherme Ramos
Publication date: 2021
Language: English





In this paper, we study the structural state and input observability of continuous-time switched linear time-invariant systems with unknown inputs. First, we provide necessary and sufficient conditions for structural state and input observability that can be verified efficiently in $O((m(n+p))^2)$ time, where $n$ is the number of state variables, $p$ is the number of unknown inputs, and $m$ is the number of modes. Moreover, we address the minimum sensor placement problem for these systems by adopting a feed-forward analysis and by providing an algorithm with a computational complexity of $O((m(n+p)+\alpha)^{2.373})$, where $\alpha$ is the number of target strongly connected components of the system's digraph representation. Lastly, we explore different assumptions on both the system and unknown-input (latent space) dynamics that add more structure to the problem and thereby enable us to derive algorithms with lower computational complexity, which are suitable for implementation in large-scale systems.
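To make the graph-theoretic flavor of such conditions concrete, the following Python sketch applies the classical two-part structural observability test (output connectivity plus a generic-rank check via bipartite matching) to the union of the mode digraphs. It is an illustration under simplifying assumptions, not the paper's exact conditions for the switched case with unknown inputs; the function name and the encoding of the sparsity patterns are hypothetical.

```python
# A sketch of the classical structural observability test on the union
# digraph of a switched system. Sparsity patterns, names, and the exact
# condition are simplifying assumptions, not the paper's conditions.
import networkx as nx

def structurally_observable(mode_patterns, output_pattern, n):
    """mode_patterns: one set of (i, j) pairs per mode, meaning state x_j
    influences state x_i (A_m[i, j] is a free parameter).
    output_pattern: set of (k, j) pairs, meaning sensor y_k reads x_j."""
    # Union of the mode digraphs, plus edges into the output vertices.
    G = nx.DiGraph()
    G.add_nodes_from(f"x{j}" for j in range(n))
    for pattern in mode_patterns:
        G.add_edges_from((f"x{j}", f"x{i}") for (i, j) in pattern)
    for (k, j) in output_pattern:
        G.add_edge(f"x{j}", f"y{k}")

    # Condition 1: every state vertex reaches some output vertex.
    outputs = {f"y{k}" for (k, _) in output_pattern}
    if any(not outputs & nx.descendants(G, f"x{j}") for j in range(n)):
        return False

    # Condition 2: generic rank of the stacked matrix [A_1; ...; A_m; C]
    # equals n, computed as a maximum bipartite matching (rows vs. columns).
    B = nx.Graph()
    cols = [("col", j) for j in range(n)]
    B.add_nodes_from(cols)
    for m, pattern in enumerate(mode_patterns):
        B.add_edges_from((("col", j), ("row", m, i)) for (i, j) in pattern)
    B.add_edges_from((("col", j), ("row", "y", k)) for (k, j) in output_pattern)
    match = nx.bipartite.maximum_matching(B, top_nodes=cols)
    return sum(1 for c in cols if c in match) == n
```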




Read also

We study the synthesis of mode switching protocols for a class of discrete-time switched linear systems in which the mode jumps are governed by Markov decision processes (MDPs). We call such systems MDP-JLS for brevity. Each state of the MDP corresponds to a mode in the switched system. The probabilistic state transitions in the MDP represent the mode transitions. We focus on finding a policy that selects the switching actions at each mode such that the switched system that follows these actions is guaranteed to be stable. Given a policy in the MDP, the considered MDP-JLS reduces to a Markov jump linear system (MJLS). We consider both mean-square stability and stability with probability one. For mean-square stability, we leverage existing stability conditions for MJLSs and propose efficient semidefinite programming formulations to find a stabilizing policy in the MDP. For stability with probability one, we derive new sufficient conditions and compute a stabilizing policy using linear programming. We also extend the policy synthesis results to MDP-JLS with uncertain mode transition probabilities.
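As a concrete illustration of the mean-square stability part, the sketch below checks the standard coupled-LMI condition for an MJLS once a policy (and hence the mode transition matrix) is fixed; it does not synthesize the policy itself, which the paper does via semidefinite programming over the MDP. The function name and the tolerance are assumptions.

```python
# Feasibility test for the standard MJLS mean-square stability LMIs under a
# fixed policy (i.e., a fixed mode transition matrix). Names are assumptions.
import cvxpy as cp
import numpy as np

def mjls_mean_square_stable(A_modes, P_trans, eps=1e-6):
    """Checks existence of P_i > 0 with
    A_i' (sum_j p_ij P_j) A_i - P_i < 0 for every mode i."""
    n, m = A_modes[0].shape[0], len(A_modes)
    P = [cp.Variable((n, n), symmetric=True) for _ in range(m)]
    I = np.eye(n)
    constraints = []
    for i in range(m):
        Ei = sum(P_trans[i][j] * P[j] for j in range(m))  # one-step expectation
        constraints += [P[i] >> eps * I,
                        A_modes[i].T @ Ei @ A_modes[i] - P[i] << -eps * I]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    return problem.status in ("optimal", "optimal_inaccurate")
```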
This paper proposes a joint input and state dynamic estimation scheme for power networks in microgrids and active distribution systems with unknown inputs. Conventional dynamic state estimation of power networks in the transmission system relies on forecasting methods to obtain the state-transition model of the state variables. However, under the highly dynamic operating conditions of microgrids and active distribution networks, this approach may become ineffective because forecasting accuracy is not guaranteed. To overcome this drawback, this paper employs a power network model derived from the physical equations of branch currents. Specifically, the power network model is a linear state-space model in which the state vector consists of branch currents and the input vector consists of bus voltages. To estimate both state and input variables, we propose linear Kalman-based dynamic filtering algorithms in batch-mode regression form, considering the cross-correlation between states and inputs. For scalability of the proposed scheme, a distributed implementation is also presented. Complementarily, the predicted state and input vectors are leveraged for bad data detection. Results on a 13-bus microgrid system in the real-time Opal-RT platform demonstrate the effectiveness of the proposed method in comparison with traditional weighted least squares and tracking state estimation methods.
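The paper's estimator is a batch-mode regression filter that accounts for state-input cross-correlation; as a simpler, commonly used stand-in, the following sketch augments the state with the unknown input, modeled as a random walk, and runs one step of a standard Kalman filter on the augmented model. All names and the noise-covariance arguments are illustrative assumptions.

```python
# One step of an augmented-state Kalman filter that estimates the unknown
# input jointly with the state by modeling it as a random walk. This is a
# common stand-in, not the paper's batch-mode regression filter; Qx, Qu, R
# are assumed process/input/measurement noise covariances.
import numpy as np

def augmented_kf_step(xz, P, y, A, B, C, D, Qx, Qu, R):
    """xz, P: augmented estimate [state; input] and its covariance."""
    n, p = A.shape[0], B.shape[1]
    F = np.block([[A, B],
                  [np.zeros((p, n)), np.eye(p)]])   # input held as random walk
    H = np.hstack([C, D])
    Q = np.block([[Qx, np.zeros((n, p))],
                  [np.zeros((p, n)), Qu]])
    # Predict.
    xz_pred = F @ xz
    P_pred = F @ P @ F.T + Q
    # Update with measurement y.
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    xz_new = xz_pred + K @ (y - H @ xz_pred)
    P_new = (np.eye(n + p) - K @ H) @ P_pred
    return xz_new, P_new
```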
In this paper, we first propose a method that can efficiently compute the maximal robust controlled invariant set for discrete-time linear systems with pure delay in the input. The key to this method is to construct an auxiliary linear system (without delay) with the same state-space dimension as the original system and to relate the maximal invariant set of the auxiliary system to that of the original system. When the system is subject to disturbances, guaranteeing safety is harder for systems with input delays, and the ability to incorporate any additional information about the disturbance becomes more critical. Motivated by this observation, in the second part of the paper, we generalize the proposed method to take into account additional preview information on the disturbances, while maintaining computational efficiency. Compared with the naive approach of constructing a higher-dimensional system by appending the state space with the delayed inputs and previewed disturbances, the proposed approach is demonstrated to scale much better with increasing delay time.
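A minimal sketch of the kind of auxiliary, delay-free system the first part describes: for $x_{k+1} = A x_k + B u_{k-d}$, the $d$-step prediction $z_k = x_{k+d}$ is computable from the current state and the last $d$ inputs, and it satisfies $z_{k+1} = A z_k + B u_k$, a delay-free system of the same dimension. The paper's construction for the disturbance and preview case is more involved; this function is illustrative.

```python
# Lifting a pure-input-delay system x_{k+1} = A x_k + B u_{k-d} to a
# delay-free auxiliary system of the same dimension via the d-step
# prediction z_k = x_{k+d}; then z_{k+1} = A z_k + B u_k. Illustrative only.
import numpy as np

def lift_to_delay_free(A, B, d, x_k, u_past):
    """u_past = [u_{k-d}, ..., u_{k-1}], the d inputs already in the pipeline."""
    z = np.linalg.matrix_power(A, d) @ x_k
    for i, u in enumerate(u_past):  # u_{k-d+i} takes effect at time k+i
        z += np.linalg.matrix_power(A, d - 1 - i) @ B @ u
    return z
```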
Iterative trajectory optimization techniques for non-linear dynamical systems are among the most powerful and sample-efficient methods of model-based reinforcement learning and approximate optimal control. By leveraging time-variant local linear-quadratic approximations of system dynamics and reward, such methods can find both a target-optimal trajectory and time-variant optimal feedback controllers. However, the local linear-quadratic assumptions are a major source of optimization bias that leads to catastrophic greedy updates, raising the issue of proper regularization. Moreover, the approximate models' disregard for the physical state-action limits of the system further aggravates the problem, as the optimization moves towards unreachable areas of the state-action space. In this paper, we address the issue of constrained systems in the scenario of online-fitted stochastic linear dynamics. We propose modeling state and action physical limits as probabilistic chance constraints, linear in both state and action, and introduce a new trajectory optimization technique that integrates these probabilistic constraints by optimizing a relaxed quadratic program. Our empirical evaluations show a significant improvement in learning robustness, which enables our approach to perform more effective updates and avoid the premature convergence observed in state-of-the-art algorithms.
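For intuition on how a linear chance constraint can enter a quadratic program, the sketch below applies the classical Gaussian tightening: $\Pr(a^\top x \le b) \ge 1-\epsilon$ for $x \sim \mathcal{N}(\mu, \Sigma)$ holds iff $a^\top \mu + \Phi^{-1}(1-\epsilon)\sqrt{a^\top \Sigma a} \le b$. This is a standard building block, not the paper's specific relaxed-QP formulation; names are illustrative.

```python
# Classical Gaussian tightening of a linear chance constraint: for
# x ~ N(mu, Sigma), P(a'x <= b) >= 1 - eps iff
# a'mu + Phi^{-1}(1 - eps) * sqrt(a' Sigma a) <= b. Names are illustrative.
import numpy as np
from scipy.stats import norm

def chance_constraint_satisfied(a, b, mu, Sigma, eps):
    margin = norm.ppf(1 - eps) * np.sqrt(a @ Sigma @ a)
    return a @ mu + margin <= b
```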
State estimation is critical to control systems, especially when the states cannot be directly measured. This paper presents an approximate optimal filter, which enables the use of a policy iteration technique to obtain the steady-state gain in linear Gaussian time-invariant systems. This design transforms the optimal filtering problem with minimum mean square error into an optimal control problem, called the Approximate Optimal Filtering (AOF) problem. The equivalence holds under certain conditions on the initial state distributions and policy formats, in which the system state is the estimation error, the control input is the filter gain, and the control objective function is the accumulated estimation error. We present a policy iteration algorithm to solve the AOF problem in steady state. The approximate filter is finally evaluated on a classic vehicle state estimation problem. The results show that the policy converges to the steady-state Kalman gain, with accuracy within 2%.
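A minimal sketch of the policy-iteration idea in this filtering setting, assuming $A$ is Schur stable so the zero-gain initial policy is admissible: policy evaluation solves a discrete Lyapunov equation for the steady-state error covariance under the current gain, and policy improvement recomputes the gain; the iteration converges to the steady-state Kalman (predictor) gain. Function names are assumptions.

```python
# Policy iteration on the filter gain (predictor form), assuming A is Schur
# stable so the zero-gain initial policy is admissible. Converges to the
# steady-state Kalman predictor gain. Names are assumptions.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def policy_iteration_kalman_gain(A, C, Q, R, iters=100):
    n, q = A.shape[0], C.shape[0]
    K = np.zeros((n, q))
    for _ in range(iters):
        F = A - K @ C
        # Policy evaluation: steady-state error covariance under gain K,
        # solving P = F P F' + Q + K R K'.
        P = solve_discrete_lyapunov(F, Q + K @ R @ K.T)
        # Policy improvement.
        K = A @ P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return K, P
```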