
Performance Bounds for Neural Network Estimators: Applications in Fault Detection

Added by Navid Hashemi
Publication date: 2021
Language: English





We exploit recent results on quantifying the robustness of neural networks to input variations to construct and tune a model-based anomaly detector, where the data-driven estimator model is provided by an autoregressive neural network. In tuning, we specifically provide upper bounds on the rate of false alarms expected under normal operation. To accomplish this, we extend the theory to allow the propagation of multiple confidence ellipsoids through a neural network. The ellipsoid that bounds the output of the neural network under input variation informs the sensitivity, and thus the threshold tuning, of the detector. We demonstrate this approach on both a linear and a nonlinear dynamical system.
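
As a rough illustration of the core building block, the sketch below (plain NumPy, with invented dimensions and weights) propagates a confidence ellipsoid through a single affine layer, where the image is again an ellipsoid, and compares a residual against the resulting output bound. Handling nonlinear activations and the autoregressive structure requires the robustness results the abstract builds on; none of the names or values below come from the paper.

```python
# Minimal sketch (assumed setup, not the paper's implementation): push a
# confidence ellipsoid through one affine layer y = W x + b and use the
# resulting output ellipsoid as a detection threshold.
import numpy as np

def propagate_ellipsoid(center, shape, W, b):
    """Image of {x : (x - c)^T shape^{-1} (x - c) <= 1} under x -> W x + b.
    For an affine map the image is again an ellipsoid, with center W c + b
    and shape matrix W shape W^T."""
    return W @ center + b, W @ shape @ W.T

# Toy input ellipsoid (e.g., a confidence region induced by sensor noise).
c_in, P_in = np.zeros(3), 0.1 * np.eye(3)

# One affine layer of a hypothetical estimator network.
rng = np.random.default_rng(0)
W, b = rng.standard_normal((2, 3)), rng.standard_normal(2)
c_out, P_out = propagate_ellipsoid(c_in, P_in, W, b)

def is_anomalous(residual, center, shape, threshold=1.0):
    """Flag residuals whose Mahalanobis distance leaves the output ellipsoid."""
    d = residual - center
    return d @ np.linalg.solve(shape, d) > threshold

print(is_anomalous(c_out + 0.01, c_out, P_out))   # inside the bound -> False
```
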

Related research

With the rise of smartphones and the Internet of Things, data is increasingly generated at the edge on local, personal devices. For privacy, latency, and energy-saving reasons, this shift is pushing machine learning algorithms towards decentralisation, with the data and algorithms stored, and even trained, locally on devices. The device hardware becomes the main bottleneck for model capability in this setting, creating a need for slimmed-down, more efficient neural networks. Neural network pruning and quantisation are two methods that have been developed for this purpose, with both approaches demonstrating impressive results in reducing the computational cost without significantly sacrificing model performance. However, the understanding behind these reduction methods remains underdeveloped. To address this issue, a semi-definite program is introduced to bound the worst-case error caused by pruning or quantising a neural network. The method can be applied to many neural network structures and nonlinear activation functions, with the bounds holding robustly for all inputs in specified sets. It is hoped that the computed bounds will provide certainty about the performance of these algorithms when deployed on safety-critical systems.
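
The certified bound in this work comes from a semi-definite program; as a much weaker sanity check, the sketch below (toy network and pruning rule, all invented) estimates an empirical lower bound on the worst-case error by sampling the input set and comparing the original and pruned networks. It illustrates the quantity the SDP bounds, not how the certified bound is computed.

```python
# Hedged sketch: an empirical *lower* bound on the worst-case pruning error,
# obtained by sampling the input set and comparing the original and pruned
# networks. The paper's semi-definite program instead gives a certified
# *upper* bound; this is only a cheap sanity check on a toy ReLU network.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    return weights[-1] @ x + biases[-1]

rng = np.random.default_rng(1)
weights = [rng.standard_normal((8, 4)), rng.standard_normal((1, 8))]
biases = [rng.standard_normal(8), rng.standard_normal(1)]

# Magnitude pruning: zero out the smallest 50% of weights in each layer.
pruned = [np.where(np.abs(W) >= np.quantile(np.abs(W), 0.5), W, 0.0)
          for W in weights]

# Sample the input set {x : ||x||_inf <= 1} and record the largest deviation seen.
samples = rng.uniform(-1.0, 1.0, size=(10_000, 4))
errors = [np.abs(forward(x, weights, biases) - forward(x, pruned, biases)).max()
          for x in samples]
print(f"empirical worst-case error over samples: {max(errors):.4f}")
```
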
This paper deals with the fault detection and isolation (FDI) problem for linear structured systems in which the system matrices are given by zero/nonzero/arbitrary pattern matrices. In this paper, we follow a geometric approach to verify solvability of the FDI problem for such systems. To do so, we first develop a necessary and sufficient condition under which the FDI problem for a given particular linear time-invariant system is solvable. Next, we establish a necessary condition for solvability of the FDI problem for linear structured systems. In addition, we develop a sufficient algebraic condition for solvability of the FDI problem in terms of a rank test on an associated pattern matrix. To illustrate that this condition is not necessary, we provide a counterexample in which the FDI problem is solvable while the condition is not satisfied. Finally, we develop a graph-theoretic condition for the full rank property of a given pattern matrix, which leads to a graph-theoretic condition for solvability of the FDI problem.
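
As a rough numerical picture of a rank test on a pattern matrix (a hedged sketch, not necessarily the paper's procedure): keep the zero entries at zero, give the nonzero and arbitrary entries random values, and compute the numerical rank; with probability one such a realization attains the generic rank of the pattern.

```python
# Hedged sketch of a generic-rank check for a zero/nonzero/arbitrary pattern
# matrix. Entries marked "0" stay zero, while "*" (nonzero) and "?" (arbitrary)
# entries are replaced by random reals; generically the numerical rank of the
# realization equals the generic rank of the pattern.
import numpy as np

def generic_rank(pattern, trials=5, seed=2):
    rng = np.random.default_rng(seed)
    best = 0
    for _ in range(trials):
        A = np.zeros((len(pattern), len(pattern[0])))
        for i, row in enumerate(pattern):
            for j, sym in enumerate(row):
                if sym in ("*", "?"):            # nonzero or arbitrary entry
                    A[i, j] = rng.uniform(0.5, 2.0) * rng.choice([-1, 1])
        best = max(best, np.linalg.matrix_rank(A))
    return best

# Toy 3x3 pattern: full generic rank expected.
pattern = [["*", "0", "?"],
           ["0", "*", "0"],
           ["?", "0", "*"]]
print(generic_rank(pattern))   # -> 3
```
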
Pio Ong, Jorge Cortes (2021)
This paper proposes a novel framework for resource-aware control design termed performance-barrier-based triggering. Given a feedback policy, along with a Lyapunov function certificate that guarantees its correctness, we examine the problem of designing its digital implementation through event-triggered control while ensuring that a prescribed performance is met and triggers occur as sparingly as possible. Our methodology takes into account the performance residual, i.e., how well the system is doing with respect to the prescribed performance. Inspired by the notion of a control barrier function, the trigger design allows the certificate to deviate from monotonic decrease, with leeway specified as an increasing function of the performance residual, resulting in greater flexibility in prescribing update times. We study different types of performance specifications, with particular attention to quantifying the benefits of the proposed approach in the exponential case. We build on this to design intrinsically Zeno-free distributed triggers for network systems. A comparison of event-triggered approaches in a vehicle platooning problem shows how the proposed design meets the prescribed performance with a significantly lower number of controller updates.
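
A toy version of the triggering idea, with invented scalar dynamics and gains rather than the paper's design: the sampled control is held constant and refreshed only when the Lyapunov certificate reaches the prescribed performance envelope, i.e. when the performance residual is exhausted.

```python
# Minimal sketch of a performance-barrier-style trigger (illustrative values
# only): the zero-order-hold control is refreshed when the certificate
# V(x) = x^2 touches the prescribed envelope S(t) = V(x0) * exp(-r t).
import math

a, k = 1.0, 3.0          # open-loop pole and feedback gain: x' = a x + u, u = -k x_s
r = 1.0                  # prescribed exponential decay rate of the envelope
dt, T = 1e-3, 5.0

x, x_sampled, triggers = 1.0, 1.0, 0
V0, t = x * x, 0.0
while t < T:
    S = V0 * math.exp(-r * t)            # performance envelope
    if x * x >= S:                        # barrier reached: refresh the control
        x_sampled = x
        triggers += 1
    u = -k * x_sampled                    # zero-order-hold control
    x += dt * (a * x + u)                 # forward-Euler step
    t += dt

print(f"controller updates: {triggers} (vs {round(T / dt)} periodic steps)")
```

Because the certificate is allowed to grow between updates as long as it stays under the envelope, the control in this toy run is refreshed only a handful of times over the horizon instead of at every simulation step.
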
The rapid growth of distributed energy resources potentially increases power grid instability. One promising strategy is to employ data in power grids to respond efficiently to abnormal events (e.g., faults) by detection and location. Unfortunately, most existing works lack physical interpretation and are vulnerable to practical challenges: sparse observations, insufficient labeled datasets, and stochastic environments. We propose a two-stage physics-informed graph learning framework to handle these challenges when locating faults. Stage I focuses on informing a graph neural network (GNN) with the geometrical structure of power grids; Stage II employs the physical similarity of labeled and unlabeled data samples to improve the location accuracy. We provide a random-walk-based underpinning for the design of our GNNs to address the challenge of sparse observation and increase the probability of correct prediction. We compare our approach with three baselines on the IEEE 123-node benchmark system, showing that the proposed method outperforms the others by significant margins, especially when label rates are low. We also validate the robustness of our algorithms to out-of-distribution data (ODD) due to topology changes and load variations. Additionally, we adapt our graph learning framework to the IEEE 37-node test feeder and show high location performance with the proposed training strategy.
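
For intuition about Stage I, the sketch below (assumed architecture and sizes, not the paper's model) runs a single GCN-style propagation step in which the aggregation pattern comes from the feeder's line list, so the grid's geometrical structure shapes the computation; a fault-location model would stack such layers, add the random-walk-motivated design and the Stage II similarity training, and end with a per-node classifier.

```python
# Hedged sketch: one graph-convolution step whose aggregation weights come
# from the power-grid adjacency matrix (toy 4-bus feeder, random weights).
import numpy as np

def gcn_layer(A, H, W):
    """H' = relu(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)

# Toy feeder topology: line list -> adjacency matrix.
lines = [(0, 1), (1, 2), (1, 3)]
A = np.zeros((4, 4))
for i, j in lines:
    A[i, j] = A[j, i] = 1.0

rng = np.random.default_rng(3)
H = rng.standard_normal((4, 2))   # per-bus measurements (e.g., |V|, angle)
W = rng.standard_normal((2, 8))   # learnable layer weights

print(gcn_layer(A, H, W).shape)   # -> (4, 8) node embeddings
```
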
Even though model predictive control (MPC) is currently the main algorithm for insulin control in the artificial pancreas (AP), it usually requires complex online optimizations, which are infeasible for resource-constrained medical devices. MPC also typically relies on state estimation, an error-prone process. In this paper, we introduce a novel approach to AP control that uses Imitation Learning to synthesize neural-network insulin policies from MPC-computed demonstrations. Such policies are computationally efficient and, by instrumenting MPC at training time with full state information, they can directly map measurements into optimal therapy decisions, thus bypassing state estimation. We apply Bayesian inference via Monte Carlo Dropout to learn policies, which allows us to quantify prediction uncertainty and thereby derive safer therapy decisions. We show that our control policies trained under a specific patient model readily generalize (in terms of model parameters and disturbance distributions) to patient cohorts, consistently outperforming traditional MPC with state estimation.
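
The uncertainty-quantification step can be pictured with a small stand-in network: keep dropout active at inference time, evaluate the policy repeatedly, and treat the spread of the sampled outputs as the prediction uncertainty that a safety layer can act on. The network, inputs, and dropout rate below are placeholders, not the paper's trained insulin policy.

```python
# Hedged sketch of Monte Carlo Dropout with an untrained toy MLP standing in
# for the learned policy: dropout remains on at test time, and repeated
# stochastic forward passes give a mean prediction plus an uncertainty spread.
import numpy as np

rng = np.random.default_rng(4)
W1, b1 = rng.standard_normal((16, 3)), np.zeros(16)   # placeholder weights
W2, b2 = rng.standard_normal((1, 16)), np.zeros(1)

def policy(x, p_drop=0.2):
    h = np.maximum(W1 @ x + b1, 0.0)
    mask = rng.random(h.shape) > p_drop                # dropout stays on at test time
    h = h * mask / (1.0 - p_drop)                      # inverted-dropout scaling
    return (W2 @ h + b2)[0]

x = np.array([8.5, 0.1, 45.0])   # illustrative features (e.g., glucose, IOB, carbs)
samples = np.array([policy(x) for _ in range(100)])
print(f"output = {samples.mean():.2f} +/- {samples.std():.2f} (MC Dropout spread)")
```
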
