Automation is becoming ubiquitous in all laboratory activities, leading towards precisely defined and codified laboratory protocols. However, the integration between laboratory protocols and mathematical models is still lacking. Models describe physical processes, while protocols define the steps carried out during an experiment: neither covers the domain of the other, although both attempt to characterize the same phenomena. Ideally, we should start from an integrated description of both the model and the steps carried out to test it, so as to concurrently analyze uncertainties in model parameters, equipment tolerances, and data collection. To this end, we present a language for modeling and optimizing experimental biochemical protocols that facilitates such an integrated description and that can be combined with experimental data. We provide a probabilistic semantics for our language, based on a Bayesian interpretation, that formally characterizes the uncertainties in the data collection, the underlying model, and the protocol operations. On a set of case studies we illustrate how the resulting framework allows for automated analysis and optimization of experimental protocols, including Gibson assembly protocols.
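As an illustration of the kind of integrated description the language targets, the sketch below (all function names and parameter values hypothetical) simulates a two-step protocol whose pipetted volumes, reaction rate and measurement all carry uncertainty, and propagates that uncertainty into the predicted outcome by Monte Carlo; the paper's language and its Bayesian semantics are far richer than this.

```python
# Minimal sketch: a protocol as a sequence of steps whose parameters carry prior
# uncertainty, simulated forward to propagate that uncertainty into the measurement.
import numpy as np

rng = np.random.default_rng(0)

def mix(volumes, concentrations):
    """Ideal mixing of reagent volumes with given concentrations."""
    volumes = np.asarray(volumes, float)
    return volumes.sum(), np.dot(volumes, concentrations) / volumes.sum()

def incubate(conc, minutes, rate):
    """Toy first-order conversion during incubation."""
    return conc * (1.0 - np.exp(-rate * minutes))

def run_protocol(rng):
    # Equipment tolerance: pipetted volumes vary around their nominal values.
    v = rng.normal(loc=[10.0, 5.0], scale=[0.2, 0.1])        # microlitres
    # Model uncertainty: the reaction rate has a prior rather than a fixed value.
    rate = rng.lognormal(mean=np.log(0.05), sigma=0.2)        # 1/min
    _, c = mix(v, concentrations=[1.0, 0.0])
    product = incubate(c, minutes=30.0, rate=rate)
    # Measurement noise in data collection.
    return product + rng.normal(0.0, 0.01)

samples = np.array([run_protocol(rng) for _ in range(2000)])
print(f"predicted yield: {samples.mean():.3f} +/- {samples.std():.3f}")
```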
We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models. Specifically, we leverage bound propagation techniques and backward recursion to compute lower bounds on the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states. We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies, as well as to synthesize control policies that improve the certification bounds. On a set of benchmarks, we demonstrate that our framework can be employed to certify policies over BNN predictions for problems of more than $10$ dimensions, and to effectively synthesize policies that significantly increase the lower bound on the satisfaction probability.
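The following sketch illustrates the backward-recursion ingredient under strong simplifications: the state space is already partitioned into abstract regions, and the interval transition bounds are hypothetical placeholders for what bound propagation through the BNN posterior would produce; the recursion then yields a lower bound on the reach-avoid probability for each region.

```python
# Backward recursion with interval transition bounds (hypothetical numbers):
# a pessimistic value function lower-bounds the probability of reaching the
# goal region while avoiding the unsafe one.
import numpy as np

def worst_case_expectation(values, p_low, p_high):
    """Minimise sum_i p_i * values_i over p_low <= p <= p_high with sum(p) = 1."""
    p = p_low.copy()
    slack = 1.0 - p.sum()
    for i in np.argsort(values):          # give the extra mass to the worst states first
        add = min(slack, p_high[i] - p_low[i])
        p[i] += add
        slack -= add
    return float(np.dot(p, values))

# Abstract regions: 0 = goal, 1 = unsafe (both absorbing), 2..3 = ordinary safe regions.
p_low  = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                   [0.2, 0.0, 0.3, 0.2], [0.1, 0.1, 0.2, 0.3]], float)
p_high = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                   [0.5, 0.1, 0.5, 0.4], [0.3, 0.2, 0.4, 0.5]], float)

goal, unsafe, horizon = {0}, {1}, 10
V = np.array([1.0 if q in goal else 0.0 for q in range(4)])
for _ in range(horizon):
    V = np.array([1.0 if q in goal else
                  0.0 if q in unsafe else
                  worst_case_expectation(V, p_low[q], p_high[q]) for q in range(4)])
print("lower bounds on the reach-avoid probability per region:", np.round(V, 3))
```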
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications. Such scenarios demand that GP decisions are not only accurate, but also robust to perturbations. In this paper we present a framework to analyse adversarial robustness of GPs, defined as invariance of the model's decision under bounded perturbations. Given a compact subset of the input space $T \subseteq \mathbb{R}^d$, a point $x^*$ and a GP, we provide provable guarantees of adversarial robustness of the GP by computing lower and upper bounds on its prediction range in $T$. We develop a branch-and-bound scheme to refine the bounds and show, for any $\epsilon > 0$, that our algorithm is guaranteed to converge to values $\epsilon$-close to the actual values in finitely many iterations. The algorithm is anytime and can handle both regression and classification tasks, with analytical formulations for most kernels used in practice. We evaluate our methods on a collection of synthetic and standard benchmark datasets, including SPAM, MNIST and FashionMNIST. We study the effect of approximate inference techniques on robustness and demonstrate how our method can be used for interpretability. Our empirical results suggest that the adversarial robustness of GPs increases with accurate posterior estimation.
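The sketch below illustrates the branch-and-bound idea for the posterior mean of a one-dimensional RBF-kernel GP only: over a box, each kernel term can be enclosed using the nearest and farthest points of the box, which gives a valid enclosure of the mean, and repeatedly splitting the box with the weakest bound refines it until the gap to an evaluated point falls below $\epsilon$. This is an illustrative simplification, not the paper's full analytical treatment of prediction ranges and classification.

```python
# Branch and bound on the posterior mean mu(x) = sum_i alpha_i k(x, x_i) of a 1-D RBF GP.
import numpy as np

X = np.linspace(-3.0, 3.0, 8)                       # training inputs
y = np.sin(X)                                       # observations
ell, sn2 = 1.0, 0.05
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell ** 2) + sn2 * np.eye(8)
alpha = np.linalg.solve(K, y)                       # posterior mean weights

def mean_bounds(lo, hi):
    """Interval enclosure of mu over the box [lo, hi]."""
    d_min = np.maximum(0.0, np.maximum(X - hi, lo - X))    # nearest distance of each x_i to the box
    d_max = np.maximum(np.abs(X - lo), np.abs(X - hi))     # farthest distance
    k_hi = np.exp(-0.5 * d_min ** 2 / ell ** 2)
    k_lo = np.exp(-0.5 * d_max ** 2 / ell ** 2)
    lower = np.sum(np.where(alpha >= 0, alpha * k_lo, alpha * k_hi))
    upper = np.sum(np.where(alpha >= 0, alpha * k_hi, alpha * k_lo))
    return lower, upper

def certified_min(lo, hi, eps=1e-2):
    """Lower bound on min_{x in [lo, hi]} mu(x), eps-close to the true minimum."""
    boxes = [(lo, hi)]
    while True:
        lowers = [mean_bounds(a, b)[0] for a, b in boxes]
        mids = np.array([(a + b) / 2 for a, b in boxes])
        mu_mid = np.exp(-0.5 * (mids[:, None] - X[None, :]) ** 2 / ell ** 2) @ alpha
        if mu_mid.min() - min(lowers) < eps:               # anytime gap check
            return min(lowers)
        a, b = boxes.pop(int(np.argmin(lowers)))           # refine the box with the weakest bound
        boxes += [(a, (a + b) / 2), ((a + b) / 2, b)]

print("certified lower bound on the GP mean over [0, 1]:", round(certified_min(0.0, 1.0), 4))
```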
John Jackson (2021)
We present a data-driven framework for strategy synthesis for partially-known switched stochastic systems. The properties of the system are specified using linear temporal logic (LTL) over finite traces (LTLf), which is as expressive as LTL and enables interpretations over finite behaviors. The framework first learns the unknown dynamics via Gaussian process regression. It then builds a formal abstraction of the switched system in terms of an uncertain Markov model, namely an Interval Markov Decision Process (IMDP), by accounting for both the stochastic behavior of the system and the uncertainty in the learning step. We then synthesize a strategy on the resulting IMDP that maximizes the satisfaction probability of the LTLf specification and is robust against all the uncertainties in the abstraction. This strategy is finally refined into a switching strategy for the original stochastic system. We show that this strategy is near-optimal and provide a bound on its distance (error) to the optimal strategy. We experimentally validate our framework on various case studies, including both linear and non-linear switched stochastic systems.
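The sketch below illustrates only the synthesis step on the resulting IMDP, with hypothetical transition intervals standing in for the bounds derived from the Gaussian process regression error: the adversary's worst-case resolution of each interval set is a small linear program, and the strategy picks, per abstract state and iteration, the mode with the best pessimistic value (plain reachability stands in for the LTLf objective).

```python
# Robust value iteration with mode selection on an IMDP (hypothetical intervals).
import numpy as np
from scipy.optimize import linprog

def worst_case_value(values, p_low, p_high):
    """min_p p . values  s.t.  p_low <= p <= p_high, sum(p) = 1 (a small LP)."""
    n = len(values)
    res = linprog(c=values, A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=list(zip(p_low, p_high)), method="highs")
    return res.fun

# Three abstract regions (region 0 satisfies the reachability target), two switching modes;
# one transition-probability interval per (region, mode) pair.
P_low  = {(1, 0): [0.3, 0.2, 0.1], (1, 1): [0.1, 0.4, 0.2],
          (2, 0): [0.1, 0.1, 0.4], (2, 1): [0.4, 0.1, 0.2]}
P_high = {(1, 0): [0.6, 0.5, 0.4], (1, 1): [0.3, 0.7, 0.5],
          (2, 0): [0.3, 0.4, 0.7], (2, 1): [0.7, 0.3, 0.4]}

V, strategy = np.array([1.0, 0.0, 0.0]), {}
for _ in range(50):                                  # robust value iteration
    for q in (1, 2):
        worst = {a: worst_case_value(V, P_low[q, a], P_high[q, a]) for a in (0, 1)}
        strategy[q] = max(worst, key=worst.get)      # mode with the best pessimistic value
        V[q] = worst[strategy[q]]
print("switching strategy:", strategy, "  satisfaction lower bounds:", np.round(V, 3))
```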
Recently, there have been efforts towards understanding the sampling behaviour of event-triggered control (ETC), aimed at obtaining metrics on its sampling performance and predicting its sampling patterns. Finite-state abstractions, capturing the sampling behaviour of ETC systems, have proven promising in this respect. So far, such abstractions have been constructed for non-stochastic systems. Here, inspired by this framework, we abstract the sampling behaviour of stochastic narrow-sense linear periodic ETC (PETC) systems via Interval Markov Chains (IMCs). Specifically, we define functions over sequences of state measurements and inter-event times that can be expressed as discounted cumulative sums of rewards, and compute bounds on their expected values by constructing appropriate IMCs and equipping them with suitable rewards. Finally, we argue that our results extend to more general classes of functions, thus providing a generic framework for defining and studying various ETC sampling indicators.
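A simplified sketch of the reward encoding described above: abstract states carry quantized inter-event times as rewards, and a sampling indicator is their expected discounted cumulative sum. The chain below is point-valued and its numbers are hypothetical; the paper instead constructs an IMC, which yields upper and lower bounds on this quantity rather than a single value.

```python
# Expected discounted cumulative reward on a reward-equipped Markov chain over
# quantized inter-event times (hypothetical numbers).
import numpy as np

tau = np.array([1.0, 2.0, 4.0])          # representative inter-event times (rewards)
P = np.array([[0.5, 0.3, 0.2],           # transition probabilities between abstract states
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])
gamma = 0.9                              # discount factor

# V = tau + gamma * P @ V, solved in closed form.
V = np.linalg.solve(np.eye(3) - gamma * P, tau)
print("discounted sampling indicator per initial abstract state:", np.round(V, 2))
```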
We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees. We rely on techniques from constraint relaxation of non-convex optimisation problems and modify the standard cross-entropy error model to enforce posterior robustness to worst-case perturbations in $\epsilon$-balls around input points. We illustrate how the resulting framework can be combined with methods commonly employed for approximate inference of BNNs. In an empirical investigation, we demonstrate that the presented approach enables training of certifiably robust models on MNIST, FashionMNIST and CIFAR-10 and can also be beneficial for uncertainty calibration. Our method is the first to directly train certifiable BNNs, thus facilitating their deployment in safety-critical applications.
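A minimal sketch of the modified error model for a single hypothetical weight sample and a toy architecture: the $\epsilon$-ball around an input is propagated through the network with interval arithmetic, and the cross-entropy is evaluated on the worst-case logits. In the training scheme above, a term of this kind replaces the standard cross-entropy and is combined with approximate inference, both of which the sketch omits.

```python
# Worst-case cross-entropy in an epsilon-ball via interval bound propagation.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 4)) * 0.5, np.zeros(16)   # one sampled weight set (toy)
W2, b2 = rng.normal(size=(3, 16)) * 0.5, np.zeros(3)

def interval_affine(lo, hi, W, b):
    """Exact interval image of an affine layer."""
    centre, radius = (lo + hi) / 2, (hi - lo) / 2
    c, r = W @ centre + b, np.abs(W) @ radius
    return c - r, c + r

def worst_case_cross_entropy(x, eps, label):
    lo, hi = x - eps, x + eps
    lo, hi = interval_affine(lo, hi, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)    # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    # The adversary drives the true-class logit down and the others up.
    worst_logits = hi.copy()
    worst_logits[label] = lo[label]
    worst_logits -= worst_logits.max()                   # stable log-softmax
    return -(worst_logits[label] - np.log(np.exp(worst_logits).sum()))

x = rng.normal(size=4)
print("worst-case cross-entropy in the eps-ball:",
      round(worst_case_cross_entropy(x, eps=0.1, label=0), 3))
```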
The existence of adversarial examples underscores the importance of understanding the robustness of machine learning models. Bayesian neural networks (BNNs), due to their calibrated uncertainty, have been shown to possess favorable adversarial robustness properties. However, when approximate Bayesian inference methods are employed, the adversarial robustness of BNNs is still not well understood. In this work, we employ gradient-free optimization methods in order to find adversarial examples for BNNs. In particular, we consider genetic algorithms, surrogate models, and zeroth-order optimization methods, and adapt them to the goal of finding adversarial examples for BNNs. In an empirical evaluation on the MNIST and Fashion MNIST datasets, we show that for various approximate Bayesian inference methods the use of gradient-free algorithms can greatly improve the rate of finding adversarial examples compared to state-of-the-art gradient-based methods.
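A minimal sketch of the shared interface of such attacks, with a toy posterior and a plain random search inside the $\epsilon$-ball standing in for the genetic, surrogate-based and zeroth-order methods adapted in the paper: only queries to the posterior predictive are needed, no gradients.

```python
# A gradient-free (random-search) attack on a toy BNN posterior predictive.
import numpy as np

rng = np.random.default_rng(0)
posterior_samples = [(rng.normal(size=(3, 4)), rng.normal(size=3)) for _ in range(20)]

def predictive_probs(x):
    """Posterior predictive: average softmax over sampled weight sets."""
    probs = []
    for W, b in posterior_samples:
        z = W @ x + b
        z -= z.max()
        probs.append(np.exp(z) / np.exp(z).sum())
    return np.mean(probs, axis=0)

def random_search_attack(x, label, eps, iters=500):
    """Keep the perturbation that most reduces the true-class predictive probability."""
    best, best_score = x.copy(), predictive_probs(x)[label]
    for _ in range(iters):
        candidate = x + rng.uniform(-eps, eps, size=x.shape)
        score = predictive_probs(candidate)[label]
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score

x, label = rng.normal(size=4), 0
adv, p = random_search_attack(x, label, eps=0.3)
print(f"true-class probability dropped from {predictive_probs(x)[label]:.3f} to {p:.3f}")
```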
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations. Given a compact set of input points, $T \subseteq \mathbb{R}^m$, we study the probability w.r.t. the BNN posterior that all the points in $T$ are mapped to the same region $S$ in the output space. In particular, this can be used to evaluate the probability that a network sampled from the BNN is vulnerable to adversarial attacks. We rely on relaxation techniques from non-convex optimization to develop a method for computing a lower bound on probabilistic safety for BNNs, deriving explicit procedures for the case of interval and linear function propagation techniques. We apply our methods to BNNs trained on a regression task, airborne collision avoidance, and MNIST, empirically showing that our approach allows one to certify probabilistic safety of BNNs with millions of parameters.
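A minimal sketch of the final counting step under a mean-field Gaussian posterior: assuming that the (omitted) interval or linear function propagation has certified certain disjoint weight boxes as safe for the whole input set $T$, the posterior mass of those boxes is a valid lower bound on probabilistic safety. All numbers below are hypothetical placeholders.

```python
# Lower bound on probabilistic safety as the posterior mass of certified-safe weight boxes.
import numpy as np
from scipy.stats import norm

mu    = np.array([0.2, -0.1, 0.5])        # posterior means of three weights (toy)
sigma = np.array([0.3, 0.2, 0.4])         # posterior standard deviations

# Disjoint weight boxes (lower, upper) assumed certified safe by the omitted relaxation step.
safe_boxes = [(np.array([0.0, -0.3, 0.2]), np.array([0.4, 0.1, 0.8])),
              (np.array([0.4, -0.3, 0.2]), np.array([0.6, 0.1, 0.8]))]

def box_mass(lo, hi):
    """Posterior probability of a box under an axis-aligned Gaussian."""
    return float(np.prod(norm.cdf(hi, mu, sigma) - norm.cdf(lo, mu, sigma)))

lower_bound = sum(box_mass(lo, hi) for lo, hi in safe_boxes)
print(f"lower bound on probabilistic safety: {lower_bound:.3f}")
```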
John Jackson (2020)
The deployment of autonomous systems that operate in unstructured environments necessitates algorithms to verify their safety. This can be challenging due to, e.g., black-box components in the control software, or under-modelled dynamics that prevent model-based verification. We present a novel framework for verifying an unknown dynamical system from a given set of noisy observations of its dynamics. Using Gaussian processes trained on this data set, the framework abstracts the system as an uncertain Markov process with discrete states defined over the safe set. The transition bounds of the abstraction are derived from the probabilistic error bounds between the regression and the underlying system. An existing approach for verifying safety properties over uncertain Markov processes then generates safety guarantees. We demonstrate the versatility of the framework on several examples, including switched and nonlinear systems.
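An illustrative sketch of the abstraction step for a one-dimensional system: a GP is fit to noisy one-step observations, the safe set is partitioned into regions, and the probability of transitioning between two regions is bracketed over the source region. Here the bracketing uses a dense grid for simplicity, whereas the framework above derives sound bounds from the probabilistic regression error.

```python
# Bracketing one-step transition probabilities between regions from a GP posterior.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
f = lambda x: 0.8 * x + 0.2 * np.sin(3 * x)                # unknown dynamics (ground truth)
X = rng.uniform(-1, 1, 30)
Y = f(X) + 0.05 * rng.normal(size=30)                      # noisy one-step observations

ell, sn2 = 0.5, 0.05 ** 2
k = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)
Kinv = np.linalg.inv(k(X, X) + sn2 * np.eye(30))

def gp_posterior(xs):
    """Posterior mean and standard deviation of the next state at query points xs."""
    Ks = k(xs, X)
    mean = Ks @ Kinv @ Y
    var = 1.0 - np.sum(Ks @ Kinv * Ks, axis=1) + sn2
    return mean, np.sqrt(np.maximum(var, 1e-12))

regions = [(-1.0, -0.5), (-0.5, 0.0), (0.0, 0.5), (0.5, 1.0)]
src, tgt = regions[1], regions[2]
grid = np.linspace(*src, 200)                              # dense grid over the source region
mean, std = gp_posterior(grid)
p = norm.cdf(tgt[1], mean, std) - norm.cdf(tgt[0], mean, std)   # P(x' in tgt | x) on the grid
print(f"transition probability from {src} to {tgt} in [{p.min():.3f}, {p.max():.3f}]")
```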
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications. Despite significant efforts, both practical and theoretical, the problem remains open. In this paper, we analyse the geometry of adversarial attacks in the large-data, overparametrized limit for Bayesian Neural Networks (BNNs). We show that, in the limit, vulnerability to gradient-based attacks arises as a result of degeneracy in the data distribution, i.e., when the data lies on a lower-dimensional submanifold of the ambient space. As a direct consequence, we demonstrate that in the limit BNN posteriors are robust to gradient-based adversarial attacks. Experimental results on the MNIST and Fashion MNIST datasets, with BNNs trained with Hamiltonian Monte Carlo and Variational Inference, support this line of argument, showing that BNNs can display both high accuracy and robustness to gradient-based adversarial attacks.
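A toy linear illustration of the geometric argument (linear models stand in for neural networks purely for exposition): when the data lies on a lower-dimensional submanifold, the off-manifold component of the parameters is unconstrained by the data, so individual posterior samples have large, essentially arbitrary input gradients off the manifold, while averaging over a symmetric posterior cancels them.

```python
# Off-manifold input gradients of single samples versus the posterior average.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.stack([rng.normal(size=n), np.zeros(n)], axis=1)   # degenerate data: x2 = 0 for all points
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=n)

# "Posterior samples": least-squares fit along the data-supported direction, plus an
# unconstrained, symmetrically distributed component along the null direction e2.
w1_hat = np.dot(X[:, 0], y) / np.dot(X[:, 0], X[:, 0])
posterior = np.array([[w1_hat, rng.normal(scale=5.0)] for _ in range(1000)])

# For a linear model f_w(x) = w . x, the input gradient is w itself.
print("typical off-manifold gradient of a single sample:", round(np.abs(posterior[:, 1]).mean(), 2))
print("off-manifold gradient of the posterior average:  ", round(abs(posterior[:, 1].mean()), 3))
```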