A primary motivation for the development and implementation of structural health monitoring (SHM) systems is the prospect of gaining the ability to make informed decisions regarding the operation and maintenance of structures and infrastructure. Unfortunately, descriptive labels for measured data corresponding to health-state information for the structure of interest are seldom available prior to the implementation of a monitoring system. This issue limits the applicability of traditional supervised and unsupervised approaches to machine learning in the development of statistical classifiers for decision-supporting SHM systems. The current paper presents a risk-based formulation of active learning, in which the querying of class-label information is guided by the expected value of said information for each incipient data point. When applied to structural health monitoring, the querying of class labels can be mapped onto the inspection of the structure of interest in order to determine its health state. In the current paper, the risk-based active learning process is explained and visualised via a representative numerical example and subsequently applied to the Z24 Bridge benchmark. The results of the case studies indicate that a decision-maker's performance can be improved via the risk-based active learning of a statistical classifier, such that the decision process itself is taken into account.
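To make the expected-value-of-information criterion described above concrete, the following is a minimal sketch of risk-based querying via the expected value of perfect information (EVPI): a label is queried (i.e. the structure is inspected) only when knowing the true health state before acting would be worth more than the cost of inspection. The class probabilities, utility matrix and inspection cost are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of risk-based active learning via EVPI.
import numpy as np

def evpi(p_class, utility):
    """p_class: (K,) class probabilities for one data point.
    utility:  (A, K) utility of taking action a when the true
              health state is k."""
    # Expected utility of acting now, without the label.
    eu_prior = (utility @ p_class).max()
    # Expected utility if the true label were revealed before acting.
    eu_perfect = (utility.max(axis=0) * p_class).sum()
    return eu_perfect - eu_prior

# Two health states (healthy, damaged), two actions (do nothing, repair).
utility = np.array([[0.0, -50.0],    # do nothing
                    [-5.0,  -5.0]])  # repair
inspection_cost = 1.0

p = np.array([0.7, 0.3])             # classifier output for a new point
if evpi(p, utility) > inspection_cost:
    print("query label (inspect the structure)")
else:
    print("act on current beliefs")
```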
Obtaining the ability to make informed decisions regarding the operation and maintenance of structures provides a major incentive for the implementation of structural health monitoring (SHM) systems. Probabilistic risk assessment (PRA) is an established methodology that allows engineers to make risk-informed decisions regarding the design and operation of safety-critical and high-value assets in industries such as nuclear and aerospace. The current paper aims to formulate a risk-based decision framework for structural health monitoring that combines elements of PRA with the existing SHM paradigm. As apt tools for reasoning and decision-making under uncertainty, probabilistic graphical models serve as the foundation of the framework. The framework involves modelling failure modes of structures as Bayesian network representations of fault trees and then assigning costs or utilities to the failure events. The fault trees allow information to pass from probabilistic classifiers to influence-diagram representations of decision processes, whilst also providing nodes within the graphical model that may be queried to obtain marginal probability distributions over local damage states within a structure. Optimal courses of action for structures are selected by determining the strategies that maximise expected utility. The risk-based framework is demonstrated on a realistic truss-like structure and supported by experimental data. Finally, the risk-based approach is discussed and further challenges pertaining to decision-making processes in the context of SHM are identified.
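As a toy illustration of the fault-tree-plus-expected-utility reasoning described above, the sketch below assumes a two-member structure whose failure event is an OR gate over local damage states, with illustrative failure and repair costs; the component damage probabilities stand in for the outputs of probabilistic classifiers. It is not the paper's framework, only the underlying decision calculation.

```python
# Hedged sketch: OR-gate fault tree + expected-utility comparison of actions.
import itertools

p_damage = {"member_1": 0.10, "member_2": 0.02}   # P(local damage)

def p_failure(p_damage, repaired):
    """Probability of the top (failure) event of an OR-gate fault tree,
    assuming repaired members cannot be damaged."""
    prob = 0.0
    members = list(p_damage)
    for states in itertools.product([0, 1], repeat=len(members)):
        p = 1.0
        for m, s in zip(members, states):
            pd = 0.0 if m in repaired else p_damage[m]
            p *= pd if s else (1.0 - pd)
        if any(states):        # OR gate: any damaged member fails the structure
            prob += p
    return prob

cost_failure, cost_repair = 1000.0, 20.0
actions = {"do_nothing": [], "repair_member_1": ["member_1"]}
for name, repaired in actions.items():
    eu = -(cost_failure * p_failure(p_damage, repaired)
           + cost_repair * len(repaired))
    print(name, round(eu, 2))     # pick the action with the highest expected utility
```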
Motivated by the need for efficient and personalized learning in mobile health (mHealth), we investigate the problem of online kernel selection for Gaussian process regression in the multi-task setting. We propose a novel generative process on the kernel composition for this purpose. Our method demonstrates that trajectories of kernel evolutions can be transferred between users to improve learning and that the kernels themselves are meaningful for an mHealth prediction goal.
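For readers unfamiliar with compositional kernel selection, the following sketch illustrates the basic setting only: candidate kernel compositions for a GP regressor are compared by log marginal likelihood on synthetic data. It does not implement the paper's generative process over compositions or its multi-task transfer scheme.

```python
# Hedged sketch of comparing candidate kernel compositions for GP regression.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ExpSineSquared, WhiteKernel

rng = np.random.default_rng(0)
X = np.linspace(0, 10, 60).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * X.ravel() + 0.1 * rng.standard_normal(60)

candidates = {
    "smooth":            RBF() + WhiteKernel(),
    "periodic":          ExpSineSquared() + WhiteKernel(),
    "smooth + periodic": RBF() + ExpSineSquared() + WhiteKernel(),
}
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    print(f"{name}: log marginal likelihood = "
          f"{gp.log_marginal_likelihood_value_:.1f}")
```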
We present a new active learning algorithm that adaptively partitions the input space into a finite number of regions, and subsequently seeks a distinct predictor for each region, both phases actively requesting labels. We prove theoretical guarantees for both the generalization error and the label complexity of our algorithm, and analyze the number of regions defined by the algorithm under some mild assumptions. We also report the results of an extensive suite of experiments on several real-world datasets demonstrating substantial empirical benefits over existing single-region and non-adaptive region-based active learning baselines.
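The sketch below conveys the region-based idea only: the input space is partitioned (here with a fixed k-means clustering, purely for illustration), one classifier is kept per region, and the most uncertain unlabelled point in each region is queried. The paper's algorithm partitions adaptively and comes with guarantees; none of that is reproduced here.

```python
# Hedged sketch of region-based uncertainty sampling on synthetic data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)          # ground truth (hidden from learner)
labelled = set(rng.choice(500, 20, replace=False))

regions = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
for _ in range(10):                              # 10 rounds of querying
    for r in range(4):
        idx = np.where(regions == r)[0]
        lab = [i for i in idx if i in labelled]
        unlab = [i for i in idx if i not in labelled]
        if len(set(y[lab])) < 2 or not unlab:
            continue
        clf = LogisticRegression().fit(X[lab], y[lab])
        p = clf.predict_proba(X[unlab])[:, 1]
        labelled.add(unlab[int(np.argmin(np.abs(p - 0.5)))])  # most uncertain point
print(f"labels requested: {len(labelled)}")
```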
The advancement of machine learning algorithms has opened a wide scope for vibration-based SHM (Structural Health Monitoring). Vibration-based SHM is based on the fact that damage will alter the dynamic properties of the structure, viz. its structural response, frequencies, mode shapes, etc. The responses measured using sensors, which are high-dimensional in nature, can be intelligently analyzed using machine learning techniques for damage assessment. Neural networks employing multilayer architectures are expressive models capable of capturing complex relationships between input-output pairs, but do not account for uncertainty in network outputs. A BNN (Bayesian Neural Network) extends a standard network with posterior inference: it is a neural network with a prior distribution on its weights. Deep learning architectures such as CNNs (convolutional neural networks) and LSTMs (long short-term memory networks) are good candidates for representation learning from high-dimensional data. The advantage of using CNNs over multi-layer neural networks is that they are good feature extractors as well as classifiers, which eliminates the need for hand-engineered features. LSTM networks are mainly used for sequence modeling. This paper presents both a Bayesian multi-layer perceptron and a deep learning-based approach for damage detection and location identification in beam-like structures. Raw frequency response data simulated using finite element analysis are fed as the input to the networks. As part of this, frequency responses were generated for a series of cantilever-beam simulations involving different damage scenarios. This case study shows the effectiveness of the above approaches in predicting bending rigidity with an acceptable error rate.
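As a rough illustration of the kind of network described above, the sketch below trains a small 1D CNN on frequency-response vectors and adds a crude predictive-uncertainty estimate via Monte Carlo dropout. The input length (512), layer sizes, training data and the dropout-based uncertainty scheme are assumptions for illustration, not the paper's exact architectures or its Bayesian multi-layer perceptron.

```python
# Hedged sketch: 1D CNN regression on frequency-response input with MC dropout.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(512, 1)),
    layers.Conv1D(16, 9, activation="relu"),
    layers.MaxPooling1D(4),
    layers.Conv1D(32, 9, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dropout(0.2),
    layers.Dense(64, activation="relu"),
    layers.Dense(1),                       # bending-rigidity estimate
])
model.compile(optimizer="adam", loss="mse")

# Synthetic stand-in for FE-simulated frequency responses and targets.
X = np.random.randn(200, 512, 1).astype("float32")
y = np.random.randn(200, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

# Crude predictive uncertainty via Monte Carlo dropout (dropout active at inference).
samples = np.stack([model(X[:5], training=True).numpy() for _ in range(20)])
print(samples.mean(axis=0).ravel(), samples.std(axis=0).ravel())
```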
Inherent risk scoring is an important function in anti-money laundering, used to determine the riskiness of an individual during onboarding, before fraudulent transactions occur. It is, however, often fraught with two challenges: (1) inconsistent notions among experts of what constitutes high or low risk, and (2) the lack of labeled data. This paper explores a new paradigm of data labeling and data collection to tackle these issues. The data labeling is choice-based; the expert does not provide an absolute risk score but merely chooses the most/least risky example out of a small choice set, which reduces inconsistency because experts make only relative judgments of risk. The data collection is synthetic; examples are crafted using optimal experimental design methods, obviating the need for real data, which is often difficult to obtain due to regulatory concerns. We present the methodology of an end-to-end inherent risk scoring algorithm that we built for a large financial institution. The system was trained on a small set of synthetic data (188 examples, 24 features) whose labels were obtained via the choice-based paradigm using an efficient number of expert labelers. The system achieves 89% accuracy on a test set of 52 examples, with an area under the ROC curve of 93%.
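To illustrate how choice-based labels can be turned into a risk score, the sketch below assumes the expert marks the riskiest example in each choice set; the choices are converted to pairwise preferences and a linear score is fitted on feature differences (a Bradley-Terry-style model). The simulated expert, feature count and choice sets are illustrative, and no optimal experimental design step is included.

```python
# Hedged sketch: fitting a linear risk score from choice-based (relative) labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
w_true = rng.standard_normal(24)                       # hidden "true" risk weights
choice_sets = [rng.standard_normal((4, 24)) for _ in range(60)]

diffs, prefs = [], []
for S in choice_sets:
    winner = int(np.argmax(S @ w_true))                # simulated expert's riskiest pick
    for j in range(len(S)):
        if j != winner:
            diffs.append(S[winner] - S[j]); prefs.append(1)
            diffs.append(S[j] - S[winner]); prefs.append(0)

clf = LogisticRegression(fit_intercept=False).fit(np.array(diffs), prefs)
risk_score = lambda x: float(clf.decision_function(x.reshape(1, -1)))
print(risk_score(rng.standard_normal(24)))             # score a new individual
```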