
Explainable Incipient Fault Detection Systems for Photovoltaic Panels

Publication date: 2020
Language: English





This paper presents an eXplainable Fault Detection and Diagnosis System (XFDDS) for incipient faults in PV panels. The XFDDS is a hybrid approach that combines model-based and data-driven frameworks. Model-based FDD for PV panels lacks high-fidelity models at low irradiance conditions for detecting incipient faults. To overcome this, a novel irradiance-based three diode model (IB3DM) is proposed. It is a nine-parameter model that provides higher accuracy even at low irradiance conditions, an important aspect for distinguishing incipient faults from noise. To exploit PV data, extreme gradient boosting (XGBoost) is used due to its ability to detect incipient faults. Lack of explainability, feature variability across sample instances, and false alarms are challenges with data-driven FDD methods. These shortcomings are overcome by hybridizing XGBoost with the IB3DM and by using eXplainable Artificial Intelligence (XAI) techniques. To combine XGBoost and the IB3DM, a fault-signature metric is proposed that helps reduce false alarms and triggers an explanation when an incipient fault is detected. To provide explainability, an XAI application is developed. It uses the local interpretable model-agnostic explanations (LIME) framework and provides explanations of classifier outputs for individual data instances. These explanations help field engineers and technicians perform troubleshooting and maintenance operations. The proposed XFDDS is illustrated using experiments on different PV technologies, and the results demonstrate its benefits.
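For context (this is background, not material taken from the paper): a conventional three diode PV model already involves nine unknowns, namely the photocurrent, three diode saturation currents, three ideality factors, and the series and shunt resistances, and can be written as

I = I_{ph} - \sum_{k=1}^{3} I_{0k}\left[\exp\!\left(\frac{V + I R_s}{a_k V_T}\right) - 1\right] - \frac{V + I R_s}{R_{sh}}

where V_T is the thermal voltage and a_k are the ideality factors. How the IB3DM incorporates irradiance dependence while keeping nine parameters is specified in the paper and is not reproduced here.

The following Python sketch is also not the authors' implementation; it only illustrates, with assumed feature names, placeholder data, and a purely illustrative alarm threshold, how an XGBoost classifier and a LIME tabular explainer could be wired together for the data-driven half of such a pipeline.

import numpy as np
import xgboost as xgb
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical PV operating-point features (names assumed for illustration).
feature_names = ["irradiance", "module_temp", "v_mp", "i_mp", "fill_factor"]
class_names = ["healthy", "incipient_fault"]

# Placeholder training data; in practice these would be labelled PV measurements.
rng = np.random.default_rng(0)
X_train = rng.random((500, len(feature_names)))
y_train = rng.integers(0, 2, size=500)

clf = xgb.XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=class_names, mode="classification")

x_new = X_train[0]                                  # one operating point to diagnose
fault_prob = clf.predict_proba(x_new.reshape(1, -1))[0, 1]

# In the XFDDS, a fault-signature metric built from the IB3DM would gate this
# alarm; the fixed threshold below is purely illustrative.
if fault_prob > 0.5:
    explanation = explainer.explain_instance(x_new, clf.predict_proba, num_features=3)
    print(explanation.as_list())                    # feature contributions behind the alarm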




Related research

Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system designers in their generation and prioritization of requirements and further help generate explanations that are better aligned to users' and situational needs.
Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
The scope of data-driven fault diagnosis models has been greatly expanded through deep learning (DL). However, classical convolutional and recurrent structures have shortcomings in computational efficiency and feature representation, while the latest Transformer architecture, based on the attention mechanism, has not yet been applied in this field. To address these problems, we propose a novel time-frequency Transformer (TFT) model inspired by the massive success of the standard Transformer in sequence processing. Specifically, we design a new tokenizer and encoder module to extract effective abstractions from the time-frequency representation (TFR) of vibration signals. On this basis, a new end-to-end fault diagnosis framework based on the time-frequency Transformer is presented in this paper. Through case studies on bearing experimental datasets, we construct the optimal Transformer structure and verify the performance of the diagnostic method. The superiority of the proposed method is demonstrated in comparison with a benchmark model and other state-of-the-art methods.
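As a rough illustration only (not the authors' code, hyperparameters, or exact tokenizer), the PyTorch sketch below shows one way a time-frequency representation could be split into patch tokens and fed to a standard Transformer encoder for fault classification; all dimensions and names are assumptions.

import torch
import torch.nn as nn

class TinyTFT(nn.Module):
    def __init__(self, freq_bins=64, time_steps=64, patch=8,
                 d_model=128, n_heads=4, n_layers=2, n_classes=4):
        super().__init__()
        # "Tokenizer": non-overlapping patches of the TFR mapped to embeddings.
        self.patchify = nn.Conv2d(1, d_model, kernel_size=patch, stride=patch)
        n_tokens = (freq_bins // patch) * (time_steps // patch)
        self.pos = nn.Parameter(torch.zeros(1, n_tokens, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, tfr):                                  # tfr: (batch, 1, freq, time)
        tokens = self.patchify(tfr).flatten(2).transpose(1, 2)   # (batch, tokens, d_model)
        z = self.encoder(tokens + self.pos)
        return self.head(z.mean(dim=1))                      # mean-pool tokens, then classify

logits = TinyTFT()(torch.randn(2, 1, 64, 64))                # e.g. an STFT of vibration data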
Nowadays, multi-sensor technologies are applied in many fields, e.g., Health Care (HC), Human Activity Recognition (HAR), and Industrial Control Systems (ICS). These sensors can generate a substantial amount of multivariate time-series data. Unsupervised anomaly detection on multi-sensor time-series data has proven critical in machine learning research. The key challenge is to discover generalized normal patterns by capturing spatial-temporal correlations in multi-sensor data. Beyond this challenge, noisy data is often intertwined with the training data, which is likely to mislead the model by making it hard to distinguish between normal, abnormal, and noisy data. Few previous studies can jointly address these two challenges. In this paper, we propose a novel deep learning-based anomaly detection algorithm called the Deep Convolutional Autoencoding Memory network (CAE-M). We first build a Deep Convolutional Autoencoder to characterize the spatial dependence of multi-sensor data, with a Maximum Mean Discrepancy (MMD) term to better distinguish between noisy, normal, and abnormal data. Then, we construct a Memory Network consisting of linear (Autoregressive Model) and non-linear predictions (Bidirectional LSTM with Attention) to capture temporal dependence in the time-series data. Finally, CAE-M jointly optimizes these two subnetworks. We empirically compare the proposed approach with several state-of-the-art anomaly detection methods on HAR and HC datasets. Experimental results demonstrate that our proposed model outperforms the existing methods.
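Purely as a hedged sketch (not the CAE-M implementation, and omitting the MMD term, the autoregressive branch, and the attention mechanism described above), the PyTorch snippet below combines a convolutional autoencoder for per-window reconstruction with a bidirectional LSTM forecaster, and scores anomalies by the sum of the two errors.

import torch
import torch.nn as nn

class MiniCAEM(nn.Module):
    def __init__(self, n_sensors=8, hidden=32):
        super().__init__()
        # Convolutional autoencoder over the sensor channels of each window.
        self.enc = nn.Sequential(nn.Conv1d(n_sensors, hidden, 3, padding=1), nn.ReLU())
        self.dec = nn.Conv1d(hidden, n_sensors, 3, padding=1)
        # Bidirectional LSTM that forecasts the last reading from the earlier ones.
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True, bidirectional=True)
        self.pred = nn.Linear(2 * hidden, n_sensors)

    def forward(self, x):                        # x: (batch, time, sensors)
        recon = self.dec(self.enc(x.transpose(1, 2))).transpose(1, 2)
        h, _ = self.lstm(x[:, :-1, :])
        next_step = self.pred(h[:, -1, :])
        recon_err = (recon - x).pow(2).mean(dim=(1, 2))
        pred_err = (next_step - x[:, -1, :]).pow(2).mean(dim=1)
        return recon_err + pred_err              # joint anomaly score per window

scores = MiniCAEM()(torch.randn(4, 50, 8))       # higher score = more anomalous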
With the growing capabilities of intelligent systems, the integration of artificial intelligence (AI) and robots into everyday life is increasing. However, when interacting in such complex human environments, failures of intelligent systems, such as robots, can be inevitable, requiring recovery assistance from users. In this work, we develop automated, natural-language explanations for failures encountered during an AI agent's plan execution. These explanations are developed with a focus on helping non-expert users understand different points of failure and better provide recovery assistance. Specifically, we introduce a context-based information type for explanations that helps non-expert users both understand the underlying cause of a system failure and select proper failure recoveries. Additionally, we extend an existing sequence-to-sequence methodology to automatically generate our context-based explanations. By doing so, we are able to develop a model that can generalize context-based explanations over both different failure types and failure scenarios.


