
Deep Learning on Key Performance Indicators for Predictive Maintenance in SAP HANA

 Added by Jongyoon Song
 Publication date 2018
Research language: English





With the new era of cloud and big data, Database Management Systems (DBMSs) have become more crucial in numerous enterprise business applications across all industries. Accordingly, the importance of their proactive and preventive maintenance has also increased. However, detecting problems by predefined rules or stochastic modeling has limitations, particularly when analyzing high-dimensional Key Performance Indicator (KPI) data from a DBMS. In recent years, Deep Learning (DL) has opened new opportunities for this complex analysis. In this paper, we present two complementary DL approaches to detect anomalies in SAP HANA. A temporal learning approach is used to detect abnormal patterns based on unlabeled historical data, whereas a spatial learning approach is used to classify known anomalies based on labeled data. We implement a system in SAP HANA integrated with Google TensorFlow. The experimental results with real-world data confirm the effectiveness of the system and models.
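The abstract does not include code, so the sketch below only illustrates the temporal-learning idea it describes: an autoencoder trained on sliding windows of unlabeled KPI history, where a high reconstruction error flags abnormal patterns. It uses TensorFlow (as the paper does), but the KPI count, window length, model size, and threshold rule are illustrative assumptions, not details from the paper.

```python
# Minimal sketch (not the paper's implementation): unsupervised temporal
# anomaly detection on DBMS KPI windows with a TensorFlow LSTM autoencoder.
import numpy as np
import tensorflow as tf

WINDOW = 30      # assumed window length (timesteps per sample)
N_KPIS = 8       # assumed number of KPIs (e.g. CPU, memory, I/O latency, ...)

def make_windows(kpi_series: np.ndarray, window: int = WINDOW) -> np.ndarray:
    """Slice a (T, N_KPIS) KPI matrix into overlapping windows."""
    return np.stack([kpi_series[i:i + window]
                     for i in range(len(kpi_series) - window + 1)])

def build_autoencoder() -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(WINDOW, N_KPIS))
    z = tf.keras.layers.LSTM(32)(inputs)                     # encode the window
    x = tf.keras.layers.RepeatVector(WINDOW)(z)              # repeat latent code
    x = tf.keras.layers.LSTM(32, return_sequences=True)(x)   # decode
    outputs = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(N_KPIS))(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

if __name__ == "__main__":
    # Synthetic stand-in for unlabeled historical KPI data.
    history = np.random.rand(1000, N_KPIS).astype("float32")
    windows = make_windows(history)

    model = build_autoencoder()
    model.fit(windows, windows, epochs=5, batch_size=64, verbose=0)

    # Windows whose reconstruction error exceeds a percentile threshold
    # are flagged as abnormal patterns.
    errors = np.mean((model.predict(windows, verbose=0) - windows) ** 2,
                     axis=(1, 2))
    threshold = np.percentile(errors, 99)
    anomalies = np.where(errors > threshold)[0]
    print(f"flagged {len(anomalies)} of {len(windows)} windows")
```

The spatial-learning approach mentioned in the abstract would instead be a supervised classifier trained on the labeled anomaly data; it is not sketched here.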

Related research

This paper provides an economic perspective on the predictive maintenance of filtration units. The rise of predictive maintenance is possible due to the growing trend of Industry 4.0 and the availability of inexpensive sensors. However, the adoption rate for predictive maintenance by companies remains low. The majority of companies are sticking to corrective and preventive maintenance. This is not due to a lack of information on the technical implementation of predictive maintenance, with an abundance of research papers on state-of-the-art machine learning algorithms that can be used effectively. The main issue is that most upper management has not yet been fully convinced of the idea of predictive maintenance. The economic value of the implementation has to be linked to the predictive maintenance program for better justification by the management. In this study, three machine learning models were trained to demonstrate the economic value of predictive maintenance. Data was collected from a testbed located at the Singapore University of Technology and Design. The testbed closely resembles a real-world water treatment plant. A cost-benefit analysis coupled with Monte Carlo simulation was proposed. It provided a structured approach to document potential costs and savings by implementing a predictive maintenance program. The simulation incorporated real-world risk into a financial model. Financial figures were adapted from CITIC Envirotech Ltd, a leading membrane-based integrated environmental solutions provider. Two scenarios were used to elaborate on the economic values of predictive maintenance. Overall, this study seeks to bridge the gap between technical and business domains of predictive maintenance.
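As a rough illustration of a cost-benefit analysis coupled with Monte Carlo simulation, the sketch below compares simulated annual costs of corrective versus predictive maintenance. All distributions and cost figures are placeholder assumptions, not the figures adapted from CITIC Envirotech Ltd or the paper's financial model.

```python
# Illustrative Monte Carlo cost-benefit sketch (placeholder figures): compare
# the simulated annual cost of corrective maintenance against a
# predictive-maintenance program and report the probability of net savings.
import numpy as np

rng = np.random.default_rng(0)
N_RUNS = 10_000

def simulate_corrective() -> np.ndarray:
    # Assumed: unplanned failures per year ~ Poisson(4), each costing roughly
    # N(50k, 10k) in repairs plus downtime.
    failures = rng.poisson(4, N_RUNS)
    cost_per_failure = rng.normal(50_000, 10_000, N_RUNS)
    return failures * cost_per_failure

def simulate_predictive() -> np.ndarray:
    # Assumed: sensors + models cut unplanned failures to Poisson(1), at a
    # fixed program cost plus cheaper planned interventions.
    program_cost = 60_000
    failures = rng.poisson(1, N_RUNS)
    cost_per_failure = rng.normal(50_000, 10_000, N_RUNS)
    planned = rng.normal(20_000, 3_000, N_RUNS)
    return program_cost + planned + failures * cost_per_failure

if __name__ == "__main__":
    savings = simulate_corrective() - simulate_predictive()
    print(f"expected annual savings: {savings.mean():,.0f}")
    print(f"probability savings > 0: {(savings > 0).mean():.2%}")
```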
Internet of Things (IoT) with its growing number of deployed devices and applications raises significant challenges for network maintenance procedures. In this work, we formulate a problem of autonomous maintenance in IoT networks as a Partially Observable Markov Decision Process. Subsequently, we utilize Deep Reinforcement Learning algorithms (DRL) to train agents that decide if a maintenance procedure is in order or not and, in the former case, the proper type of maintenance needed. To avoid wasting the scarce resources of IoT networks we utilize the Age of Information (AoI) metric as a reward signal for the training of the smart agents. AoI captures the freshness of the sensory data which are transmitted by the IoT sensors as part of their normal service provision. Numerical results indicate that AoI integrates enough information about the past and present states of the system to be successfully used in the training of smart agents for the autonomous maintenance of the network.
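The sketch below is a toy rendering of the idea of using the Age of Information metric as a reward signal for a maintenance agent; the environment dynamics, costs, and the greedy baseline are assumptions for illustration, not the paper's POMDP formulation or its DRL training setup.

```python
# Toy sketch (assumed dynamics): track per-sensor Age of Information (AoI) and
# expose a negative-AoI reward that an RL maintenance agent could optimize.
import numpy as np

class AoIMaintenanceEnv:
    """AoI grows each step; a maintenance action resets one sensor's AoI
    at a fixed cost. Reward trades data freshness against maintenance effort."""

    def __init__(self, n_sensors: int = 5, maintenance_cost: float = 2.0):
        self.n_sensors = n_sensors
        self.maintenance_cost = maintenance_cost
        self.aoi = np.zeros(n_sensors)

    def step(self, action: int):
        # action == n_sensors means "do nothing"; otherwise maintain that sensor.
        self.aoi += 1.0                      # every sensor's data gets staler
        cost = 0.0
        if action < self.n_sensors:
            self.aoi[action] = 0.0           # fresh data after maintenance
            cost = self.maintenance_cost
        reward = -self.aoi.mean() - cost     # freshness vs. resource usage
        return self.aoi.copy(), reward

if __name__ == "__main__":
    env = AoIMaintenanceEnv()
    # Greedy baseline: maintain the stalest sensor once its AoI is high.
    total = 0.0
    for _ in range(100):
        stalest = int(np.argmax(env.aoi))
        action = stalest if env.aoi[stalest] > 3 else env.n_sensors
        _, r = env.step(action)
        total += r
    print(f"return of greedy baseline: {total:.1f}")
```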
Crucial for building trust in deep learning models for critical real-world applications is efficient and theoretically sound uncertainty quantification, a task that continues to be challenging. Useful uncertainty information is expected to have two key properties: It should be valid (guaranteeing coverage) and discriminative (more uncertain when the expected risk is high). Moreover, when combined with deep learning (DL) methods, it should be scalable and affect the DL model performance minimally. Most existing Bayesian methods lack frequentist coverage guarantees and usually affect model performance. The few available frequentist methods are rarely discriminative and/or violate coverage guarantees due to unrealistic assumptions. Moreover, many methods are expensive or require substantial modifications to the base neural network. Building upon recent advances in conformal prediction [13, 31] and leveraging the classical idea of kernel regression, we propose Locally Valid and Discriminative predictive intervals (LVD), a simple, efficient, and lightweight method to construct discriminative predictive intervals (PIs) for almost any DL model. With no assumptions on the data distribution, such PIs also offer finite-sample local coverage guarantees (contrasted to the simpler marginal coverage). We empirically verify, using diverse datasets, that besides being the only locally valid method, LVD also exceeds or matches the performance (including coverage rate and prediction accuracy) of existing uncertainty quantification methods, while offering additional benefits in scalability and flexibility.
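To make the flavor of kernel-localized conformal intervals concrete, the sketch below computes a kernel-weighted quantile of calibration residuals around a point prediction, so intervals widen where nearby calibration errors are large. It is a simplified illustration with assumed data and a stand-in predictor, not the authors' LVD implementation.

```python
# Rough sketch of kernel-localized split conformal prediction (illustrative):
# residuals on a held-out calibration set are weighted by a Gaussian kernel in
# feature space, and the weighted quantile sets the interval half-width.
import numpy as np

def gaussian_kernel(x, X, bandwidth=1.0):
    d2 = np.sum((X - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def local_interval(x, predict, X_cal, y_cal, alpha=0.1, bandwidth=1.0):
    """Return a predictive interval for input x centered at predict(x)."""
    residuals = np.abs(y_cal - predict(X_cal))
    w = gaussian_kernel(x, X_cal, bandwidth)
    w = w / w.sum()
    # Weighted (1 - alpha) quantile of calibration residuals.
    order = np.argsort(residuals)
    cumw = np.cumsum(w[order])
    q = residuals[order][np.searchsorted(cumw, 1.0 - alpha)]
    center = predict(x[None, :])[0]
    return center - q, center + q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_cal = rng.normal(size=(500, 3))
    y_cal = X_cal.sum(axis=1) + rng.normal(scale=0.3, size=500)
    predict = lambda X: X.sum(axis=1)        # stand-in for any trained DL model
    lo, hi = local_interval(X_cal[0], predict, X_cal, y_cal)
    print(f"interval: [{lo:.2f}, {hi:.2f}]")
```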
We propose Deep Autoencoding Predictive Components (DAPC) -- a self-supervised representation learning method for sequence data, based on the intuition that useful representations of sequence data should exhibit a simple structure in the latent space. We encourage this latent structure by maximizing an estimate of predictive information of latent feature sequences, which is the mutual information between past and future windows at each time step. In contrast to the mutual information lower bound commonly used by contrastive learning, the estimate of predictive information we adopt is exact under a Gaussian assumption. Additionally, it can be computed without negative sampling. To reduce the degeneracy of the latent space extracted by powerful encoders and keep useful information from the inputs, we regularize predictive information learning with a challenging masked reconstruction loss. We demonstrate that our method recovers the latent space of noisy dynamical systems, extracts predictive features for forecasting tasks, and improves automatic speech recognition when used to pretrain the encoder on large amounts of unlabeled data.
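The sketch below illustrates the Gaussian estimate of predictive information between past and future windows that the abstract refers to, computed from sample covariances as I(past; future) = ½ (log det C_past + log det C_future − log det C_joint). The window length, regularization, and toy sequences are assumptions; this is not the DAPC training objective or code.

```python
# Illustrative sketch: Gaussian estimate of predictive information between
# past and future windows of a latent feature sequence.
import numpy as np

def predictive_information(latents: np.ndarray, window: int = 3) -> float:
    """latents: (T, d) sequence; past/future windows are flattened vectors."""
    T, d = latents.shape
    pasts, futures = [], []
    for t in range(window, T - window + 1):
        pasts.append(latents[t - window:t].ravel())
        futures.append(latents[t:t + window].ravel())
    past, future = np.asarray(pasts), np.asarray(futures)
    joint = np.concatenate([past, future], axis=1)

    def logdet_cov(X):
        cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
        return np.linalg.slogdet(cov)[1]

    return 0.5 * (logdet_cov(past) + logdet_cov(future) - logdet_cov(joint))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # A smooth (predictable) sequence should score higher than white noise.
    t = np.linspace(0, 20, 500)
    smooth = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1) \
        + 0.05 * rng.normal(size=(500, 2))
    noise = rng.normal(size=(500, 2))
    print("smooth:", round(predictive_information(smooth), 3))
    print("noise :", round(predictive_information(noise), 3))
```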
Protein synthesis-dependent, late long-term potentiation (LTP) and depression (LTD) at glutamatergic hippocampal synapses are well characterized examples of long-term synaptic plasticity. Persistent increased activity of the enzyme protein kinase M (PKM) is thought essential for maintaining LTP. Additional spatial and temporal features that govern LTP and LTD induction are embodied in the synaptic tagging and capture (STC) and cross capture hypotheses. Only synapses that have been tagged by a stimulus sufficient for LTP and learning can capture PKM. A model was developed to simulate the dynamics of key molecules required for LTP and LTD. The model concisely represents relationships between tagging, capture, LTD, and LTP maintenance. The model successfully simulated LTP maintained by persistent synaptic PKM, STC, LTD, and cross capture, and makes testable predictions concerning the dynamics of PKM. The maintenance of LTP, and consequently of at least some forms of long-term memory, is predicted to require continual positive feedback in which PKM enhances its own synthesis only at potentiated synapses. This feedback underlies bistability in the activity of PKM. Second, cross capture requires the induction of LTD to induce dendritic PKM synthesis, although this may require tagging of a nearby synapse for LTP. The model also simulates the effects of PKM inhibition, and makes additional predictions for the dynamics of CaM kinases. Experiments testing the above predictions would significantly advance the understanding of memory maintenance.
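As a minimal illustration of how positive feedback in PKM synthesis can produce bistability, the toy ODE below lets PKM stimulate its own synthesis through a Hill term. The equation form and all parameter values are illustrative assumptions, not the published model.

```python
# Toy sketch (not the published model): a single-variable positive-feedback
# loop, d[PKM]/dt = basal + vmax * PKM^n / (K^n + PKM^n) - kdeg * PKM,
# which is bistable for suitable parameters: a low "naive" state and a high
# "potentiated" state that persists once reached.
import numpy as np

def dpkm_dt(p, basal=0.01, vmax=1.0, K=0.5, n=4, kdeg=1.0):
    return basal + vmax * p**n / (K**n + p**n) - kdeg * p

def simulate(p0, dt=0.01, steps=5000):
    p = p0
    for _ in range(steps):
        p += dt * dpkm_dt(p)           # forward Euler integration
    return p

if __name__ == "__main__":
    # Sub-threshold and supra-threshold initial conditions settle into
    # different stable states, mimicking non-maintained vs. maintained LTP.
    print("start 0.1 ->", round(simulate(0.1), 3))
    print("start 0.8 ->", round(simulate(0.8), 3))
```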