
Groot: An Event-graph-based Approach for Root Cause Analysis in Industrial Settings

Submitted by Hanzhang Wang
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





For large-scale distributed systems, it is crucial to efficiently diagnose the root causes of incidents to maintain high system availability. The recent development of microservice architecture brings three major challenges (i.e., operation, system scale, and monitoring complexities) to root cause analysis (RCA) in industrial settings. To tackle these challenges, in this paper, we present Groot, an event-graph-based approach for RCA. Groot constructs a real-time causality graph based on events that summarize various types of metrics, logs, and activities in the system under analysis. Moreover, to incorporate domain knowledge from site reliability engineering (SRE) engineers, Groot can be customized with user-defined events and domain-specific rules. Currently, Groot supports RCA among 5,000 real production services and is actively used by the SRE team in a global e-commerce system serving more than 185 million active buyers per year. Over 15 months, we collect a data set containing labeled root causes of 952 real production incidents for evaluation. The evaluation results show that Groot is able to achieve 95% top-3 accuracy and 78% top-1 accuracy. To share our experience in deploying and adopting RCA in industrial settings, we conduct a survey showing that users of Groot find it helpful and easy to use. We also share the lessons learned from deploying and adopting Groot to solve RCA problems in production environments.
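The abstract describes the approach only at a high level. As a hypothetical illustration (not eBay's actual implementation), the following Python sketch shows how an event causality graph could be assembled from detected events using domain-specific rules and then traversed to rank candidate root causes. All class names, the rule interface, and the scoring heuristic are assumptions made for this sketch.

```python
from collections import defaultdict

class Event:
    """A detected anomaly event on a service, e.g. a latency spike or an error burst."""
    def __init__(self, service, kind, timestamp, severity=1.0):
        self.service = service
        self.kind = kind              # e.g. "LatencySpike", "ErrorBurst", "Deployment"
        self.timestamp = timestamp
        self.severity = severity

class CausalityGraph:
    """Directed graph over events; an edge a -> b means 'a may have caused b'."""
    def __init__(self):
        self.events = []
        self.edges = defaultdict(list)    # cause event -> list of effect events

    def add_event(self, event):
        self.events.append(event)

    def link(self, cause, effect):
        self.edges[cause].append(effect)

def apply_rules(graph, service_dependencies, rules):
    """Connect events using domain-specific rules, e.g. 'an ErrorBurst on an
    upstream service can cause a LatencySpike on a downstream caller'."""
    for a in graph.events:
        for b in graph.events:
            if a is not b and any(rule(a, b, service_dependencies) for rule in rules):
                graph.link(a, b)

def rank_root_causes(graph):
    """Score each event by how many other events it transitively explains,
    weighted by its own severity, and return events in descending order."""
    def reachable(event, seen):
        for nxt in graph.edges[event]:
            if nxt not in seen:
                seen.add(nxt)
                reachable(nxt, seen)
        return seen

    scored = [(len(reachable(e, set())) * e.severity, e) for e in graph.events]
    return [e for _, e in sorted(scored, key=lambda pair: -pair[0])]
```

In such a design, the events would be summarized from metrics, logs, and deployment activities, and the rules would encode the SRE domain knowledge mentioned in the abstract; the ranked list would correspond to the top-k candidates evaluated in the paper.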


Read also

We describe a formal approach to identify root causes of outliers observed in $n$ variables $X_1,\dots,X_n$ in a scenario where the causal relation between the variables is a known directed acyclic graph (DAG). To this end, we first introduce a systematic way to define outlier scores. Further, we introduce the concept of conditional outlier score, which measures whether a value of some variable is unexpected *given the value of its parents* in the DAG, if one were to assume that the causal structure and the corresponding conditional distributions are also valid for the anomaly. Finally, we quantify to what extent the high outlier score of some target variable can be attributed to outliers of its ancestors. This quantification is defined via Shapley values from cooperative game theory.
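To make the attribution idea concrete, here is a minimal sketch assuming a linear Gaussian structural causal model over a known DAG. The paper's scores are defined information-theoretically and it also introduces conditional outlier scores given the parents; the z-score-based score and the brute-force Shapley computation below are simplifying assumptions for illustration only.

```python
import itertools
import math
import numpy as np
from scipy.stats import norm

# Minimal sketch assuming a linear Gaussian SCM  X_j = sum_k w[j][k] * X_k + N_j
# over a known DAG given in topological order. All names and the exact score
# definition are illustrative assumptions, not the paper's formulation.

def simulate(weights, noise, order):
    """Propagate noise terms through the linear SCM to obtain node values."""
    x = {}
    for j in order:
        x[j] = sum(w * x[k] for k, w in weights.get(j, {}).items()) + noise[j]
    return x

def outlier_score(value, sigma):
    """-log of the two-sided Gaussian tail probability: large when 'value'
    is far in the tails of a zero-mean Gaussian with standard deviation 'sigma'."""
    tail = 2 * norm.sf(abs(value) / sigma)
    return -np.log(max(tail, 1e-300))

def shapley_attribution(weights, order, anomalous_noise, target, target_sigma):
    """Attribute the target's outlier score to each node's noise term via exact
    (brute-force) Shapley values: a coalition 'switches on' the anomalous noise
    of its members and keeps the remaining noise terms at their typical value 0."""
    nodes = list(order)
    n = len(nodes)

    def coalition_score(subset):
        noise = {j: (anomalous_noise[j] if j in subset else 0.0) for j in nodes}
        return outlier_score(simulate(weights, noise, order)[target], target_sigma)

    contrib = {j: 0.0 for j in nodes}
    for j in nodes:
        others = [k for k in nodes if k != j]
        for r in range(len(others) + 1):
            for subset in itertools.combinations(others, r):
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                contrib[j] += weight * (coalition_score(set(subset) | {j})
                                        - coalition_score(set(subset)))
    return contrib

# Toy chain X1 -> X2 -> X3 where the anomaly originates in X1's noise term.
weights = {"X2": {"X1": 1.0}, "X3": {"X2": 1.0}}
order = ["X1", "X2", "X3"]
print(shapley_attribution(weights, order,
                          {"X1": 5.0, "X2": 0.1, "X3": 0.0},
                          target="X3", target_sigma=math.sqrt(3)))
```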
Dewei Liu, Chuan He, Xin Peng (2021)
Availability issues of industrial microservice systems (e.g., drop of successfully placed orders and processed transactions) directly affect the running of the business. These issues are usually caused by various types of service anomalies which propagate along service dependencies. Accurate and efficient root cause localization is thus a critical challenge for large-scale industrial microservice systems. Existing approaches use service dependency graph based analysis techniques to automatically locate root causes. However, these approaches are limited due to their inaccurate detection of service anomalies and inefficient traversing of the service dependency graph. In this paper, we propose a highly efficient root cause localization approach for availability issues of microservice systems, called MicroHECL. Based on a dynamically constructed service call graph, MicroHECL analyzes possible anomaly propagation chains, and ranks candidate root causes based on correlation analysis. We combine machine learning and statistical methods and design customized models for the detection of different types of service anomalies (i.e., performance, reliability, traffic). To improve efficiency, we adopt a pruning strategy to eliminate irrelevant service calls in anomaly propagation chain analysis. Experimental studies show that MicroHECL significantly outperforms two state-of-the-art baseline approaches in terms of both accuracy and efficiency. MicroHECL has been used in Alibaba and achieves a top-3 hit ratio of 68% with root cause localization time reduced from 30 minutes to 5 minutes.
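As a rough sketch of this style of analysis (not MicroHECL's actual code), the snippet below walks a service call graph from the entry service, extends anomaly propagation chains, prunes weakly correlated branches, and ranks chain endpoints by their correlation with the entry anomaly. The detector interface, the single Pearson-correlation criterion, and the threshold are assumptions; the paper uses customized detectors per anomaly type (performance, reliability, traffic).

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two metric series, 0 if either is constant."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def extend_chains(call_graph, metrics, entry, is_anomalous, prune_threshold=0.5):
    """Walk the service call graph from the entry service, extending anomaly
    propagation chains along callees, and prune extensions whose metric series
    correlates weakly with the current chain head."""
    chains, frontier, visited = [], [[entry]], {entry}
    while frontier:
        chain = frontier.pop()
        head = chain[-1]
        extended = False
        for callee in call_graph.get(head, []):
            if callee in visited or not is_anomalous(metrics[callee]):
                continue
            # Pruning: skip callees whose metrics correlate weakly with the head.
            if abs(pearson(metrics[head], metrics[callee])) < prune_threshold:
                continue
            visited.add(callee)
            frontier.append(chain + [callee])
            extended = True
        if not extended:
            chains.append(chain)          # chain endpoint = candidate root cause
    return chains

def rank_candidates(chains, metrics, entry):
    """Rank chain endpoints by how strongly they correlate with the entry anomaly."""
    candidates = {chain[-1] for chain in chains if len(chain) > 1}
    return sorted(candidates,
                  key=lambda s: abs(pearson(metrics[entry], metrics[s])),
                  reverse=True)
```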
Context: Safety analysis is a predominant activity in developing safety-critical systems. It is a highly cooperative task among multiple functional departments due to increasingly sophisticated safety-critical systems and close-knit development processes. Communication occurs pervasively. Motivation: Effective communication channels among multiple functional departments influence safety analysis, quality, as well as safe product delivery. However, the use of communication channels during safety analysis is sometimes arbitrary and poses challenges. Objective: Investigate the existing communication channels, their usage frequencies, their purposes, and their challenges during safety analysis in industry. Method: Multiple case study of experts (survey: 39, interview: 21) in safety-critical companies, including software developers, quality engineers, and functional safety managers. Direct observations and documentation review were also conducted. Results: Popular communication channels during safety analysis include formal meetings, project coordination tools, documentation, and telephone. Email, personal discussion, training, internal communication software, and boards are also in use. Training involving safety analysis happens 1-4 times per year, while the other aforementioned communication channels are used between 1-4 times per day and 1-4 times per month. We summarise 28 purposes for these communication channels. Communication happens mostly for the purpose of clarifying safety requirements, fixing temporary problems, conflicts, and obstacles, and sharing safety knowledge. The top challenges are reported. Conclusion: During safety analysis, to use communication channels effectively and avoid challenges, a clear purpose of communication should be established at the beginning. Deriving countermeasures for the top 10 challenges is a potential next step.
Yamine Ait Ameur (2018)
This paper reports on the results of the French ANR IMPEX research project dealing with making explicit domain knowledge in design models. Ontologies are formalised as theories with sets, axioms, theorems and reasoning rules. They are integrated into design models through an annotation mechanism. Event-B has been chosen as the ground formal modelling technique for all our developments. In this paper, we particularly describe how ontologies are formalised as Event-B theories.
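The annotation mechanism can be pictured with a very rough sketch, written here in Python rather than Event-B for brevity: an ontology is treated as a named theory (concepts plus axioms), and design-model elements are linked to its concepts through explicit annotations. All names below are illustrative assumptions; the actual work formalises ontologies as Event-B theories.

```python
from dataclasses import dataclass, field

@dataclass
class Ontology:
    """An ontology viewed as a theory: named concepts plus axioms over annotations."""
    name: str
    concepts: set
    axioms: list = field(default_factory=list)   # predicates over the annotation map

@dataclass
class DesignModel:
    """A design model whose elements can be annotated with ontology concepts."""
    elements: set
    annotations: dict = field(default_factory=dict)   # element -> (ontology, concept)

    def annotate(self, element, ontology, concept):
        assert element in self.elements, "unknown model element"
        assert concept in ontology.concepts, "unknown ontology concept"
        self.annotations[element] = (ontology.name, concept)

    def check_axioms(self, ontology):
        """Check every ontology axiom against the current annotations."""
        return all(axiom(self.annotations) for axiom in ontology.axioms)
```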
Software Product Lines (SPLs) are families of related software products developed from a common set of artifacts. Most existing analysis tools can be applied to a single product at a time, but not to an entire SPL. Some tools have been redesigned/re-implemented to support the kind of variability exhibited in SPLs, but this usually takes a lot of effort, and is error-prone. Declarative analyses written in languages like Datalog have been collectively lifted to SPLs in prior work, which makes the process of applying an existing declarative analysis to a product line more straightforward. In this paper, we take an existing declarative analysis (behaviour alteration) written in the Grok declarative language, port it to Datalog, and apply it to a set of automotive software product lines from General Motors. We discuss the design of the analysis pipeline used in this process, present its scalability results, and provide a means to visualize the analysis results for a subset of products filtered by feature expression. We also reflect on some of the lessons learned throughout this project.
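One way to picture the "lifting" of a declarative analysis to a product line is sketched below: every fact carries a presence condition, and derived facts conjoin the conditions of the facts they are derived from, so the result can later be filtered by a feature selection. The example uses Python and restricts presence conditions to simple conjunctions of features; the actual work operates on Datalog with full feature expressions.

```python
from itertools import product

def lifted_transitive_calls(calls):
    """calls: set of (caller, callee, presence_condition) tuples, where the
    presence condition is a frozenset of required features. Returns the
    transitive call facts with combined presence conditions (fixpoint)."""
    facts = set(calls)
    changed = True
    while changed:
        changed = False
        for (a, b, pc1), (c, d, pc2) in product(list(facts), list(facts)):
            if b == c:
                new = (a, d, pc1 | pc2)   # conjoin presence conditions
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

def facts_for_product(facts, selected_features):
    """Filter lifted facts down to a single product's feature selection."""
    return {(a, b) for a, b, pc in facts if pc <= selected_features}

# Example: a call edge that only exists when feature 'F_DIAG' is selected.
calls = {("UI", "Logger", frozenset()),
         ("Logger", "Diagnostics", frozenset({"F_DIAG"}))}
lifted = lifted_transitive_calls(calls)
print(facts_for_product(lifted, frozenset({"F_DIAG"})))   # includes ("UI", "Diagnostics")
print(facts_for_product(lifted, frozenset()))             # does not
```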