
IASelect: Finding Best-fit Agent Practices in Industrial CPS Using Graph Databases

Posted by Roopak Sinha
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





The ongoing fourth Industrial Revolution depends mainly on robust Industrial Cyber-Physical Systems (ICPS). ICPS include computing (software and hardware) capabilities to control complex physical processes in distributed industrial environments. Industrial agents, originating from the well-established multi-agent systems field, provide complex and cooperative control mechanisms at the software level, allowing us to develop larger and more feature-rich ICPS. The IEEE P2660.1 standardisation project, Recommended Practices on Industrial Agents: Integration of Software Agents and Low Level Automation Functions, focuses on identifying industrial agent practices that can benefit the ICPS of the future. A key problem within this project is identifying the best-fit industrial agent practices for a given ICPS. This paper reports on the design and development of a tool to address this challenge. The tool, called IASelect, is built using graph databases and provides the ability to flexibly and visually query a growing repository of industrial agent practices relevant to ICPS. IASelect includes a front-end that allows industry practitioners to interactively identify best-fit practices without having to write manual queries.
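
The abstract does not expose IASelect's data model or query interface, so the following is only a minimal sketch of how a graph-database lookup of best-fit practices might look, using the Neo4j Python driver and a Cypher query. The node labels (Practice, Requirement), the ADDRESSES relationship, the coverage-count ranking, and the connection details are all illustrative assumptions, not taken from the paper.

```python
from neo4j import GraphDatabase

# Hypothetical schema (not from the paper):
#   (:Practice)-[:ADDRESSES]->(:Requirement)
URI = "bolt://localhost:7687"          # illustrative connection details
AUTH = ("neo4j", "password")

FIND_BEST_FIT = """
MATCH (p:Practice)-[:ADDRESSES]->(r:Requirement)
WHERE r.name IN $requirements
WITH p, count(DISTINCT r) AS covered
RETURN p.name AS practice, covered
ORDER BY covered DESC
LIMIT $top_k
"""

def best_fit_practices(requirements, top_k=5):
    """Rank stored industrial-agent practices by how many of the given
    ICPS requirements they address (a deliberately simple ranking)."""
    with GraphDatabase.driver(URI, auth=AUTH) as driver:
        records, _, _ = driver.execute_query(
            FIND_BEST_FIT, requirements=requirements, top_k=top_k
        )
        return [(rec["practice"], rec["covered"]) for rec in records]

if __name__ == "__main__":
    print(best_fit_practices(["real-time control", "legacy PLC integration"]))
```

A front-end like the one described in the paper would generate queries of this kind behind the scenes, so practitioners never have to write them by hand.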




Read also

Much of biomedical and healthcare data is encoded in discrete, symbolic form such as text and medical codes. There is a wealth of expert-curated biomedical domain knowledge stored in knowledge bases and ontologies, but the lack of reliable methods for learning knowledge representations has limited their usefulness in machine learning applications. While text-based representation learning has improved significantly in recent years through advances in natural language processing, attempts to learn biomedical concept embeddings have so far been lacking. A recent family of models called knowledge graph embeddings has shown promising results on general-domain knowledge graphs, and we explore their capabilities in the biomedical domain. We train several state-of-the-art knowledge graph embedding models on the SNOMED-CT knowledge graph, provide a benchmark with comparisons to existing methods and an in-depth discussion of best practices, and make a case for the importance of leveraging the multi-relational nature of knowledge graphs for learning biomedical knowledge representations. The embeddings, code, and materials will be made available to the community.
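
The abstract names the model family but not a specific model, so as a concrete point of reference, here is a minimal NumPy sketch of the scoring function of TransE, one of the standard knowledge graph embedding models. The toy dimensionality and the comments about SNOMED-CT concepts are purely illustrative and do not reproduce the paper's setup.

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score for a triple (head, relation, tail):
    the closer h + r is to t, the higher (less negative) the score."""
    return -np.linalg.norm(h + r - t, ord=norm)

# Toy 4-dimensional embeddings; trained models typically use hundreds of
# dimensions learned from the full set of SNOMED-CT triples.
rng = np.random.default_rng(0)
h = rng.normal(size=4)   # e.g. a disorder concept
r = rng.normal(size=4)   # e.g. a "finding site" relation
t = rng.normal(size=4)   # e.g. a body-structure concept

print(transe_score(h, r, t))
```
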
Recently, a comprehensive Bayesian analysis was performed to simultaneously extract the values of a number of hydrodynamic parameters necessary for compatibility with a limited set of experimental data from the LHC. In this work, this best-fit model is tested against newly measured experimental flow results not included in the original work, namely the principal components of the two-particle correlation matrix in transverse momentum. The results from simulations show a good numerical agreement with data obtained by the CMS Collaboration.
In this paper, we explore using runtime verification to design safe cyber-physical systems (CPS). We build upon the Simplex Architecture, where control authority may switch from an unverified and potentially unsafe advanced controller to a backup baseline controller in order to maintain system safety. New to our approach, we remove the requirement that the baseline controller is statically verified. This is important, as there are many types of powerful control techniques -- model-predictive control, rapidly-exploring random trees, and neural network controllers -- that often work well in practice but are difficult to statically prove correct, and therefore could not previously be used as baseline controllers. We prove that, through more extensive runtime checks, such an approach can still guarantee safety. We call this approach the Black-Box Simplex Architecture, as both high-level controllers are treated as black boxes. We present case studies where model-predictive control provides safety for multi-robot coordination, and neural networks provably prevent collisions for groups of F-16 aircraft, despite occasionally outputting unsafe actions.
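
The abstract describes the switching idea only at a high level, so the sketch below is a schematic of a Simplex-style decision cycle with runtime checks rather than the authors' exact algorithm: the advanced controller's proposed action is accepted only if, from the predicted next state, a backup plan can still be certified safe at runtime; otherwise the previously stored safe plan is executed. Every function name here is a placeholder.

```python
def simplex_step(state, advanced_ctrl, simulate, certify_backup, backup_plan):
    """One decision cycle of a Simplex-style switch with runtime checks.

    Placeholders: advanced_ctrl(state) is the unverified high-performance
    controller, simulate(state, action) predicts the next state, and
    certify_backup(state) asks the (also unverified) baseline controller for
    a backup plan, returning it only if a runtime check shows it is safe.
    Returns the action to execute and the backup plan to carry forward.
    """
    proposed = advanced_ctrl(state)
    predicted = simulate(state, proposed)

    # Accept the advanced action only while a certified-safe backup plan
    # still exists from the predicted next state.
    new_backup = certify_backup(predicted)
    if new_backup:
        return proposed, new_backup

    # Otherwise fall back to the stored safe plan: execute its first action
    # and keep the remainder as the new backup.
    return backup_plan[0], backup_plan[1:]
```
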
For large-scale distributed systems, it is crucial to efficiently diagnose the root causes of incidents to maintain high system availability. The recent development of microservice architecture brings three major challenges (i.e., operation, system scale, and monitoring complexities) to root cause analysis (RCA) in industrial settings. To tackle these challenges, in this paper we present Groot, an event-graph-based approach for RCA. Groot constructs a real-time causality graph based on events that summarize various types of metrics, logs, and activities in the system under analysis. Moreover, to incorporate domain knowledge from site reliability engineering (SRE) engineers, Groot can be customized with user-defined events and domain-specific rules. Currently, Groot supports RCA among 5,000 real production services and is actively used by the SRE team in a global e-commerce system serving more than 185 million active buyers per year. Over 15 months, we collected a data set containing the labeled root causes of 952 real production incidents for evaluation. The evaluation results show that Groot achieves 95% top-3 accuracy and 78% top-1 accuracy. To share our experience in deploying and adopting RCA in industrial settings, we conducted a survey showing that users of Groot find it helpful and easy to use. We also share the lessons learned from deploying and adopting Groot to solve RCA problems in production environments.
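
Groot's concrete event types, rules, and ranking are not given in this abstract, so the snippet below only illustrates the general event-graph idea with networkx: observed events become nodes, user-defined rules add causal edges, and candidate root causes are the events reached from the triggering alert that have no further known cause. The event names and rules are invented for illustration.

```python
import networkx as nx

# Invented events and causal rules; a real deployment derives these from
# metrics, logs, and activities across thousands of services.
rules = [
    # (symptom, cause): the symptom event can be caused by the cause event
    ("checkout_latency_alert", "payment_error_spike"),
    ("payment_error_spike", "db_connection_pool_exhausted"),
]

graph = nx.DiGraph()
for symptom, cause in rules:
    graph.add_edge(symptom, cause)   # edges point from symptom toward cause

def candidate_root_causes(graph, alert):
    """Follow causal edges from the triggering alert; events with no further
    known cause are returned as root-cause candidates (a naive ranking)."""
    reachable = nx.descendants(graph, alert) | {alert}
    return [event for event in reachable if graph.out_degree(event) == 0]

print(candidate_root_causes(graph, "checkout_latency_alert"))
# -> ['db_connection_pool_exhausted']
```
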
We present a novel approach called Optimized Directed Roadmap Graph (ODRM). It is a method to build a directed roadmap graph that allows for collision avoidance in multi-robot navigation. This is a highly relevant problem, for example for industrial autonomous guided vehicles. The core idea of ODRM is that a directed roadmap can encode inherent properties of the environment which are useful when agents have to avoid each other in that same environment. Like Probabilistic Roadmaps (PRMs), ODRM's first step is generating samples from C-space. In a second step, ODRM optimizes vertex positions and edge directions by Stochastic Gradient Descent (SGD). This leads to emergent properties such as edges parallel to walls and patterns resembling two-lane streets or roundabouts. Agents can then navigate on this graph by searching their paths independently and resolving any agent-agent collisions that occur at run-time. Using graphs generated by ODRM, significantly fewer agent-agent collisions occur than on a non-optimized graph. We evaluate our roadmap with both centralized and decentralized planners. Our experiments show that with ODRM even a simple centralized planner can solve problems with high numbers of agents that other multi-agent planners cannot solve. Additionally, we use simulated robots with decentralized planners and online collision avoidance to show that agents are considerably faster on our roadmap than on standard grid maps.