
Safe Handover in Mixed-Initiative Control for Cyber-Physical Systems

Added by Ernie Chang
Publication date: 2020
Language: English





For mixed-initiative control between cyber-physical systems (CPS) and their users, it is still an open question how machines can safely hand over control to humans. In this work, we propose a concept for technological support that uses formal methods from AI -- description logic (DL) and automated planning -- to predict more reliably when a handover is necessary, and to increase the advance notice for handovers by planning ahead of runtime. We combine this with methods from human-computer interaction (HCI) and natural language generation (NLG) to develop solutions for safe and smooth handovers, and we illustrate the approach with an example autonomous driving scenario. A study design is proposed that assesses qualitative feedback, cognitive load, and trust in automation.
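To make the planning-ahead idea concrete, here is a minimal sketch. It is not the paper's system: the automated planner is replaced by a toy lookahead over route segments annotated with whether the machine can handle them safely, and all names (Segment, time_until_handover, handover_needed) are hypothetical.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Segment:
        length_s: float    # time to traverse this segment
        automatable: bool  # True if the machine can handle it safely

    def time_until_handover(route: List[Segment]) -> float:
        """Seconds of guaranteed safe autonomous driving before the first
        segment the machine cannot handle (infinity if there is none).
        Stands in for the automated planner described above."""
        t = 0.0
        for seg in route:
            if not seg.automatable:
                return t
            t += seg.length_s
        return float("inf")

    def handover_needed(route: List[Segment], notice_s: float = 10.0) -> bool:
        """Announce a handover once the remaining safe horizon drops
        below the desired advance notice."""
        return time_until_handover(route) < notice_s

    # Example: a non-automatable zone 8 s ahead forces an announcement now,
    # giving the driver the full lead time instead of a last-second alarm.
    route = [Segment(5.0, True), Segment(3.0, True), Segment(20.0, False)]
    assert handover_needed(route, notice_s=10.0)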


Related Research

This paper presents a cognitive-behavioral driver mood-repair platform in intelligent transportation cyber-physical systems (IT-CPS) for road safety. In particular, we propose a driving safety platform for distracted drivers, named drive safe, in IT-CPS. The proposed platform recognizes the drivers' distracting activities as well as their emotions for mood repair. Further, we develop a prototype of the proposed drive safe platform to establish a proof of concept (PoC) for road safety in IT-CPS. In the developed driving safety platform, we employ five AI and statistical models for cognitive-behavioral mining of the vehicle driver to ensure safe driving. Specifically, a capsule network (CN), maximum likelihood (ML) estimation, a convolutional neural network (CNN), the Apriori algorithm, and a Bayesian network (BN) are deployed for driver activity recognition, environmental feature extraction, mood recognition, sequential pattern mining, and content recommendation for affective mood repair of the driver, respectively. In addition, we develop a communication module to interact with the systems in IT-CPS asynchronously. Thus, the developed drive safe PoC can guide vehicle drivers when they are distracted from driving by cognitive-behavioral factors. Finally, we performed a qualitative evaluation to measure the usability and effectiveness of the developed drive safe platform. We observe a p-value of 0.0041 (i.e., < 0.05) in the ANOVA test. Moreover, the confidence interval analysis shows a significant prevalence value of around 0.93 at the 95% confidence level. These statistical results indicate high reliability in terms of the drivers' safety and mental state.
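As a rough illustration of how the five models are chained, the sketch below wires them together as plain functions. The model internals (capsule network, ML features, CNN, Apriori, Bayesian network) are stubbed with placeholders, and every name here is a hypothetical stand-in rather than the drive safe prototype's API.

    def recognize_activity(frame):   # capsule network in the paper
        return "texting"

    def extract_environment(frame):  # maximum-likelihood feature extraction
        return {"traffic": "dense"}

    def recognize_mood(frame):       # CNN-based emotion recognition
        return "stressed"

    def mine_patterns(history):      # Apriori over past driving sessions
        return [("texting", "stressed")]

    def recommend_content(activity, mood, env, patterns):  # Bayesian network
        """Pick a mood-repair action when a risky pattern recurs."""
        if (activity, mood) in patterns:
            return "play calming playlist"
        return None

    def drive_safe_step(frame, history):
        activity = recognize_activity(frame)
        env = extract_environment(frame)
        mood = recognize_mood(frame)
        patterns = mine_patterns(history)
        return recommend_content(activity, mood, env, patterns)

    print(drive_safe_step(frame=None, history=[]))  # -> play calming playlist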
We consider malicious attacks on the actuators and sensors of a feedback system, which can be modeled as additive, possibly unbounded, disturbances at the digital (cyber) part of the feedback loop. We precisely characterize the role of the unstable poles and zeros of the system in the ability to detect stealthy attacks, in the context of a sampled-data implementation of the controller in feedback with the continuous (physical) plant. We show that, if there is a single sensor that is guaranteed to be secure and the plant is observable from that sensor, then there exists a class of multirate sampled-data controllers that ensure that all attacks remain detectable. These dual-rate controllers sample the output faster than the zero-order-hold rate that operates on the control input, and as such they can even provide better nominal performance than single-rate controllers, at the price of a higher sampling rate for the continuous output.
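A toy numerical sketch makes the dual-rate intuition tangible. The scalar plant and additive actuator attack below are assumptions for illustration, not the paper's construction: the control input is held at period H (zero-order hold) while the secure output is sampled at the faster period H/N, so the attack produces a nonzero residual against the model's fast-rate prediction.

    import math

    A, B = 0.2, 1.0    # continuous-time scalar plant: x' = A*x + B*u
    H, N = 1.0, 4      # input held for period H, output sampled at H/N

    def step(x, u, dt):
        """Exact discretization of the scalar plant over dt with constant u."""
        ad = math.exp(A * dt)
        bd = (ad - 1.0) / A * B
        return ad * x + bd * u

    def residuals(x0, u_cmd, attack):
        """Fast-rate residuals over one hold period; the attack is an
        additive actuator disturbance, so the plant sees u_cmd + attack
        while the detector's model predicts with u_cmd alone."""
        x_true, x_model = x0, x0
        res = []
        for _ in range(N):
            x_true = step(x_true, u_cmd + attack, H / N)
            x_model = step(x_model, u_cmd, H / N)
            res.append(x_true - x_model)  # secure sensor vs. prediction
        return res

    print(residuals(x0=1.0, u_cmd=-0.5, attack=0.3))  # nonzero -> detectable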
We introduce a novel learning-based approach to synthesize safe and robust controllers for autonomous cyber-physical systems and, at the same time, to generate challenging tests. This procedure combines formal methods for model verification with generative adversarial networks. The method learns two neural networks: the first aims at generating troubling scenarios for the controller, while the second aims at enforcing the safety constraints. We test the proposed method on a variety of case studies.
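Schematically, the two-network setup can be sketched as an adversarial training loop on a toy one-dimensional stabilization task; the dynamics, architectures, and losses below are illustrative assumptions, not the paper's method.

    import torch
    import torch.nn as nn

    # Controller maps state -> action; generator maps noise -> initial state.
    ctrl = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
    gen = nn.Sequential(nn.Linear(4, 16), nn.Tanh(), nn.Linear(16, 1))
    opt_c = torch.optim.Adam(ctrl.parameters(), lr=1e-3)
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

    def rollout(x0, steps=20):
        """Toy dynamics x_{t+1} = x_t + 0.1*u; return worst |x| per scenario."""
        x, worst = x0, x0.abs()
        for _ in range(steps):
            x = x + 0.1 * ctrl(x)
            worst = torch.maximum(worst, x.abs())
        return worst

    for _ in range(200):
        # Generator step: propose scenarios that maximize the worst excursion.
        x0 = 3.0 * torch.tanh(gen(torch.randn(64, 4)))
        opt_g.zero_grad()
        (-rollout(x0).mean()).backward()
        opt_g.step()
        # Controller step: minimize the excursion on fresh adversarial scenarios.
        x0 = (3.0 * torch.tanh(gen(torch.randn(64, 4)))).detach()
        opt_c.zero_grad()
        rollout(x0).mean().backward()
        opt_c.step()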
Modeling complex systems is a time-consuming, difficult and fragmented task, often requiring the analyst to work with disparate data, a variety of models, and expert knowledge across a diverse set of domains. Applying a user-centered design process, we developed a mixed-initiative visual analytics approach, a subset of the Causemos platform, that allows analysts to rapidly assemble qualitative causal models of complex socio-natural systems. Our approach facilitates the construction, exploration, and curation of qualitative models, bringing together data across disparate domains. Referencing a recent user evaluation, we demonstrate our approach's ability to interactively enrich users' mental models and accelerate qualitative model building.
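As a generic illustration (not the Causemos data model), a qualitative causal model of this kind is often just a directed graph whose edges carry only a sign; all names below are hypothetical.

    from collections import defaultdict

    edges = defaultdict(list)  # node -> [(neighbor, sign)]

    def add_link(cause, effect, sign):  # sign: +1 promotes, -1 inhibits
        edges[cause].append((effect, sign))

    add_link("rainfall", "crop yield", +1)
    add_link("conflict", "crop yield", -1)
    add_link("crop yield", "food insecurity", -1)

    def qualitative_effect(source, target, sign=+1, seen=()):
        """Propagate edge signs along all acyclic paths from source to target."""
        if source == target:
            return {sign}
        out = set()
        for nxt, s in edges[source]:
            if nxt not in seen:
                out |= qualitative_effect(nxt, target, sign * s, seen + (source,))
        return out

    # Two negative links compose to a positive effect: conflict lowers
    # crop yield, which in turn raises food insecurity.
    print(qualitative_effect("conflict", "food insecurity"))  # {1}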
High-performance but unverified controllers, e.g., artificial intelligence-based (a.k.a. AI-based) controllers, are widely employed in cyber-physical systems (CPSs) to accomplish complex control missions. However, guaranteeing the safety and reliability of CPSs with such controllers is currently very challenging, which is of vital importance in many real-life safety-critical applications. To cope with this difficulty, we propose in this work a Safe-visor architecture for sandboxing unverified controllers in CPSs operating in noisy environments (a.k.a. stochastic CPSs). The proposed architecture contains a history-based supervisor, which checks inputs from the unverified controller and makes a compromise between the functionality and safety of the system, and a safety advisor that provides a fallback when the unverified controller endangers the safety of the system. Both the history-based supervisor and the safety advisor are designed based on an approximate probabilistic relation between the original system and its finite abstraction. By employing this architecture, we provide formal probabilistic guarantees on preserving the safety specifications expressed by accepting languages of deterministic finite automata (DFA). Meanwhile, the unverified controllers can still be employed in the control loop even though they are not reliable. We demonstrate the effectiveness of our proposed results by applying them to two (physical) case studies.
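The sandboxing pattern itself can be sketched in a few lines. The one-step risk estimate below is a crude stand-in for the paper's approximate probabilistic relation and DFA specifications, and every name is hypothetical.

    def safety_advisor(x):
        """Verified fallback: steer the toy 1-D state back toward 0."""
        return -0.5 * x

    def risk(x, u, bound=1.0):
        """Stand-in for the probabilistic check on the abstraction:
        one-step prediction of leaving the safe set [-bound, bound]."""
        return abs(x + 0.1 * u) / bound

    def supervise(x, u_unverified, threshold=0.9):
        """Accept the unverified controller's input while the estimated
        risk is tolerable; otherwise apply the safety advisor's fallback."""
        if risk(x, u_unverified) <= threshold:
            return u_unverified
        return safety_advisor(x)

    x = 0.8
    u = supervise(x, u_unverified=5.0)  # risky input -> fallback applied
    print(u)                            # -0.4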
