
A Comparative Study of AI-based Intrusion Detection Techniques in Critical Infrastructures

Added by Safa Otoum
Publication date: 2020
Language: English





Volunteer computing, in which the owners of Internet-connected devices (laptops, PCs, smart devices, etc.) volunteer them as storage and computing resources, has become an essential mechanism for resource management in numerous applications. The growth in the volume and variety of data traffic on the Internet raises concerns about the robustness of cyber-physical systems, especially for critical infrastructures. The implementation of an efficient Intrusion Detection System (IDS) for gathering such sensory data has therefore gained vital importance. In this paper, we present a comparative study of Artificial Intelligence (AI)-driven intrusion detection systems for wirelessly connected sensors that track crucial applications. Specifically, we present an in-depth analysis of the use of machine learning, deep learning and reinforcement learning solutions to recognize intrusive behavior in the collected traffic. We evaluate the proposed mechanisms using the KDD'99 real attack dataset in our simulations. The results present the performance metrics of three different IDSs, namely the Adaptively Supervised and Clustered Hybrid IDS (ASCH-IDS), the Restricted Boltzmann Machine-based Clustered IDS (RBC-IDS) and the Q-learning based IDS (QL-IDS), in detecting malicious behavior. We also present the performance of different reinforcement learning techniques such as State-Action-Reward-State-Action learning (SARSA) and Temporal Difference (TD) learning. Through simulations, we show that QL-IDS achieves a 100% detection rate, while SARSA-IDS and TD-IDS perform on the order of 99.5%.
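As a rough illustration of the reinforcement learning component, the sketch below shows a minimal tabular Q-learning loop for a binary intrusion detection decision (labelling a discretized traffic state as normal or attack). It is a hypothetical toy example rather than the authors' implementation: the discretized state features, the +1/-1 reward for correct/incorrect labels, and all hyperparameters are assumptions.

import random
from collections import defaultdict

ACTIONS = ["normal", "attack"]          # possible labels for a traffic record
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # assumed learning rate, discount, exploration rate

# Q-table: maps a discretized traffic state (tuple of features) to a value per action.
Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    """Epsilon-greedy selection over the two labels."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def q_update(state, action, reward, next_state):
    """Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

def train(records):
    """records: stream of (state, true_label) pairs from labelled traffic."""
    for (state, true_label), (next_state, _) in zip(records, records[1:]):
        action = choose_action(state)
        reward = 1.0 if action == true_label else -1.0   # assumed reward scheme
        q_update(state, action, reward, next_state)

# Tiny synthetic run with made-up discretized feature tuples.
records = [((1, 0, 3), "normal"), ((9, 7, 2), "attack")] * 100
train(records)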



Related research

Software vulnerabilities are usually caused by design flaws or implementation errors, which can be exploited to damage the security of a system. At present, the most commonly used method for detecting software vulnerabilities is static analysis. Most of the related techniques work on rules or code similarity (at the source-code level) and rely on manually defined vulnerability features. However, these rules and vulnerability features are difficult to define and design accurately, so static analysis faces many challenges in practical applications. To alleviate this problem, some researchers have proposed using neural networks, which can extract features automatically, to make detection more intelligent. However, there are many types of neural networks, and different data preprocessing methods can have a significant impact on model performance. Choosing a proper neural network and data preprocessing method for a given problem is a great challenge for engineers and researchers. To address this, we conducted extensive experiments to test the performance of the two most typical neural networks (i.e., Bi-LSTM and RVFL) with the two most classical data preprocessing methods (i.e., vector representation and program symbolization) on software vulnerability detection problems and obtained a series of interesting conclusions that can provide valuable guidelines for researchers and engineers. Specifically, we found that 1) RVFL always trains faster than Bi-LSTM, but the Bi-LSTM model achieves higher prediction accuracy; 2) using doc2vec for vector representation gives the model faster training and better generalization than word2vec; and 3) multi-level symbolization helps improve the precision of neural network models.
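As a sketch of the vector-representation step mentioned above, the example below embeds tokenized, symbolized code slices with gensim's Doc2Vec and feeds the vectors to a simple classifier. The toy corpus, labels and hyperparameters are invented placeholders, and a logistic regression stands in for the Bi-LSTM/RVFL models purely to keep the example short.

from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.linear_model import LogisticRegression

# Toy corpus: tokenized, symbolized code slices with vulnerability labels (placeholders).
samples = [
    (["VAR1", "=", "strcpy", "(", "VAR2", ")"], 1),            # vulnerable pattern
    (["VAR1", "=", "strncpy", "(", "VAR2", ",", "VAR3", ")"], 0),  # safer pattern
] * 50

corpus = [TaggedDocument(words=toks, tags=[i]) for i, (toks, _) in enumerate(samples)]

# Train a small doc2vec model that embeds each code slice as a fixed-length vector.
d2v = Doc2Vec(vector_size=32, min_count=1, epochs=40)
d2v.build_vocab(corpus)
d2v.train(corpus, total_examples=d2v.corpus_count, epochs=d2v.epochs)

X = [d2v.infer_vector(toks) for toks, _ in samples]
y = [label for _, label in samples]

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))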
As data traffic volume continues to increase, caching popular content at strategic network locations closer to the end user can not only enhance user experience but also ease the utilization of highly congested links in the network. A key challenge in proactive caching is finding the optimal locations to host the popular content items under various optimization criteria. These problems are combinatorial in nature, so finding optimal and/or near-optimal decisions is computationally expensive. In this paper, a framework is proposed to reduce the computational complexity of the underlying integer mathematical program by first predicting the decision variables related to optimal locations using a deep convolutional neural network (CNN). The CNN is trained offline on optimal solutions and is then used to feed a much smaller optimization problem that is amenable to real-time decision making. Numerical investigations reveal that the proposed approach can provide high-quality decision making in an online manner, a feature that is crucially important for real-world implementations.
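To make the learn-then-optimize idea concrete, the sketch below trains a small convolutional network (in PyTorch, as an assumed framework) to map a grid of content-demand features to per-location caching indicators; confident predictions would then fix most decision variables, leaving a reduced integer program to solve. The shapes, synthetic data and architecture are illustrative assumptions, not the paper's model.

import torch
import torch.nn as nn

# Synthetic data: 256 instances, a 1-channel 8x8 "demand map", and 8x8 binary placement decisions.
X = torch.rand(256, 1, 8, 8)
Y = (X > 0.7).float()          # fake "optimal" decisions, just to exercise the pipeline

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),  # one logit per candidate cache location
)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(50):                       # offline training on precomputed optima
    opt.zero_grad()
    loss = loss_fn(model(X), Y)
    loss.backward()
    opt.step()

# At run time, confidently thresholded predictions fix most decision variables,
# leaving a much smaller integer program for the undecided locations.
probs = torch.sigmoid(model(X[:1]))
fixed = (probs > 0.9) | (probs < 0.1)
print("fraction of variables fixed by the CNN:", fixed.float().mean().item())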
The Internet of Things (IoT) is an Internet-based environment of connected devices and applications. IoT creates an environment where physical devices and sensors are seamlessly combined into information nodes to deliver innovative and smart services that make human life easier and more efficient. The main objective of the IoT device network is to generate data, which the data analysis process converts into useful information; it also provides useful resources to end users. IoT resource management is a key challenge in ensuring the quality of the end-user experience. Many IoT smart devices and technologies, such as sensors, actuators, RFID, UMTS, 3G and GSM, are used to build IoT networks. Cloud computing plays an important role in deploying these networks by providing physical resources as virtualized resources, consisting of memory, computation power, network bandwidth, virtualized systems and device drivers, on a secure, pay-per-use basis. One of the major concerns of a cloud-based IoT environment is resource management, which ensures efficient resource utilization and load balancing, reduces SLA violations, and improves system performance by reducing operational cost and energy consumption. Many researchers have proposed IoT-based resource management techniques. The focus of this paper is to investigate these proposed resource allocation techniques and to determine which parameters must be considered to improve resource allocation in IoT networks. Further, the paper also uncovers the challenges and issues of cloud-based resource allocation for IoT environments.
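This abstract surveys resource-allocation techniques rather than describing a single algorithm. Purely as an illustration of the kind of placement decision involved, the sketch below implements a generic first-fit assignment of IoT task requests to virtualized hosts; the task sizes, host capacities and the first-fit rule are invented for the example and are not drawn from the surveyed papers.

from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    capacity: float            # e.g. available CPU or memory units
    used: float = 0.0
    tasks: list = field(default_factory=list)

def first_fit(tasks, hosts):
    """Place each (task_name, demand) pair on the first host with spare capacity."""
    unplaced = []
    for name, demand in tasks:
        for host in hosts:
            if host.used + demand <= host.capacity:
                host.used += demand
                host.tasks.append(name)
                break
        else:
            unplaced.append(name)   # would trigger scaling or SLA handling in a real system
    return unplaced

hosts = [Host("vm-1", 4.0), Host("vm-2", 4.0)]
tasks = [("sensor-agg", 1.5), ("video-feed", 3.0), ("rfid-scan", 0.5), ("analytics", 2.5)]
print("unplaced tasks:", first_fit(tasks, hosts))
for h in hosts:
    print(h.name, h.used, h.tasks)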
Accurate predictions of reactive mixing are critical for many Earth and environmental science problems. To investigate mixing dynamics over time under different scenarios, a high-fidelity, finite-element-based numerical model is built to solve the fast, irreversible bimolecular reaction-diffusion equations and simulate a range of reactive-mixing scenarios. A total of 2,315 simulations are performed using different sets of model input parameters comprising various spatial scales of vortex structures in the velocity field, time-scales associated with velocity oscillations, the perturbation parameter for the vortex-based velocity, anisotropic dispersion contrast, and molecular diffusion. Outputs comprise concentration profiles of the reactants and products. The inputs and outputs of these simulations are concatenated into feature and label matrices, respectively, to train 20 different machine learning (ML) emulators that approximate system behavior. The 20 ML emulators, based on linear methods, Bayesian methods, ensemble learning methods, and the multilayer perceptron (MLP), are compared to assess these models. The ML emulators are specifically trained to classify the state of mixing and to predict three quantities of interest (QoIs) characterizing species production, decay, and degree of mixing. Linear classifiers and regressors fail to reproduce the QoIs; however, ensemble methods (classifiers and regressors) and the MLP accurately classify the state of reactive mixing and predict the QoIs. Among the ensemble methods, random forest and decision-tree-based AdaBoost faithfully predict the QoIs. At run time, the trained ML emulators are $\approx 10^5$ times faster than the high-fidelity numerical simulations. The speed and accuracy of the ensemble and MLP models facilitate uncertainty quantification, which usually requires thousands of model runs, to estimate the uncertainty bounds on the QoIs.
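As a sketch of the emulation workflow described above, the example below fits a random-forest regressor to input/output pairs and uses it as a fast surrogate. The feature matrix and the quantity of interest are synthetic stand-ins, since the actual simulation inputs and QoIs are not reproduced here.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder feature matrix: one row per simulation, columns = model input parameters
# (vortex scale, oscillation time-scale, perturbation, dispersion contrast, diffusion).
X = rng.uniform(size=(2315, 5))
# Placeholder QoI: a smooth function of the inputs standing in for, e.g., species decay.
y = np.exp(-X[:, 0]) * np.sin(3 * X[:, 1]) + 0.1 * X[:, 4]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

emulator = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out R^2:", emulator.score(X_te, y_te))

# Once trained, emulator.predict() replaces a full reaction-diffusion simulation,
# which is what makes large uncertainty-quantification ensembles tractable.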
Cybersecurity is a domain where the data distribution is constantly changing as attackers explore newer patterns to attack cyber infrastructure. The intrusion detection system is one of the important layers of cyber safety in today's world. Machine-learning-based network intrusion detection systems have started showing effective results in recent years, and deep learning models further improve their detection rates. However, the more accurate the model, the greater its complexity and the lower its interpretability. Deep neural networks are complex and hard to interpret, which makes it difficult to use them in production because the reasons behind their decisions are unknown. In this paper, we use a deep neural network for network intrusion detection and also propose an explainable AI framework to add transparency at every stage of the machine learning pipeline. This is done by leveraging explainable AI algorithms, which aim to make ML models less of a black box by providing explanations of why a prediction is made. Explanations give us measurable factors describing which features influence the prediction of a cyberattack and to what degree. These explanations are generated with SHAP, LIME, the Contrastive Explanations Method, ProtoDash, and Boolean Decision Rules via Column Generation. We apply these approaches to the NSL-KDD intrusion detection dataset and demonstrate the results.
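The sketch below shows the general shape of the explanation step with SHAP: train a classifier on labelled traffic features and attribute each prediction to its input features. A random forest and randomly generated features stand in for the paper's deep neural network and the NSL-KDD data, so the numbers are meaningless; only the workflow is illustrative.

import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "count", "srv_count"]  # placeholder subset

# Placeholder traffic records and binary labels (0 = normal, 1 = attack).
X = rng.uniform(size=(500, len(feature_names)))
y = (X[:, 1] + X[:, 3] > 1.0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features, giving per-feature
# contributions towards the "attack" decision for every record.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

# Depending on the shap version, shap_values is a list of per-class arrays or a
# single 3-D array; either way it holds one contribution per (record, feature, class).
print("shap output shape:", np.shape(shap_values))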
