With the rapid advancement of technology, various biometric user authentication and identification systems are emerging. Traditional biometric systems such as face, fingerprint, and iris recognition, as well as keystroke dynamics, are prone to cyber-attacks and suffer from various disadvantages. Electroencephalography (EEG) based authentication has shown promise in overcoming these limitations. However, EEG-based authentication is less accurate because the signal varies under different psychological and physiological conditions. Keystroke dynamics-based identification, on the other hand, offers high accuracy but is vulnerable to various spoofing attacks. To overcome these challenges, we propose a novel multimodal biometric system combining EEG and keystroke dynamics. First, a dataset was created by acquiring both keystroke dynamics and EEG signals from 10 users, with 500 trials per user across 10 sessions. Statistical, time-domain, and frequency-domain features were extracted and ranked from the EEG signals, and key features were extracted from the keystroke dynamics. Different classifiers were trained, validated, and tested on both the individual and combined modalities under two classification strategies: personalized and generalized. Results show that very high accuracy can be achieved in both the generalized and personalized cases for the combination of EEG and keystroke dynamics. The identification and authentication accuracies were 99.80% and 99.68% for the Extreme Gradient Boosting (XGBoost) and Random Forest classifiers, respectively, outperforming the individual modalities by a significant margin (around 5 percentage points). We also developed a binary template matching-based algorithm, which achieves 93.64% accuracy while running about six times faster. The proposed method is secure and reliable for any kind of biometric authentication.
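The abstract does not detail its binary template matching algorithm, but the general technique can be sketched as follows: a user's real-valued feature vector is binarized against per-feature thresholds at enrollment, and verification reduces to a fast Hamming-distance comparison. The function names, the zero thresholds, and the 0.90 acceptance threshold below are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np

def binarize(features, thresholds):
    """Binarize a feature vector: 1 where the feature exceeds its
    per-feature threshold (e.g. an enrollment-time median), else 0."""
    return (features > thresholds).astype(np.uint8)

def hamming_similarity(a, b):
    """Fraction of matching bits between two binary templates."""
    return 1.0 - np.mean(a != b)

# Enrollment: store one binary template per user (64 assumed features).
rng = np.random.default_rng(0)
thresholds = np.zeros(64)                 # assumed per-feature thresholds
enrolled = binarize(rng.normal(size=64), thresholds)

# Verification: flip 4 bits to simulate sensor noise, then compare.
probe = enrolled.copy()
probe[:4] ^= 1
score = hamming_similarity(enrolled, probe)   # 1 - 4/64 = 0.9375
accept = score >= 0.90                        # assumed decision threshold
```

Because matching is a bitwise comparison rather than a full classifier inference, this style of matcher is typically much faster than tree-ensemble prediction, which is consistent with the reported ~6x speedup.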
The characteristics and behavior of attacks and infiltrators on computer networks are usually very difficult to characterize and require an expert. In addition, as computer networks advance, the number of attacks and infiltrations is also increasing. Knowledge coming from an expert loses its value over time and must be updated and made available to the system, so the need for an expert is always felt. In machine learning techniques, knowledge is extracted from the data itself, which has diminished the role of the expert. The various methods used to detect intrusions, such as statistical models, secure-system approaches, neural networks, etc., are all weakened by the fact that they use all the features of a packet circulating in the network for intrusion detection. The huge volume of information and the enormous state space are also important issues in intrusion detection. Therefore, the automatic identification of new and suspicious intrusion patterns, using more efficient methods with lower cost and higher performance, is needed more than ever. The purpose of this study is to provide a new method based on intrusion detection systems and their various architectures, aimed at increasing the accuracy of intrusion detection in cloud computing. Keywords: intrusion detection, feature selection, classification algorithm, machine learning, neural network.
The application of machine learning (ML) algorithms is scaling up massively due to rapid digitization and the emergence of new technologies such as the Internet of Things (IoT). In today's digital era, ML algorithms are being applied in areas such as healthcare, IoT, engineering, and finance. However, all these algorithms must be trained in order to predict or solve a particular problem, and there is a high possibility of the training datasets being tampered with to produce biased results. Hence, in this article, we propose a blockchain-based solution to secure the datasets generated from IoT devices for e-health applications. The proposed blockchain-based solution uses a private cloud to tackle the aforementioned issue. For evaluation, we developed a system that dataset owners can use to secure their data.
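The article does not specify its blockchain design, but the core idea of using a hash chain to detect dataset tampering can be sketched minimally as follows: each block stores the SHA-256 digest of a dataset snapshot and the hash of the previous block, so any later modification of the data breaks verification. All names and the sample payload are hypothetical.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 over the block's canonical JSON, excluding its own hash field."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def add_block(chain, dataset_bytes):
    """Append a block recording the dataset's digest, linked to the previous block."""
    block = {
        "index": len(chain),
        "dataset_digest": hashlib.sha256(dataset_bytes).hexdigest(),
        "prev_hash": chain[-1]["hash"] if chain else "0" * 64,
    }
    block["hash"] = block_hash(block)
    chain.append(block)
    return block

def verify(chain, index, dataset_bytes):
    """Check the stored digest against the data and validate all chain links."""
    if chain[index]["dataset_digest"] != hashlib.sha256(dataset_bytes).hexdigest():
        return False
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b):
            return False
        if i > 0 and b["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, b"sample e-health dataset v1")
ok = verify(chain, 0, b"sample e-health dataset v1")        # intact data
tampered = verify(chain, 0, b"sample e-health dataset v2")  # modified data
```

In a deployment like the one described, the chain itself would live on the private cloud while the (potentially large) datasets are stored off-chain, with only their digests committed.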
Biometric systems are among the latest technologies for unique identification. People all over the world prefer to use this unique identification technology for their authentication security. The goal of this research is to evaluate biometric systems based on system reliability and user satisfaction. Because the technology depends entirely on personal data, user satisfaction is a principal factor in the quality and reliability of biometric systems. To keep pace with the digital era, it is extremely important to assess users' concerns about data security, since these systems conduct authentication by analyzing users' personal data. The study shows that users are more satisfied using biometric systems than other security systems. However, hardware failure is a major issue faced by biometric system users. Finally, a matrix is generated to compare the performance of popular biometric systems based on users' opinions. As system reliability and user satisfaction are the focal issues of this research, biometric service providers can use these findings to identify what aspects of their services need improvement. This study can also serve as a useful guide for Bangladeshi users, helping them easily decide which biometric system to choose.
Narrowband internet-of-things (NB-IoT) is a competitive 5G technology for massive machine-type communication scenarios, but it also introduces narrowband interference (NBI) to existing broadband transmission such as Long Term Evolution (LTE) systems in enhanced mobile broadband (eMBB) scenarios. In order to facilitate harmonious and fair coexistence in wireless heterogeneous networks, it is important to eliminate NB-IoT interference to LTE systems. In this paper, a novel sparse machine learning based framework is proposed, and a sparse combinatorial optimization problem is formulated for accurate NBI recovery, which can be efficiently solved using the proposed iterative sparse learning algorithm called sparse cross-entropy minimization (SCEM). To further improve the recovery accuracy and convergence rate, regularization is introduced to the loss function in the enhanced algorithm called regularized SCEM. Moreover, exploiting the spatial correlation of NBI, the framework is extended to multiple-input multiple-output systems. Simulation results demonstrate that the proposed methods are effective in eliminating NB-IoT interference to LTE systems, and significantly outperform the state-of-the-art methods.
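The abstract does not give the SCEM algorithm itself, but the general cross-entropy-method pattern it names can be sketched on a generic sparse recovery problem: maintain a sampling distribution over candidate support indices, sample supports, keep the lowest-residual "elite" supports, and re-fit the distribution toward them. The function `sce_min`, all hyperparameters, and the synthetic test below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sce_min(y, A, K, n_iter=30, n_samples=200, elite_frac=0.1, smooth=0.7, seed=0):
    """Cross-entropy-style sparse support recovery (sketch): learn a
    distribution over indices that concentrates on K-sparse supports S
    minimizing the least-squares residual ||y - A[:, S] x_S||."""
    rng = np.random.default_rng(seed)
    N = A.shape[1]
    p = np.full(N, 1.0 / N)                    # uniform initial distribution
    n_elite = max(1, int(elite_frac * n_samples))
    best_S, best_r = None, np.inf
    for _ in range(n_iter):
        supports, residuals = [], []
        for _ in range(n_samples):
            S = rng.choice(N, size=K, replace=False, p=p)
            x_S, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
            r = np.linalg.norm(y - A[:, S] @ x_S)
            supports.append(S)
            residuals.append(r)
            if r < best_r:
                best_r, best_S = r, np.sort(S)
        elite = np.argsort(residuals)[:n_elite]  # lowest-residual supports
        freq = np.bincount(np.concatenate([supports[i] for i in elite]), minlength=N)
        p = smooth * (freq / freq.sum()) + (1 - smooth) * p  # smoothed update
        p /= p.sum()
    return best_S, best_r

# Synthetic NBI-like test: a 3-sparse vector observed through a random matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 30))
x = np.zeros(30)
x[[3, 11, 27]] = [2.0, -1.5, 1.0]
y = A @ x + 0.01 * rng.normal(size=40)
S_hat, best_r = sce_min(y, A, K=3)
```

The smoothing term keeps every index's probability nonzero, which prevents the sampler from collapsing prematurely; the regularized variant mentioned in the abstract would additionally penalize the loss itself.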
Network intrusion is a well-studied area of cyber security. Current machine learning-based network intrusion detection systems (NIDSs) monitor network data and the patterns within those data, but at the cost of significant privacy issues that may threaten end-user privacy. Therefore, to mitigate risk and preserve a balance between security and privacy, it is imperative to protect user privacy with respect to intrusion data. Moreover, cost is a key driver of a machine learning-based NIDS because such systems are increasingly being deployed on resource-limited edge devices. To solve these issues, in this paper we propose a NIDS called PCC-LSM-NIDS that is composed of a Pearson Correlation Coefficient (PCC) based feature selection algorithm and a Least Square Method (LSM) based privacy-preserving algorithm, achieving low-cost intrusion detection while preserving the privacy of sensitive data. The proposed PCC-LSM-NIDS is tested on the benchmark intrusion dataset UNSW-NB15 using five popular classifiers. The experimental results show that the proposed PCC-LSM-NIDS offers advantages in terms of lower computational time while providing an appropriate degree of privacy protection.
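The paper's exact PCC feature-selection procedure is not given in the abstract, but the standard form of the technique is to rank each feature by the magnitude of its Pearson correlation with the label and keep the top k, which cuts the feature count (and hence classifier cost) before training. The function name and the synthetic data below are illustrative, not the paper's code.

```python
import numpy as np

def pcc_select(X, y, k):
    """Rank features by |Pearson correlation| with the label; return the
    column indices of the top-k features."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc * yc[:, None]).sum(axis=0) / (
        np.sqrt((Xc ** 2).sum(axis=0)) * np.sqrt((yc ** 2).sum()))
    return np.argsort(-np.abs(r))[:k]

# Synthetic check: the label depends only on features 2 and 7, so PCC
# ranking should select exactly those two columns out of ten.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (3 * X[:, 2] - 2 * X[:, 7] + 0.1 * rng.normal(size=500) > 0).astype(float)
selected = pcc_select(X, y, k=2)
```

On a real benchmark such as UNSW-NB15, the same ranking would be computed over the full training split, and only the selected columns would be fed to the downstream classifiers.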