Internet of Things (IoT) aims at providing connectivity between every computing entity. However, this facilitation also leads to more cyber threats, which may exploit the presence of a vulnerability over a period of time. One such vulnerability is the zero-day threat, which may lead to zero-day attacks that are detrimental to an enterprise as well as to network security. In this article, a study of zero-day threats for IoT networks is presented, along with a context graph-based framework that provides a strategy for mitigating these attacks. The proposed approach uses a distributed diagnosis system for classifying the context at the central service provider as well as at the local user site. Once a potential zero-day attack is identified, a critical data sharing protocol is used to transmit alert messages and reestablish the trust between the network entities and the IoT devices. The results show that the distributed approach is capable of mitigating zero-day threats efficiently, with 33% and 21% improvements in cost of operation and communication overheads, respectively, in comparison with the centralized diagnosis system.
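As a rough illustration of the distributed flow this abstract describes, the sketch below (my own construction, not the paper's framework) has a local site flag an anomalous device context and the central provider revoke trust for the device upon receiving the alert. The feature name `syscalls_per_s`, the baseline, and the threshold are all hypothetical.

```python
def local_diagnosis(context, baseline, threshold=3):
    """Local-site check: flag a context whose feature deviates too far
    from the site's learned baseline (hypothetical feature and threshold)."""
    return abs(context["syscalls_per_s"] - baseline) > threshold

class CentralProvider:
    """Central service provider keeping a trust list of registered devices."""
    def __init__(self):
        self.trusted = set()

    def register(self, device):
        self.trusted.add(device)

    def receive_alert(self, device):
        # Quarantine the device until trust is re-established.
        self.trusted.discard(device)

provider = CentralProvider()
provider.register("sensor-7")

ctx = {"device": "sensor-7", "syscalls_per_s": 42}
if local_diagnosis(ctx, baseline=5):
    provider.receive_alert(ctx["device"])

print("sensor-7" in provider.trusted)  # → False
```

The point of the sketch is only the division of labor: detection happens locally, while trust revocation is coordinated centrally via the shared alert.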
The popularity of Internet of Things (IoT) devices makes it increasingly important to be able to fingerprint them, for example in order to detect whether there are misbehaving or even malicious IoT devices in one's network. The aim of this paper is to provide a systematic categorisation of machine learning augmented techniques that can be used for fingerprinting IoT devices. This can serve as a baseline for comparing various IoT fingerprinting mechanisms, so that network administrators can choose one or more mechanisms that are appropriate for monitoring and maintaining their network. We carried out an extensive literature review of existing papers on fingerprinting IoT devices -- paying close attention to those with machine learning features. This is followed by an extraction of important and comparable features among the mechanisms outlined in those papers. As a result, we came up with a key set of terminologies that are relevant both in the fingerprinting context and in the IoT domain. This enabled us to construct a framework called IDWork, which can be used for categorising existing IoT fingerprinting mechanisms in a way that facilitates a coherent and fair comparison of these mechanisms. We found that the majority of the IoT fingerprinting mechanisms take a passive approach -- mainly through network sniffing -- instead of being intrusive and interactive with the device of interest. Additionally, a significant number of the surveyed mechanisms employ both static and dynamic approaches, in order to benefit from complementary features that can be more robust against certain attacks such as spoofing and replay attacks.
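The passive, sniffing-based approach that most surveyed mechanisms take can be illustrated with a toy sketch (my own construction, not any specific surveyed mechanism): per-device traffic statistics serve as a fingerprint, and a nearest-centroid match classifies new traffic. The device names and the two features (mean packet size, mean inter-arrival time) are hypothetical stand-ins for the richer feature sets real mechanisms extract.

```python
from collections import defaultdict
import math

# Hypothetical sniffed packet records: (device_id, packet_size, inter_arrival_s)
packets = [
    ("cam-1", 120, 0.50), ("cam-1", 118, 0.52), ("cam-1", 122, 0.49),
    ("bulb-1", 60, 5.0), ("bulb-1", 64, 4.8), ("bulb-1", 58, 5.2),
]

def fingerprint(records):
    """Aggregate per-device traffic statistics (mean size, mean gap)."""
    sums = defaultdict(lambda: [0.0, 0.0, 0])
    for dev, size, gap in records:
        s = sums[dev]
        s[0] += size; s[1] += gap; s[2] += 1
    return {dev: (s[0] / s[2], s[1] / s[2]) for dev, s in sums.items()}

def classify(sample, profiles):
    """Nearest-centroid match of a new traffic sample to known profiles."""
    return min(profiles, key=lambda d: math.dist(sample, profiles[d]))

profiles = fingerprint(packets)
print(classify((119, 0.51), profiles))  # → cam-1
```

Because nothing is ever sent to the device, this mirrors the non-intrusive character of the passive mechanisms the survey found to dominate.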
Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients (edge devices). FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to share only local training parameter updates with the federated server. However, recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks that intrude on the client's privacy regarding its training data. In this paper, we present a principled framework for evaluating and comparing different forms of client privacy leakage attacks. We first provide formal and experimental analysis to show how adversaries can reconstruct the private local training data by simply analyzing the shared parameter update from local training (e.g., the local gradient or weight update vector). We then analyze how different hyperparameter configurations in federated learning and different settings of the attack algorithm may impact both attack effectiveness and attack cost. Our framework also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols. Our experiments also include some preliminary mitigation strategies to highlight the importance of providing a systematic attack evaluation framework towards an in-depth understanding of the various forms of client privacy leakage threats in federated learning and developing theoretical foundations for attack mitigation.
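A minimal toy example of why shared updates leak data (my own construction, far simpler than the reconstruction attacks the paper analyzes): for a single linear neuron trained on one sample with squared loss, the shared gradient is a scalar multiple of the private input, so the input's direction is recovered exactly from the update alone.

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def local_gradient(w, x, y):
    """Gradient of the squared loss (w.x - y)^2 w.r.t. w --
    the update the client shares: 2*(w.x - y) * x."""
    err = dot(w, x) - y
    return [2 * err * xi for xi in x]

w = [0.1, -0.2, 0.3]           # current global model (known to the server)
x_private = [4.0, 1.0, -2.0]   # the client's private training sample
g = local_gradient(w, x_private, y=1.0)

# The adversary only sees g, yet g is proportional to x_private, so
# normalizing by the first component recovers the input's direction.
direction = [gi / g[0] for gi in g]
print(direction)  # matches x_private scaled so its first entry is 1
```

Real attacks instead solve an optimization problem matching gradients of dummy data against the observed update, but the underlying leakage principle is the same: the update is a function of the private data that an adversary can invert.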
Adversarial examples are carefully perturbed images that fool image classifiers. We propose a geometric framework to generate adversarial examples in one of the most challenging black-box settings, where the adversary can only issue a small number of queries, each returning the top-$1$ label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm to generate query-efficient black-box perturbations with small $\ell_p$ norms for $p \ge 1$, which is confirmed via experimental evaluations on state-of-the-art natural image classifiers. Moreover, for $p=2$, we theoretically show that our algorithm actually converges to the minimal $\ell_2$-perturbation when the curvature of the decision boundary is bounded. We also obtain the optimal distribution of the queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack algorithm performs better than state-of-the-art algorithms, as it generates smaller perturbations with a reduced number of queries.
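The hard-label setting can be sketched with a toy bisection toward the decision boundary (a simplified stand-in of my own, not the paper's curvature-based algorithm): using only top-$1$ label queries, the attacker shrinks an initially large perturbation toward the smallest one that still flips the label. The linear "classifier" here is a hypothetical oracle standing in for a deep network.

```python
# Toy hard-label oracle: the attacker never sees W, only the returned label.
W = [0.6, -0.8]
def query(x):
    return int(W[0] * x[0] + W[1] * x[1] > 1.0)

def boundary_search(x, x_adv, steps=40):
    """Bisect along the segment between a clean point and an adversarial
    point, using only label queries, to land just past the boundary."""
    lo, hi = 0.0, 1.0          # interpolation coefficients: lo keeps x's label
    y0 = query(x)
    for _ in range(steps):
        mid = (lo + hi) / 2
        xm = [(1 - mid) * a + mid * b for a, b in zip(x, x_adv)]
        if query(xm) == y0:
            lo = mid           # still the original label: move outward
        else:
            hi = mid           # already adversarial: tighten the perturbation
    return [(1 - hi) * a + hi * b for a, b in zip(x, x_adv)]

x = [0.0, 0.0]        # clean point, label 0
x_adv = [4.0, 0.0]    # crude adversarial start, label 1
x_b = boundary_search(x, x_adv)
print(query(x_b))     # → 1: adversarial, but now barely past the boundary
```

The paper's algorithm goes further by estimating the boundary's normal from queries and exploiting its low curvature, which is what yields query-efficient minimal $\ell_p$ perturbations rather than a perturbation constrained to one fixed direction.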
With an enormous range of applications, the Internet of Things (IoT) has attracted industry and academia from everywhere. IoT facilitates operations through ubiquitous connectivity by providing Internet access to all devices with computing capabilities. With the evolution of wireless infrastructure, the focus has shifted from simple IoT to smart, connected and mobile IoT (M-IoT) devices and platforms, which can enable low-complexity, low-cost and efficient computing through sensors, machines, and even crowdsourcing. All these devices can be grouped under the common term M-IoT. Even though the positive impact on applications has been tremendous, security, privacy and trust are still major concerns for such networks, and insufficient enforcement of these requirements introduces non-negligible threats to M-IoT devices and platforms. Thus, it is important to understand the range of solutions available for providing a secure, privacy-compliant, and trustworthy mechanism for M-IoT. There is no direct survey available which focuses on security, privacy, trust, secure protocols, physical layer security and handover protections in M-IoT. This paper covers these requisites and presents comparisons of state-of-the-art solutions for IoT that are applicable to security, privacy, and trust in smart and connected M-IoT networks. Apart from these, various challenges, applications, advantages, technologies, standards, open issues, and a roadmap for security, privacy and trust are also discussed in this paper.
Internet-of-Things (IoT) devices that are limited in power and processing are susceptible to physical layer (PHY) spoofing (signal exploitation) attacks owing to their inability to implement a full-blown protocol stack for security. The overwhelming adoption of multicarrier techniques such as orthogonal frequency division multiplexing (OFDM) at the PHY layer makes IoT devices further vulnerable to PHY spoofing attacks. These attacks, which aim at injecting bogus/spurious data into the receiver, involve inferring transmission parameters and finding PHY characteristics of the transmitted signals so as to spoof the received signal. Non-contiguous (NC) OFDM systems have been argued to have low probability of exploitation (LPE) characteristics against classic attacks based on cyclostationary analysis, and the corresponding PHY has been deemed secure. However, with the advent of machine learning (ML) algorithms, adversaries can devise data-driven attacks to compromise such systems. It is in this vein that the PHY spoofing performance of adversaries equipped with supervised and unsupervised ML tools is investigated in this paper. The supervised ML approach is based on deep neural networks (DNNs), while the unsupervised one employs variational autoencoders (VAEs). In particular, VAEs are shown to be capable of learning representations of NC-OFDM signals related to their PHY characteristics, such as frequency pattern and modulation scheme, which are useful for PHY spoofing. In addition, a new metric based on the disentanglement principle is proposed to measure the quality of such learned representations. Simulation results demonstrate that the performance of the spoofing adversaries highly depends on the subcarrier allocation patterns. In particular, it is shown that utilizing a random subcarrier occupancy pattern secures NC-OFDM systems against ML-based attacks.
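One intuition behind the closing result can be shown with a toy sketch (my own construction, not the paper's ML attack): averaging per-subcarrier energy across intercepted frames exposes a fixed occupancy pattern, whereas a pattern re-drawn at random each frame averages out. The subcarrier count, pattern, and energy model are all hypothetical.

```python
import random

random.seed(0)
N = 16                         # number of subcarriers (hypothetical)
fixed_pattern = {1, 4, 7, 11}  # hypothetical fixed NC-OFDM occupancy

def frame_energy(pattern):
    """Per-subcarrier energy of one frame: signal plus noise on occupied
    bins, noise power only elsewhere."""
    return [1.0 + random.gauss(0, 0.1) if k in pattern
            else random.gauss(0, 0.1) ** 2
            for k in range(N)]

def average_energy(draw_pattern, frames=500):
    """Average intercepted energy per subcarrier across many frames."""
    acc = [0.0] * N
    for _ in range(frames):
        for k, e in enumerate(frame_energy(draw_pattern())):
            acc[k] += e
    return [a / frames for a in acc]

fixed_avg = average_energy(lambda: fixed_pattern)
rand_avg = average_energy(lambda: set(random.sample(range(N), 4)))

est = {k for k, e in enumerate(fixed_avg) if e > 0.5}
print(est == fixed_pattern)            # → True: fixed occupancy is exposed
print(any(e > 0.5 for e in rand_avg))  # → False: random occupancy washes out
```

The paper's adversaries learn far richer representations (via DNNs and VAEs) than this energy average, but the same effect drives the result: a stable occupancy pattern is a learnable PHY feature, and randomizing it removes that feature.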