Smart speakers and voice-based virtual assistants are core components of the IoT paradigm. Unfortunately, they are vulnerable to privacy threats that exploit machine learning to analyze the encrypted traffic they generate. To cope with such threats, deep adversarial learning can be used to build black-box countermeasures that alter the network traffic (e.g., via packet padding) and thus its statistical fingerprint. This letter showcases the inadequacy of such countermeasures against machine learning attacks through a dedicated experimental campaign on a real network dataset. The results indicate that a major re-engineering is needed to guarantee suitable protection of commercially available smart speakers.
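As a rough illustration of the kind of countermeasure discussed above, the sketch below pads each packet payload with random bytes up to the next multiple of a fixed bucket size, blurring the per-packet length statistics a traffic-analysis classifier would exploit. This is a minimal, assumption-laden example rather than the letter's actual defense; the names BUCKET and pad_packet are hypothetical.

```python
# Minimal sketch of a length-padding countermeasure (hypothetical names).
# Each padded packet length becomes a multiple of BUCKET, reducing the
# information leaked by per-packet size statistics.
import os

BUCKET = 512  # assumed padding granularity in bytes


def pad_packet(payload: bytes) -> bytes:
    """Pad a payload with random bytes up to the next multiple of BUCKET."""
    target = max(BUCKET, -(-len(payload) // BUCKET) * BUCKET)  # ceiling to bucket
    return payload + os.urandom(target - len(payload))


if __name__ == "__main__":
    for size in (37, 512, 900):
        print(size, "->", len(pad_packet(os.urandom(size))))
```

Real deployments would also have to hide packet timing and direction, which is precisely the gap the letter's experiments probe.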
Smart home IoT systems often rely on cloud-based servers for communication between components. Although there exists a body of work on IoT security, most of it focuses on securing clients (i.e., IoT devices). However, cloud servers can also be compromised…
Currently, Android malware detection is mostly performed on the server side to counter the increasing volume of malware. Powerful computing resources provide more exhaustive protection for app markets than detection maintained by a single user. However, ap…
Numerous previous works have studied deep learning algorithms applied in the context of side-channel attacks, demonstrating their ability to perform successful key recoveries. These studies show that modern cryptographic devices are increasingly t…
A smart home connects tens of home devices to the Internet, where an IoT cloud runs various home automation applications. While bringing unprecedented convenience and accessibility, it also introduces various security hazards to users. Prior research…
The vulnerability of deep neural networks (DNNs) to adversarial examples is well documented. Under the strong white-box threat model, where attackers have full access to DNN internals, recent work has produced continual advancements in defenses, often…