Advances in sensing and storage technologies have made a tremendous amount of data available, supporting the phenomenal growth of artificial intelligence (AI) techniques, especially deep learning (DL), across application domains. While these data sources are valuable assets for enabling autonomous decision-making, they also introduce critical privacy and security vulnerabilities. For example, data leakage can be exploited via querying and eavesdropping during the exploratory phase of black-box attacks against DL-based autonomous decision-making systems. To address this issue, we propose a novel data encryption method, called AdvEncryption, that exploits the principle of adversarial attacks. Unlike existing encryption technologies, AdvEncryption is not designed to prevent attackers from accessing the dataset; instead, it aims to trap attackers into distilling misleading features from the data. To achieve this goal, AdvEncryption consists of two essential components: 1) an adversarial attack-inspired encryption mechanism that encrypts the data with stealthy adversarial perturbations, and 2) a decryption mechanism that minimizes the impact of these perturbations on the effectiveness of autonomous decision-making. We evaluate the performance of the proposed AdvEncryption method through case studies covering different scenarios.
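The abstract does not specify the concrete encryption algorithm. As a rough, non-authoritative illustration of the idea, the sketch below treats a key-seeded, FGSM-style adversarial perturbation as the "encryption" step and its subtraction as the "decryption" step; the toy surrogate model, the epsilon budget, the key handling, and the function names `adv_encrypt`/`adv_decrypt` are all illustrative assumptions rather than the authors' actual design.

```python
# Illustrative sketch only: a key-seeded FGSM-style perturbation "encrypts" each
# sample so that an attacker's surrogate model distills misleading features; the
# key holder regenerates/stores the perturbation and subtracts it before
# legitimate training or inference. Not the authors' implementation.
import torch
import torch.nn as nn

def adv_encrypt(x, y, model, key, epsilon=0.03):
    """Add a key-seeded, FGSM-style adversarial perturbation to a batch."""
    torch.manual_seed(key)                         # shared secret seeds the noise term
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    noise = epsilon * torch.rand_like(x)           # key-dependent masking component
    perturbation = epsilon * x_adv.grad.sign() + noise
    # Real data would also need clipping/quantization to the valid value range.
    return (x + perturbation).detach(), perturbation

def adv_decrypt(x_enc, perturbation):
    """Subtract the stored (or regenerated) perturbation before legitimate use."""
    return x_enc - perturbation

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # toy surrogate classifier
    x, y = torch.rand(8, 1, 28, 28), torch.randint(0, 10, (8,))
    x_enc, delta = adv_encrypt(x, y, model, key=42)
    x_dec = adv_decrypt(x_enc, delta)
    print(torch.allclose(x, x_dec, atol=1e-6))     # decryption recovers the original data
```

In this sketch the perturbation itself serves as the decryption material held by the legitimate party; how AdvEncryption actually derives and removes its perturbations is described in the paper, not here.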