In this paper we present methods for attacking and defending $k$-gram statistical analysis techniques that are used, for example, in network traffic analysis and covert channel detection. The main new result is our demonstration of how to use a behavior's or process's $k$-order statistics to build a stochastic process that has those same $k$-order stationary statistics but possesses different, deliberately designed, $(k+1)$-order statistics if desired. Such a model realizes a complexification of the process or behavior, which a defender can use to monitor whether an attacker is shaping the behavior. By deliberately introducing designed $(k+1)$-order behaviors, the defender can check whether those behaviors are present in the data. We also develop constructs for source codes that respect the $k$-order statistics of a process while encoding covert information. One fundamental consequence of these results is that certain types of behavior analysis techniques come down to an \emph{arms race}, in the sense that the advantage goes to the party that applies more computing resources to the problem.
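As a minimal illustration of this kind of construction (not the paper's own algorithm), the sketch below fits an order-$(k-1)$ Markov chain to a symbol trace: sampling from it reproduces the empirical $k$-gram statistics in the stationary regime (up to sampling noise), while the $(k+1)$-gram statistics are left to the modeller and will generally differ from the original trace, which is precisely what a defender can test for. The alphabet, trace, and function names are illustrative.

```python
from collections import Counter, defaultdict
import random

def kgram_counts(seq, k):
    """Count the length-k subsequences (k-grams) of a symbol sequence."""
    return Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))

def fit_markov(seq, k):
    """Fit an order-(k-1) Markov chain P(next | preceding k-1 symbols).

    Sampling from it reproduces the empirical k-gram statistics, while its
    (k+1)-order statistics are left free to be shaped by the modeller."""
    cond = defaultdict(Counter)
    for gram, n in kgram_counts(seq, k).items():
        cond[gram[:-1]][gram[-1]] += n
    return {ctx: (list(c), [v / sum(c.values()) for v in c.values()])
            for ctx, c in cond.items()}

def sample(model, seed, length, rng):
    """Generate a sequence whose k-order statistics follow the fitted model."""
    out, context = list(seed), len(seed)
    for _ in range(length - context):
        symbols, probs = model[tuple(out[-context:])]
        out.append(rng.choices(symbols, probs)[0])
    return out

rng = random.Random(1)
# Toy behaviour trace (e.g. packet-size classes) with strong 3rd-order structure.
trace = [c if rng.random() > 0.1 else rng.choice("abc") for c in "aabacb" * 1000]

k = 2
model = fit_markov(trace, k)
synthetic = sample(model, trace[:k - 1], len(trace), rng)

# k-gram statistics agree closely; (k+1)-gram statistics generally do not,
# which is exactly what a defender can check for.
print(kgram_counts(trace, k).most_common(3), kgram_counts(synthetic, k).most_common(3))
print(kgram_counts(trace, k + 1).most_common(3), kgram_counts(synthetic, k + 1).most_common(3))
```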
The purpose of a covert communication system is to carry out communication without being perceived by third parties. To achieve fully covert communication, two aspects of security need to be considered. The first is to conceal the existence of the information, that is, to ensure the content security of the information; the second is to conceal the behavior of transmitting the information, that is, to ensure the behavioral security of the communication. However, most existing information hiding models are based on the Prisoners' Model, which considers only the content security of carriers while ignoring the behavioral security of the sender and receiver. We argue that this is incomplete for the security of covert communication. In this paper, we propose a new covert communication framework that considers both content security and behavioral security in the process of information transmission. In the experimental part, we analyze a large amount of collected real Twitter data to illustrate the security risks that covert communication may face if we consider only content security and neglect behavioral security. Finally, we design a toy experiment to show that, beyond existing content steganography, the proposed framework also allows covert information to be carried by users' behavior, i.e., behavioral steganography. We hope this new framework will help researchers to design better covert communication systems.
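A hypothetical toy example in the spirit of the abstract's behavioral steganography experiment (not the paper's actual scheme): bits are carried by which ordinary-looking action an account performs, rather than by any hidden content in the posts themselves. The action names and mapping below are purely illustrative.

```python
# Hypothetical behavioural steganography: the covert bit determines WHICH
# benign action is performed; the post content itself carries nothing hidden.
ACTIONS = {"0": "like", "1": "retweet"}
INVERSE = {v: k for k, v in ACTIONS.items()}

def embed(bits):
    """Translate a bit string into a sequence of ordinary-looking actions."""
    return [ACTIONS[b] for b in bits]

def extract(actions):
    """Recover the bit string from the observed action sequence."""
    return "".join(INVERSE[a] for a in actions)

covert_bits = "101101"
behaviour = embed(covert_bits)      # ['retweet', 'like', 'retweet', ...]
assert extract(behaviour) == covert_bits
print(behaviour)
```

Note that such a naive encoding would itself distort the account's behavioral statistics, which is exactly the kind of risk the proposed framework asks designers to account for.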
CSI (Channel State Information) of WiFi systems contains the environment channel response between the transmitter and the receiver, so the people/objects and their movement in between can be sensed. To obtain CSI, the receiver performs channel estimation based on the pre-known training field of the transmitted WiFi signal. CSI-related technology is useful in many cases, but it also raises privacy and security concerns. In this paper, we open-source a CSI fuzzer to enhance the privacy and security of WiFi CSI applications. It is built and embedded into the transmitter of openwifi, an open source full-stack WiFi chip design, to prevent unauthorized sensing without sacrificing WiFi link performance. The CSI fuzzer imposes an artificial channel response on the signal before it is transmitted, so the CSI seen by the receiver reflects the actual channel response combined with the artificial response. Only an authorized receiver that knows the artificial response can recover the actual channel response and perform CSI sensing. Another potential application of the CSI fuzzer is covert channels based on a set of pre-defined artificial response patterns. Our work resolves the pain point of implementing the anti-sensing idea on commercial off-the-shelf WiFi devices.
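A minimal numpy sketch of the principle described above (per-subcarrier multiplication of an artificial response), not openwifi's actual implementation; the variable names and subcarrier count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subcarriers = 64

# True over-the-air channel response (what a CSI sensor wants to learn).
h_actual = rng.normal(size=n_subcarriers) + 1j * rng.normal(size=n_subcarriers)

# Artificial per-subcarrier response imposed by the transmitter's CSI fuzzer.
# Only authorized receivers know this pattern.
h_artificial = np.exp(1j * rng.uniform(0, 2 * np.pi, n_subcarriers))

# What any receiver estimates from the training field: the combined response.
csi_observed = h_actual * h_artificial

# An unauthorized sensor sees only csi_observed and cannot separate the factors.
# An authorized receiver removes the known artificial response:
h_recovered = csi_observed / h_artificial
assert np.allclose(h_recovered, h_actual)
```

The covert-channel use mentioned in the abstract would follow the same idea: the transmitter selects among pre-defined artificial patterns, and an authorized receiver infers which pattern was applied.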
Offline Reinforcement Learning (RL) aims to extract near-optimal policies from imperfect offline data without additional environment interactions. Extracting policies from diverse offline datasets has the potential to expand the range of applicability of RL by making the training process safer, faster, and more streamlined. We investigate how to improve the performance of offline RL algorithms, their robustness to the quality of offline data, and their generalization capabilities. To this end, we introduce Offline Model-based RL with Adaptive Behavioral Priors (MABE). Our algorithm is based on the finding that dynamics models, which support within-domain generalization, and behavioral priors, which support cross-domain generalization, are complementary. When combined, they substantially improve the performance and generalization of offline RL policies. On the widely studied D4RL offline RL benchmark, we find that MABE achieves higher average performance compared to prior model-free and model-based algorithms. In experiments that require cross-domain generalization, we find that MABE outperforms prior methods. Our website is available at https://sites.google.com/berkeley.edu/mabe .
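As a loose illustration of how a behavioral prior can regularize a model-based objective (this is a schematic combination, not MABE's exact formulation), the sketch below penalizes a Gaussian policy's divergence from a Gaussian behavioral prior, with a weight that one could imagine adapting to dataset quality. All names and numbers are hypothetical.

```python
import numpy as np

def gaussian_kl(mu_p, sigma_p, mu_q, sigma_q):
    """KL divergence KL(p || q) between two univariate Gaussians."""
    return (np.log(sigma_q / sigma_p)
            + (sigma_p**2 + (mu_p - mu_q)**2) / (2 * sigma_q**2) - 0.5)

def combined_objective(model_return, policy, prior, kl_weight):
    """Schematic objective: model-based return regularized toward a
    behavioral prior; kl_weight stands in for an adaptive coefficient."""
    mu_p, sigma_p = policy
    mu_q, sigma_q = prior
    return model_return - kl_weight * gaussian_kl(mu_p, sigma_p, mu_q, sigma_q)

# A policy that drifts far from the prior pays a penalty that shrinks
# as the weight is lowered (e.g. for higher-quality datasets).
print(combined_objective(5.0, policy=(0.8, 0.2), prior=(0.0, 0.3), kl_weight=1.0))
print(combined_objective(5.0, policy=(0.8, 0.2), prior=(0.0, 0.3), kl_weight=0.1))
```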
Graph Neural Networks (GNNs) have boosted the performance of many graph-related tasks. Despite this great success, recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead a GNN's predictions by modifying graphs. On the other hand, GNN explanation methods such as GNNExplainer provide a better understanding of a trained GNN model by identifying a small subgraph and the features that are most influential for its prediction. In this paper, we first perform empirical studies to validate that GNNExplainer can act as an inspection tool and has the potential to detect adversarial perturbations on graphs. This finding motivates us to investigate a new problem: can a graph neural network and its explanations be jointly attacked by maliciously modifying graphs? It is challenging to answer this question since the goals of adversarial attacks and of bypassing the GNNExplainer essentially contradict each other. In this work, we give an affirmative answer to this question by proposing a novel attack framework (GEAttack), which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities. Extensive experiments with two explainers (GNNExplainer and PGExplainer) on various real-world datasets demonstrate the effectiveness of the proposed method.
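One way to read "attack both a GNN and its explanations" is as a joint loss that pushes the model toward the attacker's target while keeping the explainer's attention away from the perturbed edges. The sketch below is a generic combined objective of that flavor, not GEAttack's formulation; the tensors and function names are placeholders.

```python
import torch
import torch.nn.functional as F

def joint_attack_loss(model_logits, target_label, edge_importance, perturbed_edges, alpha=1.0):
    """Schematic joint objective: (i) steer the model toward the attacker's
    target label, (ii) penalize explainer importance on the edges the
    attacker modified, so the explanation does not expose the attack."""
    attack_term = F.cross_entropy(model_logits, target_label)
    evasion_term = edge_importance[perturbed_edges].mean()
    return attack_term + alpha * evasion_term

# Placeholder inputs standing in for a GNN's output and an explainer's mask.
logits = torch.tensor([[2.0, 0.5]])
target = torch.tensor([1])
edge_importance = torch.rand(10)          # hypothetical per-edge importance scores
perturbed_edges = torch.tensor([2, 7])    # indices of attacker-modified edges
print(joint_attack_loss(logits, target, edge_importance, perturbed_edges))
```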
Extracting the interaction rules of biological agents from movement sequences poses challenges in various domains. Granger causality is a practical framework for analyzing interactions from observed time-series data; however, this framework ignores the structure of the generative process in animal behaviors, which may lead to interpretational problems and sometimes erroneous assessments of causality. In this paper, we propose a new framework for learning Granger causality from multi-animal trajectories via augmented theory-based behavioral models with interpretable data-driven models. We adopt an approach for augmenting incomplete multi-agent behavioral models, described by time-varying dynamical systems, with neural networks. For efficient and interpretable learning, our model leverages theory-based architectures separating navigation and motion processes, and theory-guided regularization for reliable behavioral modeling. This can provide interpretable signs of Granger-causal effects over time, i.e., when specific others cause approach or separation. In experiments using synthetic datasets, our method achieved better performance than various baselines. We then analyzed multi-animal datasets of mice, flies, birds, and bats, which verified our method and yielded novel biological insights.
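For background on what this framework generalizes, the sketch below runs a classical linear Granger-causality check on toy trajectories: does adding one animal's past improve prediction of another's position? The paper itself replaces this linear form with theory-based behavioral models augmented by neural networks; the toy data and names here are illustrative.

```python
import numpy as np

def residual_var(target, regressors):
    """Least-squares fit; return the residual variance."""
    coef, *_ = np.linalg.lstsq(regressors, target, rcond=None)
    return np.var(target - regressors @ coef)

def granger_score(x, y, lag=2):
    """Classical linear Granger score from y to x: log ratio of prediction-
    error variances without vs. with y's past among the regressors."""
    rows = range(lag, len(x))
    target = x[lag:]
    own_past = np.array([x[t - lag:t] for t in rows])
    joint_past = np.array([np.r_[x[t - lag:t], y[t - lag:t]] for t in rows])
    return np.log(residual_var(target, own_past) / residual_var(target, joint_past))

# Toy 1-D trajectories: animal B follows animal A with a one-step delay plus noise.
rng = np.random.default_rng(0)
a = np.cumsum(rng.normal(size=500))
b = np.r_[0.0, a[:-1]] + 0.1 * rng.normal(size=500)
print(granger_score(b, a))   # clearly positive: A's past helps predict B
print(granger_score(a, b))   # near zero: B's past adds little about A
```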