Finite-length codes are learned for the Gaussian wiretap channel in an end-to-end manner, assuming that the communicating parties are equipped with deep neural networks (DNNs) and communicate through a binary phase-shift keying (BPSK) modulation scheme. The goal is to find codes via DNNs that allow a transmitter-receiver pair to communicate reliably and securely in the presence of an adversary aiming to decode the secret messages. Following information-theoretic secrecy principles, security is evaluated in terms of mutual information using a deep learning tool called MINE (mutual information neural estimation). System performance is evaluated for different DNN architectures at the transmitter, designed based on existing secure coding schemes. Numerical results demonstrate that the legitimate parties can indeed establish secure transmission in this setting, as the learned codes achieve points almost on the boundary of the equivocation region.
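The MINE estimator referenced above trains a small critic network to maximize the Donsker-Varadhan lower bound $I(X;Z) \geq \mathbb{E}_P[T] - \log \mathbb{E}_{P_X \times P_Z}[e^T]$. A minimal sketch follows; the critic architecture, step count, and learning rate are illustrative assumptions, not the paper's settings:

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

class StatisticsNetwork(nn.Module):
    """Small critic T(x, z) for the Donsker-Varadhan bound (assumed architecture)."""
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mine_estimate(x, z, steps=300, lr=1e-2):
    """Estimate I(X; Z) via max_T E_P[T] - log E_{P_X x P_Z}[exp T]."""
    n = x.shape[0]
    T = StatisticsNetwork(dim=x.shape[1])
    opt = torch.optim.Adam(T.parameters(), lr=lr)
    for _ in range(steps):
        z_marg = z[torch.randperm(n)]   # shuffle to sample the product marginal
        joint = T(x, z).mean()
        marg = torch.logsumexp(T(x, z_marg), dim=0) - math.log(n)
        loss = -(joint - marg)          # ascend the DV lower bound
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (joint - marg).item()
```

In the wiretap setting, such an estimator would be run on (message, eavesdropper-observation) pairs to quantify leakage; a strongly correlated pair should yield a clearly positive estimate, an independent pair an estimate near zero.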
We propose a novel deep learning method for local self-supervised representation learning that requires neither labels nor end-to-end backpropagation but instead exploits the natural order in data. Inspired by the observation that biological neural networks appear to learn without backpropagating a global error signal, we split a deep neural network into a stack of gradient-isolated modules. Each module is trained to maximally preserve the information of its inputs using the InfoNCE bound from Oord et al. [2018]. Despite this greedy training, we demonstrate that each module improves upon the output of its predecessor, and that the representations created by the top module yield highly competitive results on downstream classification tasks in the audio and visual domains. The proposal enables optimizing modules asynchronously, allowing large-scale distributed training of very deep neural networks on unlabelled datasets.
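The InfoNCE bound each module optimizes can be written as a cross-entropy that identifies the matching pair among in-batch negatives. A minimal sketch with a bilinear critic (the critic form and batch construction here are illustrative assumptions, not the modules' exact setup):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z, c, W):
    """InfoNCE with a bilinear critic: scores[i, j] = z_i^T W c_j.
    For row i, the positive is c_i; the other rows of the batch
    serve as negatives, so the positives lie on the diagonal."""
    scores = z @ W @ c.t()                # (batch, batch) similarity matrix
    labels = torch.arange(z.shape[0])     # diagonal entries are the targets
    return F.cross_entropy(scores, labels)
```

Minimizing this loss maximizes a lower bound on the mutual information between `z` and `c`, which is what lets each gradient-isolated module preserve the information in its inputs without a global error signal.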
We propose a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in our security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. We not only introduce the notion of secrecy-good lattices, but also propose the flatness factor as a design criterion of such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, we propose a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in our proposed schemes.
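For orientation, the flatness factor of an $n$-dimensional lattice $\Lambda$ is commonly defined in the lattice-coding literature as the maximum deviation of the lattice Gaussian from the uniform density over a fundamental region; the notation below is standard but stated here as background, not taken from the abstract itself:

```latex
\epsilon_\Lambda(\sigma) \triangleq
  \max_{\mathbf{x} \in \mathcal{R}(\Lambda)}
  \left| V(\Lambda)\, f_{\sigma,\Lambda}(\mathbf{x}) - 1 \right|,
\qquad
f_{\sigma,\Lambda}(\mathbf{x}) =
  \sum_{\boldsymbol{\lambda} \in \Lambda}
  \frac{1}{(\sqrt{2\pi}\,\sigma)^n}
  \, e^{-\|\mathbf{x}-\boldsymbol{\lambda}\|^2 / 2\sigma^2},
```

where $V(\Lambda)$ is the volume of a fundamental region $\mathcal{R}(\Lambda)$. A small $\epsilon_\Lambda(\sigma)$ means the Gaussian folded modulo $\Lambda$ is nearly uniform, which is what drives the information-leakage bound mentioned above.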
End-to-end learning of communication systems with neural networks, and particularly autoencoders, is an emerging research direction that has gained popularity in the last year. In this approach, neural networks learn to simultaneously optimize encoding and decoding functions to establish reliable message transmission. In this paper, this line of thinking is extended to communication scenarios in which an eavesdropper must further be kept ignorant of the communication. The secrecy of the transmission is achieved by utilizing a modified secure loss function based on cross-entropy, which can be implemented with state-of-the-art machine-learning libraries. This secure loss function approach is applied in a Gaussian wiretap channel setup, for which it is shown that the neural network learns a trade-off between reliable communication and information secrecy by clustering learned constellations. As a result, an eavesdropper with higher noise can no longer distinguish between the symbols.
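A modified secure loss of the kind described can plausibly be sketched as a weighted sum of the legitimate receiver's cross-entropy and a term pushing the eavesdropper's posterior toward the uniform distribution; the weighting scheme and secrecy term below are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def secure_loss(bob_logits, eve_logits, msgs, alpha=0.5):
    """Hypothetical secure loss: reliability at Bob vs. confusion at Eve."""
    # reliability: standard cross-entropy at the legitimate receiver
    ce_bob = F.cross_entropy(bob_logits, msgs)
    # secrecy: cross-entropy between the uniform distribution and Eve's
    # softmax output; minimized (at log M) when Eve's posterior is uniform
    ce_eve_uniform = -F.log_softmax(eve_logits, dim=1).mean()
    return (1 - alpha) * ce_bob + alpha * ce_eve_uniform
```

With such a loss, gradient descent trades off decoding accuracy at the intended receiver against how uninformative the constellation looks through the eavesdropper's noisier channel, which is consistent with the clustering behaviour the abstract reports.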
Due to the broadcast nature of the wireless medium, wireless communication is susceptible to adversarial eavesdropping. This paper describes how eavesdropping can potentially be defeated by exploiting the superposition nature of the wireless medium. A Gaussian wire-tap channel with a helping interferer (WTC-HI) is considered, in which a transmitter sends confidential messages to its intended receiver in the presence of a passive eavesdropper and with the help of an interferer. The interferer, which does not know the confidential message, assists the confidential message transmission by sending a signal that is independent of the transmitted message. An achievable secrecy rate and a Sato-type upper bound on the secrecy capacity are given for the Gaussian WTC-HI. Through numerical analysis, it is found that the upper bound is close to the achievable secrecy rate when the interference is weak for symmetric interference channels, and under more general conditions for asymmetric Gaussian interference channels.
We propose a novel end-to-end neural network architecture that, once trained, directly outputs a probabilistic clustering of a batch of input examples in one pass. It estimates a distribution over the number of clusters $k$, and for each $1 \leq k \leq k_{\mathrm{max}}$, a distribution over the individual cluster assignment for each data point. The network is trained in advance in a supervised fashion on separate data to learn grouping by any perceptual similarity criterion based on pairwise labels (same/different group). It can then be applied to different data containing different groups. We demonstrate promising performance on high-dimensional data like images (COIL-100) and speech (TIMIT). We call this ``learning to cluster'' and show its conceptual difference to deep metric learning, semi-supervised clustering, and other related approaches, while having the advantage of performing learnable clustering fully end-to-end.
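The output interface described above (a distribution over the number of clusters plus, for each $k$, per-point assignment distributions) can be sketched as a small output head; the layer choices and names below are hypothetical, meant only to make the interface concrete:

```python
import torch
import torch.nn as nn

class ClusterHead(nn.Module):
    """Hypothetical output head: from a batch of embeddings, emit
    (i) a distribution over the number of clusters k = 1..k_max and
    (ii) for each k, per-point soft assignments over k clusters."""
    def __init__(self, dim, k_max):
        super().__init__()
        self.k_max = k_max
        self.k_head = nn.Linear(dim, k_max)        # batch-level scores for k
        self.assign_head = nn.Linear(dim, k_max)   # per-point cluster scores

    def forward(self, h):                          # h: (batch, dim)
        # pool over the batch to score how many clusters it contains
        p_k = torch.softmax(self.k_head(h).mean(dim=0), dim=0)
        # for each candidate k, normalize the first k cluster scores per point
        assignments = [torch.softmax(self.assign_head(h)[:, :k], dim=1)
                       for k in range(1, self.k_max + 1)]
        return p_k, assignments
```

The key design point the sketch illustrates is that both outputs are proper probability distributions produced in a single forward pass, so the whole clustering remains differentiable and trainable end-to-end from pairwise same/different labels.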