We present a novel end-to-end autoencoder-based learning approach for coherent optical communications using a parallelizable perturbative channel model. We jointly optimize constellation shaping and nonlinear pre-emphasis, achieving a mutual information gain of 0.18 bit/symbol/polarization in simulations of 64 GBd dual-polarization single-channel transmission over a 30×80 km G.652 SMF link with EDFAs.
We investigate methods for experimental performance enhancement of autoencoders based on a recurrent neural network (RNN) for communication over dispersive nonlinear channels. In particular, our focus is on the recently proposed sliding window bidirectional RNN (SBRNN) optical fiber autoencoder. We show that adjusting the processing window in the sequence estimation algorithm at the receiver improves the reach of simple systems trained on a channel model and applied as is to the transmission link. Moreover, the collected experimental data was used to optimize the receiver neural network parameters, allowing transmission of 42 Gb/s with a bit-error rate (BER) below the 6.7% hard-decision forward error correction threshold at distances up to 70 km, as well as 84 Gb/s at 20 km. The investigation of digital signal processing (DSP) optimized on experimental data is extended to pulse amplitude modulation with receivers performing sliding window sequence estimation using a feed-forward or a recurrent neural network, as well as classical nonlinear Volterra equalization. Our results show that, for a fixed algorithm memory, the DSP based on deep learning achieves improved BER performance, allowing the reach of the system to be increased.
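The sliding-window combining step described above can be sketched in a few lines: overlapping windows of the received sequence are each passed through the network, and the per-symbol probability estimates from every window covering a position are averaged. This is an illustrative numpy sketch of the SBRNN-style combining rule, not the authors' implementation; `probs_fn` and `NUM_CLASSES` are hypothetical stand-ins for the trained network's softmax output and the symbol alphabet size.

```python
import numpy as np

NUM_CLASSES = 4  # e.g. a PAM-4 symbol alphabet (assumed for illustration)

def sliding_window_estimate(probs_fn, rx_seq, win):
    """Average per-symbol probability estimates over all windows that cover
    each position.  probs_fn maps a length-`win` slice of the received
    sequence to a (win, NUM_CLASSES) array of softmax outputs."""
    n = len(rx_seq)
    acc = np.zeros((n, NUM_CLASSES))
    cnt = np.zeros(n)
    for s in range(n - win + 1):
        p = probs_fn(rx_seq[s:s + win])   # network output for this window
        acc[s:s + win] += p
        cnt[s:s + win] += 1
    return acc / cnt[:, None]             # averaged class probabilities

# toy check with an oracle "network" that returns one-hot probabilities
seq = np.array([0, 2, 1, 3, 0, 1])
oracle = lambda w: np.eye(NUM_CLASSES)[w]
est = sliding_window_estimate(oracle, seq, win=3)
```

Enlarging `win` changes how many overlapping estimates are averaged per symbol, which is the receiver-side knob whose adjustment is reported above to improve reach.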
Optimizing modulation and detection strategies for a given channel is critical to maximizing the throughput of a communication system. Such an optimization can be carried out analytically for channels that admit closed-form analytical models. However, this task becomes extremely challenging for nonlinear dispersive channels such as the optical fiber. End-to-end optimization through autoencoders (AEs) can be applied to define symbol-to-waveform (modulation) and waveform-to-symbol (detection) mappings, but so far it has mainly been shown for systems relying on approximate channel models. Here, for the first time, we propose an AE scheme applied to the full optical channel described by the nonlinear Schrödinger equation (NLSE). Transmitter and receiver are jointly optimized through the split-step Fourier method (SSFM), which accurately models propagation in an optical fiber. In this first numerical analysis, the detection is performed by a neural network (NN), whereas the symbol-to-waveform mapping is aided by nonlinear Fourier transform (NFT) theory in order to simplify and guide the optimization on the modulation side. This proof-of-concept AE scheme is thus benchmarked against a standard NFT-based system, and a threefold increase in achievable distance (from 2000 to 6640 km) is demonstrated.
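The SSFM used as the channel model alternates linear dispersion steps in the frequency domain with nonlinear phase rotation in the time domain. A minimal single-polarization, lossless sketch (not the paper's implementation; step-size control, fiber loss, and amplifier noise are omitted, and parameter values are made up) might look like:

```python
import numpy as np

def ssfm(a0, dt, dz, nz, beta2, gamma):
    """Symmetric split-step Fourier solver for the lossless scalar NLSE
    dA/dz = -i (beta2/2) d^2A/dt^2 + i gamma |A|^2 A  (illustrative)."""
    w = 2 * np.pi * np.fft.fftfreq(a0.size, d=dt)   # angular frequency grid
    half_disp = np.exp(0.25j * beta2 * w**2 * dz)   # half-step dispersion
    a = a0.astype(complex)
    for _ in range(nz):
        a = np.fft.ifft(half_disp * np.fft.fft(a))      # D/2 (linear)
        a = a * np.exp(1j * gamma * np.abs(a)**2 * dz)  # N (nonlinear phase)
        a = np.fft.ifft(half_disp * np.fft.fft(a))      # D/2 (linear)
    return a

# propagate a sech pulse over 100 steps with toy parameters
t = np.linspace(-20, 20, 512, endpoint=False)
a0 = 1 / np.cosh(t)
a_out = ssfm(a0, dt=t[1] - t[0], dz=0.05, nz=100, beta2=-1.0, gamma=1.0)
```

Since both split operators are unitary, the pulse energy is conserved in the lossless case, which is a convenient sanity check for this kind of solver.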
An accurate seizure prediction system enables early warnings before seizure onset in epileptic patients, which is extremely important for drug-refractory patients. Conventional seizure prediction methods usually rely on features extracted from electroencephalography (EEG) recordings together with classification algorithms such as regression or support vector machines (SVMs) to locate the short period before seizure onset. However, such methods cannot achieve high-accuracy prediction due to the information loss of the hand-crafted features and the limited classification ability of regression and SVM algorithms. In this paper, we propose an end-to-end deep learning solution using a convolutional neural network (CNN). One- and two-dimensional kernels are adopted in the early- and late-stage convolution and max-pooling layers, respectively. The proposed CNN model is evaluated on the Kaggle intracranial and CHB-MIT scalp EEG datasets. Overall sensitivity, false prediction rate, and area under the receiver operating characteristic curve reach 93.5%, 0.063/h, 0.981 and 98.8%, 0.074/h, 0.988 on the two datasets, respectively. Comparison with state-of-the-art works indicates that the proposed model achieves superior prediction performance.
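The staged kernel idea (one-dimensional kernels along time within each EEG channel first, two-dimensional kernels mixing channels later) can be illustrated with a naive numpy convolution. This is a shape-level sketch only, with made-up layer sizes, not the evaluated CNN:

```python
import numpy as np

def conv_valid(x, k):
    """Naive valid-mode correlation for 1-D or 2-D kernels (no padding)."""
    out_shape = tuple(np.array(x.shape) - np.array(k.shape) + 1)
    out = np.zeros(out_shape)
    for idx in np.ndindex(*out_shape):
        region = tuple(slice(i, i + s) for i, s in zip(idx, k.shape))
        out[idx] = np.sum(x[region] * k)
    return out

x = np.random.randn(16, 128)                 # 16 EEG channels x 128 samples
h1 = conv_valid(x, np.ones((1, 5)) / 5)      # early stage: 1-D kernel in time
h1 = h1.reshape(16, 31, 4).max(axis=2)       # 1x4 max-pooling along time
h2 = conv_valid(h1, np.random.randn(3, 3))   # late stage: 2-D kernel across channels
```

The early 1-D stage filters each channel's time series independently; only the late 2-D stage combines information across channels, which keeps the parameter count low while still capturing cross-channel structure.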
Recently, deep learning has been considered as a way to optimize the end-to-end performance of digital communication systems. The promise of learning a digital communication scheme from data is attractive, since this makes the scheme adaptable and precisely tunable to many scenarios and channel models. In this paper, we analyse a widely used neural network architecture and show that the training of the end-to-end architecture suffers from normalization errors introduced by an average power constraint. To solve this issue, we propose a modified architecture: shifting the batch slicing to after the normalization layer. This approach meets the normalization constraints better, especially in the case of small batch sizes. Finally, we experimentally demonstrate that our modified architecture leads to significantly improved performance of trained models, even for large batch sizes, where the normalization constraints are more easily met.
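The normalization issue can be reproduced in a few lines: when each mini-batch is normalized to unit average power separately, the scale factor fluctuates from batch to batch, and the smaller the batch the larger the fluctuation. A numpy illustration of the effect under assumed Gaussian symbols (not the paper's architecture or training setup):

```python
import numpy as np

def normalize_avg_power(x):
    """Scale so that the empirical average power over the samples is 1."""
    return x / np.sqrt(np.mean(np.abs(x) ** 2))

rng = np.random.default_rng(0)
pool = rng.standard_normal((1024, 2))        # 1024 two-dimensional symbols

# (a) slice first, normalize each small batch: every slice gets its own,
#     noisily estimated scale factor
batches = pool.reshape(128, 8, 2)            # batch size 8
scales = 1.0 / np.sqrt((batches ** 2).mean(axis=(1, 2)))

# (b) normalize first, slice afterwards: one scale factor estimated from
#     the whole pool, so every batch meets the power constraint tightly
pool_norm = normalize_avg_power(pool)
```

With ordering (b), the scale is estimated from many samples, so the average power constraint holds regardless of how the data is later sliced into batches; with ordering (a) and batch size 8, the per-batch scales scatter visibly around 1.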
End-to-end mission performance simulators (E2ES) are suitable tools to accelerate satellite mission development from concept to deployment. One core element of these E2ES is the generation of the synthetic scenes that are observed by the various instruments of an Earth Observation mission. The generation of these scenes relies on radiative transfer models (RTMs) for the simulation of light interaction with the Earth's surface and atmosphere. However, the execution of advanced RTMs is impractical due to their large computational burden. Classical interpolation and statistical emulation of pre-computed look-up tables (LUTs) are therefore common practice to generate synthetic scenes in a reasonable time. This work evaluates the accuracy and computation cost of interpolation and emulation methods for sampling the input LUT variable space. The results on MODTRAN-based top-of-atmosphere radiance data show that Gaussian process emulators produce more accurate output spectra than linear interpolation at a fraction of its computation time. It is concluded that emulation can serve as a fast and more accurate alternative to interpolation for LUT parameter space sampling.
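The emulation-versus-interpolation comparison can be sketched on a toy LUT: fit a Gaussian process to a handful of pre-computed samples of a smooth spectrum-like response and compare its predictions against linear interpolation from the same samples. This is a pure-numpy illustration with a made-up test function and kernel settings, not the study's emulator:

```python
import numpy as np

def rbf(a, b, length_scale=0.3):
    """Squared-exponential kernel between two 1-D sample sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_emulate(x_train, y_train, x_test, jitter=1e-8):
    """GP posterior mean: k(x*, X) (K + jitter*I)^-1 y."""
    K = rbf(x_train, x_train) + jitter * np.eye(x_train.size)
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# toy "LUT": 9 pre-computed samples of a smooth response
x_lut = np.linspace(0.0, 1.0, 9)
y_lut = np.sin(2 * np.pi * x_lut)
x_query = np.linspace(0.0, 1.0, 101)

y_gp = gp_emulate(x_lut, y_lut, x_query)       # emulation
y_lin = np.interp(x_query, x_lut, y_lut)       # linear interpolation
y_true = np.sin(2 * np.pi * x_query)
```

On a smooth response, the GP mean typically tracks the truth much more closely than piecewise-linear interpolation from the same coarse LUT, mirroring the accuracy advantage reported above.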