Modeling large-scale research facilities is extremely challenging due to complex physical processes and engineering problems. Here, we adopt a data-driven approach to model the longitudinal phase-space diagnostic beamline at the photoinjector of the European XFEL with an encoder-decoder neural network model. A deep convolutional neural network (decoder) is used to reconstruct the images measured on the screen from a small feature map generated by another neural network (encoder). We demonstrate that the model, trained solely on experimental data, can make high-fidelity predictions of megapixel images for the longitudinal phase-space measurement without any prior knowledge of photoinjectors or electron beams, and that its predictions significantly outperform those of existing methods. We also show the scalability and interpretability of the model by sharing the same decoder among multiple encoders, each used for a different setup of the photoinjector, and propose a pragmatic way to model a facility with various diagnostics and working points. This opens the door to a new way of accurately modeling a photoinjector using neural networks and experimental data. The approach could potentially be extended to the whole accelerator and even other types of scientific facilities.
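To make the encoder-decoder idea concrete, the sketch below shows a minimal version of such a model in Python/PyTorch. It is illustrative only: the layer counts, feature-map dimensions, output image size, and the use of scalar machine settings as encoder input are assumptions for the example, not the architecture reported in the paper. The decoder upsamples a small feature map into a single-channel screen image, and several encoders (one per diagnostic setup or working point) can share the same decoder, mirroring the scalability argument above.

```python
# Minimal encoder-decoder sketch (illustrative assumptions; not the paper's exact model).
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Maps a few scalar photoinjector settings to a small feature map."""
    def __init__(self, n_settings: int = 5, feat_channels: int = 32, feat_size: int = 8):
        super().__init__()
        self.feat_channels = feat_channels
        self.feat_size = feat_size
        self.net = nn.Sequential(
            nn.Linear(n_settings, 256),
            nn.LeakyReLU(),
            nn.Linear(256, feat_channels * feat_size * feat_size),
        )

    def forward(self, x):
        z = self.net(x)
        return z.view(-1, self.feat_channels, self.feat_size, self.feat_size)


class Decoder(nn.Module):
    """Deep convolutional decoder: upsamples the feature map to a screen image."""
    def __init__(self, feat_channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat_channels, 64, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.LeakyReLU(),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),             # 16 -> 32
            nn.LeakyReLU(),
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1),             # 32 -> 64
            nn.LeakyReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=4, stride=2, padding=1),              # 64 -> 128
            nn.Sigmoid(),  # pixel intensities normalized to [0, 1]
        )

    def forward(self, z):
        return self.net(z)


# Several encoders (e.g. for different photoinjector setups) sharing one decoder.
encoder_a, encoder_b = Encoder(), Encoder()
decoder = Decoder()
settings = torch.rand(4, 5)            # batch of 4 hypothetical machine working points
image = decoder(encoder_a(settings))   # predicted screen images, shape (4, 1, 128, 128)
```

In such a setup, only the decoder is shared across setups, so each additional diagnostic or working point only requires training a comparatively small encoder; a full-scale model would use a much larger output resolution than the 128x128 shown here.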