
High-fidelity Prediction of Megapixel Longitudinal Phase-space Images of Electron Beams using Encoder-Decoder Neural Networks

Published by Jun Zhu
Publication date: 2021
Paper language: English





Modeling of large-scale research facilities is extremely challenging due to complex physical processes and engineering problems. Here, we adopt a data-driven approach to model the longitudinal phase-space diagnostic beamline at the photoinjector of the European XFEL with an encoder-decoder neural network model. A deep convolutional neural network (the decoder) reconstructs images measured on the screen from a small feature map generated by another neural network (the encoder). We demonstrate that the model, trained only on experimental data, can make high-fidelity predictions of megapixel images for the longitudinal phase-space measurement without any prior knowledge of photoinjectors and electron beams. The predictions significantly outperform those of existing methods. We also show the scalability and interpretability of the model by sharing the same decoder among more than one encoder, each used for a different setup of the photoinjector, and we propose a pragmatic way to model a facility with various diagnostics and working points. This opens the door to a new way of accurately modeling a photoinjector using neural networks and experimental data. The approach can potentially be extended to the whole accelerator and even to other types of scientific facilities.
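Below is a minimal PyTorch sketch of the scheme described in the abstract: an encoder maps a handful of machine settings to a small feature map, a convolutional decoder upsamples that map to a megapixel screen image, and one decoder can be shared among several setup-specific encoders. All layer sizes, parameter counts, and setup names here are illustrative assumptions, not the authors' published architecture.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps a vector of machine settings to a small feature map."""
    def __init__(self, n_params, feat_ch=64, feat_hw=8):
        super().__init__()
        self.feat_ch, self.feat_hw = feat_ch, feat_hw
        self.mlp = nn.Sequential(
            nn.Linear(n_params, 256), nn.ReLU(),
            nn.Linear(256, feat_ch * feat_hw * feat_hw),
        )

    def forward(self, x):  # x: (batch, n_params)
        return self.mlp(x).view(-1, self.feat_ch, self.feat_hw, self.feat_hw)

class Decoder(nn.Module):
    """Upsamples an 8x8 feature map to a 1024x1024 (megapixel) image."""
    def __init__(self, feat_ch=64):
        super().__init__()
        layers, ch = [], feat_ch
        for _ in range(7):  # 8 -> 1024 via seven 2x upsampling stages
            out = max(ch // 2, 8)
            layers += [nn.ConvTranspose2d(ch, out, 4, stride=2, padding=1), nn.ReLU()]
            ch = out
        layers.append(nn.Conv2d(ch, 1, 3, padding=1))  # single-channel screen image
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

# One decoder shared by several encoders, one per (hypothetical) machine setup:
decoder = Decoder()
encoders = {"setup_a": Encoder(n_params=5), "setup_b": Encoder(n_params=7)}
image = decoder(encoders["setup_a"](torch.randn(4, 5)))  # (4, 1, 1024, 1024)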



Read also

With the rising number of interconnected devices and sensors, modeling distributed sensor networks is of increasing interest. Recurrent neural networks (RNNs) are considered particularly well suited for modeling sensory and streaming data. When predicting future behavior, incorporating information from neighboring sensor stations is often beneficial. We propose a new RNN-based architecture for context-specific information fusion across multiple spatially distributed sensor stations. Latent representations of multiple local models, each modeling one sensor station, are joined and weighted according to their importance for the prediction. The importance is assessed depending on the current context using a separate attention function. We demonstrate the effectiveness of our model on three different real-world sensor network datasets.
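A hedged sketch of the fusion mechanism described above: one local RNN per sensor station produces a latent state, and a separate attention function weights those states before the prediction. The class and parameter names are illustrative; the paper's exact attention function and fusion details may differ.

import torch
import torch.nn as nn

class StationFusion(nn.Module):
    def __init__(self, n_stations, in_dim, hid_dim):
        super().__init__()
        # one local GRU per sensor station
        self.local_rnns = nn.ModuleList(
            [nn.GRU(in_dim, hid_dim, batch_first=True) for _ in range(n_stations)]
        )
        self.attn = nn.Linear(hid_dim, 1)  # scores each station's latent state
        self.head = nn.Linear(hid_dim, 1)  # final prediction from the fused state

    def forward(self, xs):  # xs: (batch, n_stations, time, in_dim)
        latents = [rnn(xs[:, i])[1].squeeze(0)  # final hidden state per station
                   for i, rnn in enumerate(self.local_rnns)]
        H = torch.stack(latents, dim=1)          # (batch, n_stations, hid_dim)
        w = torch.softmax(self.attn(H), dim=1)   # context-dependent station weights
        return self.head((w * H).sum(dim=1))     # weighted fusion + prediction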
To train deep convolutional neural networks, the input data and the intermediate activations need to be kept in memory to calculate the gradient descent step. Given the limited memory available in the current generation of accelerator cards, this limits the maximum dimensions of the input data. We demonstrate a method to train convolutional neural networks while holding only parts of the image in memory, giving equivalent results. We quantitatively compare this new way of training convolutional neural networks with conventional training. In addition, as a proof of concept, we train a convolutional neural network on 64-megapixel images, which requires 97% less memory than the conventional approach.
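The abstract above does not spell out its exact partial-image training scheme, so the sketch below illustrates a related, standard memory-for-compute trade-off in PyTorch: gradient checkpointing, where a block's activations are discarded in the forward pass and recomputed during backpropagation. It conveys the general idea of trading recomputation for activation memory, not the paper's specific method.

import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

block = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
head = nn.Conv2d(16, 1, 1)

x = torch.randn(1, 3, 2048, 2048, requires_grad=True)  # large input image
y = checkpoint(block, x, use_reentrant=False)  # block activations are not stored
loss = head(y).mean()
loss.backward()  # the block is re-run here to rebuild the needed activations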
We study the calibration of several state-of-the-art neural machine translation (NMT) systems built on attention-based encoder-decoder models. For structured outputs like those in NMT, calibration is important not just for reliable confidence in predictions, but also for the proper functioning of beam-search inference. We show that most modern NMT models are surprisingly miscalibrated even when conditioned on the true previous tokens. Our investigation points to two main causes: severe miscalibration of EOS (the end-of-sequence marker) and suppression of attention uncertainty. We design recalibration methods based on these signals and demonstrate improved accuracy, better sequence-level calibration, and more intuitive results from beam search.
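As a concrete reference point, the snippet below implements temperature scaling, a standard recalibration baseline: a single scalar T is fitted on held-out data and token logits are divided by it before the softmax. This is not the paper's recalibration method (which targets signals such as EOS miscalibration and attention uncertainty); it only illustrates what recalibrating a trained model looks like.

import torch
import torch.nn.functional as F

def fit_temperature(logits, targets, steps=200, lr=0.01):
    """Fit a scalar temperature T minimizing NLL on held-out (logits, targets)."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T to keep T positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(logits / log_t.exp(), targets).backward()
        opt.step()
    return log_t.exp().item()

# usage: divide the model's logits by the fitted T before softmax / beam search
# T = fit_temperature(dev_logits, dev_targets)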
In this paper, we consider high-dimensional stationary processes in which a new observation is generated from a compressed version of past observations. The specific evolution is modeled by an encoder-decoder structure. We estimate the evolution with an encoder-decoder neural network and give upper bounds for the expected forecast error under specific structural and sparsity assumptions. The results are shown separately for conditions on either the absolutely regular mixing coefficients or the functional dependence measure of the observed process. In a quantitative simulation we discuss the behavior of the network estimator under different model assumptions. We corroborate our theory with a real-data example in which we forecast temperature data.
Predicting future video frames is extremely challenging, as many factors of variation make up the dynamics of how frames change through time. Previously proposed solutions require complex inductive biases inside network architectures with highly specialized computation, including segmentation masks, optical flow, and foreground-background separation. In this work, we question whether such handcrafted architectures are necessary and instead propose a different approach: finding the minimal inductive bias for video prediction while maximizing network capacity. We investigate this question by performing the first large-scale empirical study and demonstrate state-of-the-art performance by learning large models on three different datasets: one for modeling object interactions, one for modeling human motion, and one for modeling car driving.
