
Invertible Surrogate Models: Joint surrogate modelling and reconstruction of Laser-Wakefield Acceleration by invertible neural networks

Posted by: Friedrich Bethke
Publication date: 2021
Paper language: English





Invertible neural networks are a recent machine-learning technique: neural network architectures that can be run in both forward and reverse mode. In this paper, we introduce invertible surrogate models that approximate the complex forward simulation of the physics involved in laser plasma accelerators: iLWFA. The bijective design of the surrogate model also provides all the means for reconstruction from experimentally acquired diagnostics. The quality of our invertible laser wakefield acceleration network is verified on a large set of numerical LWFA simulations.
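The forward/reverse mechanism can be illustrated with a small sketch. The following is a minimal coupling-block construction of the kind commonly used in invertible networks (RealNVP-style affine coupling), not the authors' iLWFA code; the class names, dimensions, and block count are illustrative assumptions. The same weights realise both the forward surrogate (parameters to diagnostics) and its exact inverse.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One invertible block: the first half of the variables parameterises
    an affine transform of the second half (RealNVP-style coupling)."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * torch.exp(torch.tanh(s)) + t], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-torch.tanh(s))], dim=1)

class INNSurrogate(nn.Module):
    """Stack of coupling blocks with fixed random permutations in between;
    the same weights give the forward surrogate and its exact inverse."""
    def __init__(self, dim, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList([AffineCoupling(dim) for _ in range(n_blocks)])
        self.perms = [torch.randperm(dim) for _ in range(n_blocks)]

    def forward(self, x):              # simulation parameters -> diagnostics
        for block, p in zip(self.blocks, self.perms):
            x = block(x)[:, p]
        return x

    def inverse(self, y):              # diagnostics -> parameters
        for block, p in zip(reversed(self.blocks), reversed(self.perms)):
            y = block.inverse(y[:, torch.argsort(p)])
        return y

# Round trip: inverse(forward(x)) recovers x up to floating-point error,
# which is the property that lets a trained surrogate double as a
# reconstruction tool.
model = INNSurrogate(dim=6)
x = torch.randn(8, 6)
assert torch.allclose(model.inverse(model(x)), x, atol=1e-4)
```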




Read also

Simulations of high energy density physics are expensive in terms of computational resources. In particular, the computation of opacities of plasmas, which is needed to accurately compute radiation transport in the non-local thermal equilibrium (NLTE) regime, is expensive to the point of easily requiring multiple times the sum-total compute time of all other components of the simulation. As such, there is great interest in finding ways to accelerate NLTE computations. Previous work has demonstrated that a combination of fully-connected autoencoders and a deep jointly-informed neural network (DJINN) can successfully replace the standard NLTE calculations for the opacity of krypton. This work expands this idea to multiple elements, demonstrating that individual surrogate models can also be generated for other elements, with the focus being on creating autoencoders that can accurately encode and decode the absorptivity and emissivity spectra. Furthermore, this work shows that multiple elements across a large range of atomic numbers can be combined into a single convolutional autoencoder while maintaining accuracy comparable to individual fully-connected autoencoders. Lastly, it is demonstrated that DJINN can effectively learn the latent space of a convolutional autoencoder that encodes multiple elements, allowing the combination to function as a surrogate model.
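As a concrete illustration of the architecture described above, here is a minimal 1-D convolutional autoencoder for spectra; the bin count, channel widths, and latent size are illustrative assumptions, not the paper's configuration. A regressor such as DJINN would then be trained to predict the latent code from plasma conditions.

```python
import torch.nn as nn

class SpectraAutoencoder(nn.Module):
    """1-D convolutional autoencoder: compresses an absorptivity or
    emissivity spectrum into a small latent vector and reconstructs it.
    A single such model can be trained on spectra from several elements.
    All sizes are illustrative."""
    def __init__(self, n_bins=1024, latent=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, 5, stride=2, padding=2), nn.ReLU(),   # 1024 -> 512
            nn.Conv1d(16, 32, 5, stride=2, padding=2), nn.ReLU(),  # 512 -> 256
            nn.Flatten(),
            nn.Linear(32 * (n_bins // 4), latent),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent, 32 * (n_bins // 4)),
            nn.Unflatten(1, (32, n_bins // 4)),
            nn.ConvTranspose1d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(16, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):              # x: (batch, 1, n_bins)
        z = self.encoder(x)            # latent code a regressor (e.g. DJINN)
        return self.decoder(z)         # would be trained to predict
```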
The ability to readily design novel materials with chosen functional properties on-demand represents a next frontier in materials discovery. However, thoroughly and efficiently sampling the entire design space in a computationally tractable manner remains a highly challenging task. To tackle this problem, we propose an inverse design framework (MatDesINNe) utilizing invertible neural networks which can map both forward and reverse processes between the design space and target property. This approach can be used to generate materials candidates for a designated property, thereby satisfying the highly sought-after goal of inverse design. We then apply this framework to the task of band gap engineering in two-dimensional materials, starting with MoS2. Within the design space encompassing six degrees of freedom in applied tensile, compressive and shear strain plus an external electric field, we show the framework can generate novel, high fidelity, and diverse candidates with near-chemical accuracy. We extend this generative capability further to provide insights regarding metal-insulator transition, important for memristive neuromorphic applications among others, in MoS2 which is not otherwise possible with brute force screening. This approach is general and can be directly extended to other materials and their corresponding design spaces and target properties.
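The inverse-design query itself is simple once such a bijective map is trained. The sketch below reuses the `INNSurrogate` interface from the coupling-block example above and assumes a map from a 6-D strain/field design to a band gap plus latent padding; the function name, dimensions, and target value are hypothetical.

```python
import torch

def propose_candidates(inn, target_gap, n=256, design_dim=6):
    """Sample diverse candidate designs for one target property: pad the
    desired band gap with latent draws and run the bijection backwards."""
    prop = torch.full((n, 1), float(target_gap))
    z = torch.randn(n, design_dim - 1)  # latent absorbs the one-to-many ambiguity
    return inn.inverse(torch.cat([prop, z], dim=1))

# e.g. candidates = propose_candidates(INNSurrogate(dim=6), target_gap=1.2)
```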
Yukun Yang, 2020
Spiking neural networks (SNN) are usually more energy-efficient than artificial neural networks (ANN), and the way they work bears a great similarity to our brain. Back-propagation (BP) has shown its strong power in training ANN in recent years. However, since spike behavior is non-differentiable, BP cannot be applied to SNN directly. Although prior works demonstrated several ways to approximate the BP gradient in both spatial and temporal directions, either through surrogate gradients or randomness, they omitted the temporal dependency introduced by the reset mechanism between steps. In this article, we target theoretical completeness and investigate the effect of the missing term thoroughly. By adding the temporal dependency of the reset mechanism, the new algorithm is more robust to learning-rate adjustments on a toy dataset but does not show much improvement on larger learning tasks like CIFAR-10. Empirically speaking, the benefits of the missing term are not worth the additional computational overhead. In many cases, the missing term can be ignored.
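The reset-induced temporal dependency can be made concrete in a few lines. Below is a hedged sketch of one leaky integrate-and-fire step with a rectangular surrogate gradient; the flag toggles between keeping the reset term in the autograd graph (the "missing term" the article analyses) and detaching it, as in the common approximation. Names and constants are illustrative.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()  # surrogate dS/dV

def lif_step(v, x, keep_reset_grad=True, decay=0.9, v_th=1.0):
    """One leaky integrate-and-fire step. keep_reset_grad=True leaves the
    reset factor (1 - s) in the graph, retaining the temporal dependency;
    detaching s drops the 'missing term' from the backward pass."""
    s = SpikeFn.apply(v - v_th)
    reset = s if keep_reset_grad else s.detach()
    v = decay * v * (1.0 - reset) + x
    return v, s
```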
We show that standard ResNet architectures can be made invertible, allowing the same model to be used for classification, density estimation, and generation. Typically, enforcing invertibility requires partitioning dimensions or restricting network architectures. In contrast, our approach only requires adding a simple normalization step during training, already available in standard frameworks. Invertible ResNets define a generative model which can be trained by maximum likelihood on unlabeled data. To compute likelihoods, we introduce a tractable approximation to the Jacobian log-determinant of a residual block. Our empirical evaluation shows that invertible ResNets perform competitively with both state-of-the-art image classifiers and flow-based generative models, something that has not been previously achieved with a single architecture.
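A hedged sketch of the two ingredients, under my reading of the i-ResNet construction: spectral normalisation keeps the residual branch contractive so each block can be inverted by fixed-point iteration, and a Hutchinson-style truncated power series approximates the Jacobian log-determinant. Layer sizes, the contraction constant, and the series length are illustrative.

```python
import torch
import torch.nn as nn

class InvResBlock(nn.Module):
    """x -> x + c*g(x) with Lip(g) <= 1 via spectral norm and c < 1,
    so the residual branch is contractive and the block is invertible."""
    def __init__(self, dim, hidden=64, c=0.9):
        super().__init__()
        sn = nn.utils.spectral_norm
        self.g = nn.Sequential(sn(nn.Linear(dim, hidden)), nn.ELU(),
                               sn(nn.Linear(hidden, dim)))
        self.c = c

    def forward(self, x):
        return x + self.c * self.g(x)

    def inverse(self, y, n_iter=25):
        x = y
        for _ in range(n_iter):        # Banach fixed-point iteration
            x = y - self.c * self.g(x)
        return x

def logdet_estimate(residual, x, n_terms=5):
    """Hutchinson estimator of the truncated series
    log det(I + J) = sum_{k>=1} (-1)^{k+1} tr(J^k) / k,
    built from repeated vector-Jacobian products."""
    x = x.requires_grad_(True)
    y = residual(x)
    v = torch.randn_like(x)
    w, ld = v, 0.0
    for k in range(1, n_terms + 1):
        (w,) = torch.autograd.grad(y, x, grad_outputs=w, retain_graph=True)
        ld = ld + (-1) ** (k + 1) * (w * v).sum() / k  # ~ tr(J^k) / k
    return ld

# e.g. blk = InvResBlock(8)
#      ld = logdet_estimate(lambda t: blk.c * blk.g(t), torch.randn(4, 8))
```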
Adversarial examples (AEs) are images that can mislead deep neural network (DNN) classifiers via introducing slight perturbations into original images. This security vulnerability has led to vast research in recent years because it can introduce real-world threats into systems that rely on neural networks. Yet, a deep understanding of the characteristics of adversarial examples has remained elusive. We propose a new way of achieving such understanding through a recent development, namely, invertible neural models with Lipschitz continuous mapping functions from the input to the output. With the ability to invert any latent representation back to its corresponding input image, we can investigate adversarial examples at a deeper level and disentangle the latent representations of adversarial examples. Given this new perspective, we propose a fast latent space adversarial example generation method that could accelerate adversarial training. Moreover, this new perspective could contribute to new ways of adversarial example detection.
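For intuition, here is a sketch of a latent-space attack with a bijective model, under assumed interfaces: an invertible map `inn` with an `inverse` method, as sketched earlier, and a classifier head `clf` acting on the latent code. This is an FGSM-style illustration, not the paper's method.

```python
import torch
import torch.nn.functional as F

def latent_adversarial(inn, clf, x, y, step=0.05):
    """Take one signed-gradient step on the latent code against the
    classifier loss, then invert back to image space; the perturbed
    image corresponds to the perturbed code exactly, because the
    mapping is bijective."""
    z = inn(x).detach().requires_grad_(True)
    loss = F.cross_entropy(clf(z), y)
    loss.backward()
    return inn.inverse(z + step * z.grad.sign())
```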


