
Computationally efficient optimization of radiation drives

Added by Damian Swift
Publication date: 2019
Field: Physics
Language: English





For many applications of pulsed radiation, the time-history of the radiation intensity must be optimized to induce a desired time-history of conditions. This optimization is normally performed using multi-physics simulations of the system. The pulse shape is parametrized, and multiple simulations are performed in which the parameters are adjusted until the desired response is induced. These simulations are often computationally intensive, so optimization by iterating the parameters over repeated forward simulations is expensive and slow. In many cases, the desired response can be expressed such that an instantaneous difference between the actual and desired response can be calculated. In principle, a computer program used to perform the forward simulation could be modified to adjust the instantaneous radiation drive automatically until the desired instantaneous response is achieved. Unfortunately, such modifications may be impracticable in a complicated multi-physics program. However, the computational time increment in such simulations is generally much shorter than the time scale of changes in the desired response. It is much more practicable to adjust the radiation source so that the response tends toward the desired value at later times. This relaxed in-situ optimization method can give an adequate design for a pulse shape in a single forward simulation, giving a typical gain in computational efficiency of a factor of tens to thousands. This approach was demonstrated for the design of laser pulse shapes to induce ramp loading to high pressure in target assemblies incorporating ablators of significantly different mechanical impedance from the sample, requiring complicated pulse shaping.
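The mechanism is easiest to see in a toy model. The sketch below stands in for the multi-physics code with a single relaxation equation and, at each computational time step, nudges the drive so the response tends toward the desired value one relaxation time later. The time constant, gain, and target ramp are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Toy surrogate for the multi-physics simulation: the response p relaxes
# toward the instantaneous drive intensity with time constant TAU.
# TAU, DT, GAIN, and desired_response are illustrative assumptions.
TAU = 0.5e-9      # response time scale of the toy system (s)
DT = 1.0e-11      # computational time increment, much shorter than TAU
GAIN = 0.2        # fraction of the error corrected per time step
T_END = 20.0e-9   # pulse duration (s)

def desired_response(t):
    """Desired time-history of conditions: a ramp (arbitrary units)."""
    return 100.0 * (t / T_END) ** 2

t, p, drive = 0.0, 0.0, 0.0
pulse_shape = []
while t < T_END:
    # One forward time step of the (toy) physics.
    p += DT / TAU * (drive - p)
    # Relaxed in-situ adjustment: steer the response toward the desired
    # value at a later time, rather than forcing an instantaneous match.
    drive += GAIN * (desired_response(t + TAU) - p)
    drive = max(drive, 0.0)   # a radiation drive cannot be negative
    pulse_shape.append((t, drive))
    t += DT
# pulse_shape now holds a drive history designed in one forward simulation.
```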



Related research

The radiation hydrodynamics equations for smoothed particle hydrodynamics are derived by operator splitting the radiation and hydrodynamics terms, including necessary terms for material motion, and discretizing each of the sets of equations separately in time and space. The implicit radiative transfer discussed in the first paper of this series is coupled to explicit smoothed particle hydrodynamics. The result is a multi-material meshless radiation hydrodynamics code with arbitrary opacities and equations of state that performs well for problems with significant material motion. The code converges with second-order accuracy in space and first-order accuracy in time to the semianalytic solution for the Lowrie radiative shock problem and has competitive performance compared to a mesh-based radiation hydrodynamics code for a multi-material problem in two dimensions and an ablation problem inspired by inertial confinement fusion in two and three dimensions.
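As a concrete illustration of the operator splitting, the sketch below advances a zero-dimensional toy system with an explicit "hydrodynamics" update followed by a backward-Euler solve of a stiff linear radiation-material energy exchange. The coupling constant and heating term are invented for illustration and are not the paper's equations.

```python
# Zero-dimensional toy of the split update: explicit "hydro" step, then an
# implicit (backward Euler) solve of stiff radiation-material coupling.
# KAPPA, DT, and the heating term are illustrative assumptions.
KAPPA = 50.0              # radiation-material coupling rate (1/s)
DT = 1.0e-2               # time step; note KAPPA * DT is not small
e_mat, e_rad = 1.0, 0.0   # material and radiation energies (arbitrary units)

for step in range(1000):
    # Explicit hydrodynamics half-step: stand-in for the SPH force and
    # work terms that heat the material.
    e_mat += DT * 0.1

    # Implicit radiation half-step: solve
    #   de_mat/dt = -KAPPA * (e_mat - e_rad)
    #   de_rad/dt = +KAPPA * (e_mat - e_rad)
    # with backward Euler, stable even though the coupling is stiff.
    total = e_mat + e_rad   # the exchange conserves total energy
    diff = (e_mat - e_rad) / (1.0 + 2.0 * KAPPA * DT)
    e_mat, e_rad = 0.5 * (total + diff), 0.5 * (total - diff)
```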
Seth Lloyd (2018)
The quantum approximate optimization algorithm (QAOA) applies two Hamiltonians to a quantum system in alternation. The original goal of the algorithm was to drive the system close to the ground state of one of the Hamiltonians. This paper shows that the same alternating procedure can be used to perform universal quantum computation: the times for which the Hamiltonians are applied can be programmed to give a computationally universal dynamics. The Hamiltonians required can be as simple as homogeneous sums of single-qubit Pauli Xs and two-local ZZ Hamiltonians on a one-dimensional line of qubits.
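A minimal statevector sketch of the alternation, using dense matrix exponentials for a handful of qubits; the schedule of times (beta, gamma) is arbitrary, chosen only to show the programming interface the paper exploits.

```python
import numpy as np
from functools import reduce
from scipy.linalg import expm

n = 4   # qubits on a line
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def embed(op, j, extra=()):
    """Tensor product placing 'op' on qubit j (and on qubits in 'extra')."""
    sites = {j, *extra}
    return reduce(np.kron, [op if k in sites else I2 for k in range(n)])

# The two Hamiltonians from the abstract: a homogeneous sum of
# single-qubit Pauli Xs, and two-local ZZ terms on a line.
HX = sum(embed(X, j) for j in range(n))
HZZ = sum(embed(Z, j, extra=(j + 1,)) for j in range(n - 1))

psi = np.zeros(2 ** n, dtype=complex)
psi[0] = 1.0
# Alternate the two evolutions; the applied times are the "program".
for beta, gamma in [(0.3, 0.7), (0.5, 0.2), (0.1, 0.9)]:
    psi = expm(-1j * beta * HX) @ psi
    psi = expm(-1j * gamma * HZZ) @ psi
```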
Ilya Loshchilov (2014)
We propose a computationally efficient limited-memory Covariance Matrix Adaptation Evolution Strategy for large-scale optimization, which we call the LM-CMA-ES. The LM-CMA-ES is a stochastic, derivative-free algorithm for numerical optimization of non-linear, non-convex problems in continuous domains. Inspired by the limited-memory BFGS method of Liu and Nocedal (1989), the LM-CMA-ES samples candidate solutions according to a covariance matrix reproduced from $m$ direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors makes it possible to reduce the time and memory complexity of the sampling to $O(mn)$, where $n$ is the number of decision variables. When $n$ is large (e.g., $n > 1000$), even relatively small values of $m$ (e.g., $m = 20, 30$) are sufficient to efficiently solve fully non-separable problems and to reduce the overall run-time.
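The complexity argument can be illustrated directly: if the Cholesky-like factor is kept as a product of $m$ rank-one updates rather than a dense $n \times n$ matrix, applying it to a Gaussian vector costs $O(mn)$. The sketch below shows that cost structure with randomly chosen directions and strengths; it illustrates the storage and sampling cost, not the paper's exact update recursion.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 20               # decision variables, stored directions

# Implicit Cholesky-like factor: a product of m rank-one updates
#   A = (I + b_m v_m v_m^T) ... (I + b_1 v_1 v_1^T),
# stored as m vectors and m scalars (O(m*n) memory, not O(n^2)).
V = rng.standard_normal((m, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit direction vectors
b = 0.1 * np.ones(m)                            # strengths (illustrative)

def sample_candidate():
    """Draw x = A z with z ~ N(0, I); cost is m dot products, O(m*n)."""
    x = rng.standard_normal(n)
    for k in range(m):
        x = x + b[k] * V[k] * (V[k] @ x)   # apply one rank-one factor
    return x

x = sample_candidate()   # one candidate solution of the evolution strategy
```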
Image compression using neural networks has reached or exceeded the performance of non-neural methods (such as JPEG, WebP, and BPG). While these networks are state of the art in rate-distortion performance, their computational feasibility remains a challenge. We apply automatic network optimization techniques to reduce the computational complexity of a popular architecture used in neural image compression, analyze the decoder complexity in execution runtime, and explore the trade-offs between two distortion metrics, rate-distortion performance, and run-time performance in order to design more computationally efficient neural image compression. We find that our method decreases the decoder run-time requirements by over 50% for a state-of-the-art neural architecture.
The challenge of assigning importance to individual neurons in a network is of interest when interpreting deep learning models. In recent work, Dhamdhere et al. proposed Total Conductance, a natural refinement of Integrated Gradients for attributing importance to internal neurons. Unfortunately, the authors found that calculating conductance in tensorflow required the addition of several custom gradient operators and did not scale well. In this work, we show that the formula for Total Conductance is mathematically equivalent to Path Integrated Gradients computed on a hidden layer in the network. We provide a scalable implementation of Total Conductance using standard tensorflow gradient operators that we call Neuron Integrated Gradients. We compare Neuron Integrated Gradients to DeepLIFT, a pre-existing computationally efficient approach that is applicable to calculating internal neuron importance. We find that DeepLIFT produces strong empirical results and is faster to compute, but because it lacks the theoretical properties of Neuron Integrated Gradients, it may not always be preferred in practice. Colab notebook reproducing results: http://bit.ly/neuronintegratedgradients
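The stated equivalence amounts to running integrated gradients with the hidden activations treated as the input. The sketch below does this on a tiny analytic network where the gradient with respect to the hidden layer is available in closed form (a real implementation would use a framework's autodiff, e.g. standard tensorflow gradient operators as the paper does); the model and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 8, 5
W1 = rng.standard_normal((n_hid, n_in))   # input -> hidden (linear)
w2 = rng.standard_normal(n_hid)           # hidden -> scalar output

def hidden(x):
    return W1 @ x                          # hidden pre-activations

def output(h):
    return w2 @ np.maximum(h, 0.0)         # F(h) = w2 . relu(h)

def grad_wrt_hidden(h):
    return w2 * (h > 0.0)                  # dF/dh, analytic for this model

def neuron_integrated_gradients(x, x_base, steps=64):
    """Integrated gradients with the hidden layer treated as the input."""
    h, h0 = hidden(x), hidden(x_base)
    alphas = (np.arange(steps) + 0.5) / steps   # midpoint Riemann sum
    avg_grad = np.mean([grad_wrt_hidden(h0 + a * (h - h0)) for a in alphas],
                       axis=0)
    return (h - h0) * avg_grad             # per-hidden-neuron attributions

x = rng.standard_normal(n_in)
attr = neuron_integrated_gradients(x, np.zeros(n_in))
# Completeness: attr.sum() approximates F(x) - F(baseline).
```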