
Fast Regression of the Tritium Breeding Ratio in Fusion Reactors

Submitted by: Petr Mánek
Publication date: 2021
Paper language: English
Author: Petr Mánek





The tritium breeding ratio (TBR) is an essential quantity for the design of modern and next-generation D-T fueled nuclear fusion reactors. Representing the ratio between tritium fuel generated in breeding blankets and fuel consumed during reactor runtime, the TBR depends on reactor geometry and material properties in a complex manner. In this work, we explored the training of surrogate models to produce a cheap but high-quality approximation for a Monte Carlo TBR model in use at the UK Atomic Energy Authority. We investigated possibilities for dimensional reduction of its feature space, reviewed 9 families of surrogate models for potential applicability, and performed hyperparameter optimisation. Here we present the performance and scaling properties of these models, the fastest of which, an artificial neural network, demonstrated $R^2 = 0.985$ and a mean prediction time of $0.898\,\mu\mathrm{s}$, representing a relative speedup of $8 \cdot 10^6$ with respect to the expensive MC model. We further present a novel adaptive sampling algorithm, Quality-Adaptive Surrogate Sampling, capable of interfacing with any of the individually studied surrogates. Our preliminary testing on a toy TBR theory has demonstrated the efficacy of this algorithm for accelerating the surrogate modelling process.
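To make the surrogate-modelling workflow above concrete, here is a minimal sketch (not the authors' implementation) that trains a small neural-network regressor on synthetic stand-ins for the Monte Carlo TBR model's inputs and outputs, using scikit-learn's MLPRegressor; the feature count, network size, and toy response function are assumptions chosen only for illustration.

```python
# Minimal sketch of a neural-network TBR surrogate (illustrative only).
# X and y are synthetic stand-ins for reactor parameters and MC-computed TBR values.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(size=(10_000, 20))                       # 20 toy geometry/material features
y = X[:, 0] * np.sin(3 * X[:, 1]) + 0.1 * X.sum(axis=1)  # toy "TBR" response

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
surrogate.fit(X_tr, y_tr)                                # cheap approximation of the expensive model
print("held-out R^2:", r2_score(y_te, surrogate.predict(X_te)))
```

Once trained, such a surrogate can be evaluated in microseconds per sample, which is what makes the large relative speedups reported above possible.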




Read also

Modelling neutral beam injection (NBI) in fusion reactors requires computing the trajectories of large ensembles of particles. Slowing down times of up to one second combined with nanosecond time steps make these simulations computationally very costly. This paper explores the performance of BGSDC, a new numerical time stepping method, for tracking ions generated by NBI in the DIII-D and JET reactors. BGSDC is a high-order generalisation of the Boris method, combining it with spectral deferred corrections and the Generalized Minimal Residual method (GMRES). Without collision modelling, where numerical drift can be quantified accurately, we find that BGSDC can deliver higher-quality particle distributions than the standard Boris integrator at comparable cost, or comparable distributions at lower cost. With collision models, quantifying accuracy is difficult, but we show that BGSDC produces stable distributions at larger time steps than Boris.
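For orientation, BGSDC generalises the classic Boris update. The sketch below implements one standard non-relativistic Boris step, not BGSDC itself; the uniform fields, deuteron charge-to-mass ratio, and time step are illustrative assumptions unrelated to the paper's DIII-D/JET configurations.

```python
# One standard non-relativistic Boris step (the base scheme BGSDC builds on).
import numpy as np

def boris_step(x, v, q_over_m, E, B, dt):
    """Advance position x and velocity v of a charged particle by one step dt."""
    v_minus = v + 0.5 * q_over_m * E * dt          # first half electric kick
    t = 0.5 * q_over_m * B * dt                    # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)       # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_over_m * E * dt       # second half electric kick
    return x + v_new * dt, v_new

# Illustrative use: a deuteron gyrating in a uniform 2 T field.
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
q_over_m = 1.602e-19 / 3.344e-27
E, B = np.zeros(3), np.array([0.0, 0.0, 2.0])
for _ in range(1000):
    x, v = boris_step(x, v, q_over_m, E, B, dt=1.0e-9)
```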
Reliable seed yield estimation is an indispensable step in plant breeding programs geared towards cultivar development in major row crops. The objective of this study is to develop a machine learning (ML) approach adept at soybean [\textit{Glycine max} L. (Merr.)] pod counting to enable genotype seed yield rank prediction from in-field video data collected by a ground robot. To meet this goal, we developed a multi-view image-based yield estimation framework utilizing deep learning architectures. Plant images captured from different angles were fused to estimate the yield and subsequently to rank soybean genotypes for application in breeding decisions. We used data from a controlled imaging environment in the field, as well as from plant breeding test plots in the field, to demonstrate the efficacy of our framework by comparing its performance with manual pod counting and yield estimation. Our results demonstrate the promise of ML models in making breeding decisions with a significant reduction of time and human effort, and in opening new avenues for breeding methods to develop cultivars.
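As a purely hypothetical illustration of the multi-view fusion idea (not the authors' architecture), the PyTorch sketch below encodes each camera view with a shared CNN backbone, concatenates the per-view features, and regresses a single yield value; the backbone, fusion-by-concatenation choice, and layer widths are assumptions.

```python
# Hypothetical multi-view fusion for yield regression (not the paper's model).
import torch
import torch.nn as nn

class MultiViewYieldRegressor(nn.Module):
    def __init__(self, n_views=3):
        super().__init__()
        # Shared lightweight CNN backbone applied to each view
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fuse per-view features by concatenation, then regress a scalar yield
        self.head = nn.Sequential(nn.Linear(32 * n_views, 64), nn.ReLU(),
                                  nn.Linear(64, 1))

    def forward(self, views):                 # views: (batch, n_views, 3, H, W)
        feats = [self.backbone(views[:, i]) for i in range(views.size(1))]
        return self.head(torch.cat(feats, dim=1)).squeeze(-1)

model = MultiViewYieldRegressor(n_views=3)
dummy = torch.randn(4, 3, 3, 128, 128)        # 4 plants, 3 camera angles each
print(model(dummy).shape)                     # torch.Size([4])
```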
M. Osipenko, M. Ripani, G. Ricco (2015)
In this paper we describe the development and first tests of a neutron spectrometer designed for high-flux environments, such as those found in fast nuclear reactors. The spectrometer is based on the conversion of neutrons impinging on $^6$Li into $\alpha$ and $t$, whose total energy comprises the initial neutron energy and the reaction $Q$-value. A $^6$LiF layer is sandwiched between two CVD diamond detectors, which measure the two reaction products in coincidence. The spectrometer was calibrated at two neutron energies, in well-known thermal and 3 MeV neutron fluxes. The measured neutron detection efficiency varies from $4.2\times 10^{-4}$ to $3.5\times 10^{-8}$ for thermal and 3 MeV neutrons, respectively. These values are in agreement with Geant4 simulations and close to simple estimates based on the knowledge of the $^6$Li(n,$\alpha$)$t$ cross section. The energy resolution of the spectrometer was found to be better than 100 keV when using 5 m cables between the detector and the preamplifiers.
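For orientation, the measurement principle stated above can be written out explicitly using the standard $Q$-value of the reaction, $Q \approx 4.78$ MeV: the coincident reaction products satisfy $E_\alpha + E_t = E_n + Q$, so the incident neutron energy is recovered from the summed deposited energies as $E_n = E_\alpha + E_t - Q$.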
The neutron yields observed in inertial confinement fusion experiments at higher convergence ratios are about two orders of magnitude smaller than the neutron yields predicted by one-dimensional models, the discrepancy being attributed to the development of instabilities. We consider the possibility that ignition and a moderate gain could be achieved with existing laser facilities if the laser driver energy is used only to radially compress the fuel capsule to high densities but relatively low temperatures, while the ignition of the fusion reactions in the compressed fuel capsule is effected by a synchronized hypervelocity impact. A positively charged incident projectile can be accelerated to a velocity of $3.5 \times 10^6$ m/s, resulting in ignition temperatures of about 4 keV, by a conventional low-beta linac with a length of 13 km, provided deuterium-tritium densities of 570 g/cm$^3$ can be obtained by laser-driven compression.
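As a rough order-of-magnitude check (not part of the quoted analysis), the kinetic energy per nucleon of a projectile travelling at $3.5 \times 10^6$ m/s is $\tfrac{1}{2} m_u v^2 \approx \tfrac{1}{2}\,(1.66\times 10^{-27}\ \mathrm{kg})\,(3.5\times 10^6\ \mathrm{m/s})^2 \approx 1.0\times 10^{-14}\ \mathrm{J} \approx 64$ keV, comfortably above the quoted ignition temperature of about 4 keV; the temperature actually reached depends on how this energy is shared with and thermalised in the compressed fuel.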
Materials discovery is crucial for making scientific advances in many domains. Collections of data from experiments and first-principles computations have spurred interest in applying machine learning methods to create predictive models capable of mapping from composition and crystal structure to materials properties. Generally, these are regression problems with the input being a 1D vector composed of numerical attributes representing the material composition and/or crystal structure. While neural networks consisting of fully connected layers have been applied to such problems, their performance often suffers from the vanishing gradient problem when network depth is increased. In this paper, we study and propose design principles for building deep regression networks composed of fully connected layers with numerical vectors as input. We introduce a novel deep regression network with individual residual learning, IRNet, that places shortcut connections after each layer so that each layer learns the residual mapping between its output and input. We use the problem of learning properties of inorganic materials from numerical attributes derived from material composition and/or crystal structure to compare IRNet's performance against that of other machine learning techniques. Using multiple datasets from the Open Quantum Materials Database (OQMD) and the Materials Project for training and evaluation, we show that IRNet provides significantly better prediction performance than the state-of-the-art machine learning approaches currently used by domain scientists. We also show that IRNet's use of individual residual learning leads to better convergence during the training phase than when shortcut connections are placed between multi-layer stacks, while maintaining the same number of parameters.
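To illustrate the "shortcut after every layer" idea in the spirit of IRNet, here is a hypothetical PyTorch sketch; the layer widths, batch normalisation placement, and the linear projection used when input and output widths differ are assumptions, not the published architecture.

```python
# Per-layer ("individual") residual learning for a fully connected regression
# network, in the spirit of IRNet; widths and projection choices are assumptions.
import torch
import torch.nn as nn

class IndividualResidualBlock(nn.Module):
    """One fully connected layer with a shortcut connection around it."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(in_dim, out_dim),
                                nn.BatchNorm1d(out_dim), nn.ReLU())
        # Project the shortcut when input and output widths differ
        self.proj = (nn.Identity() if in_dim == out_dim
                     else nn.Linear(in_dim, out_dim, bias=False))

    def forward(self, x):
        return self.fc(x) + self.proj(x)      # each layer learns a residual mapping

class DeepRegressionNet(nn.Module):
    def __init__(self, in_dim, widths=(256, 256, 128, 64)):
        super().__init__()
        dims = [in_dim, *widths]
        self.blocks = nn.Sequential(*[IndividualResidualBlock(a, b)
                                      for a, b in zip(dims[:-1], dims[1:])])
        self.out = nn.Linear(widths[-1], 1)   # scalar materials property

    def forward(self, x):
        return self.out(self.blocks(x)).squeeze(-1)

net = DeepRegressionNet(in_dim=145)           # e.g. composition-derived attributes
print(net(torch.randn(8, 145)).shape)         # torch.Size([8])
```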
