Plasma tomography consists of reconstructing the 2D radiation profile in a poloidal cross-section of a fusion device from line-integrated measurements along several lines of sight. The reconstruction process is computationally intensive and, in practice, only a few reconstructions are usually computed per pulse. In this work, we trained a deep neural network on a large collection of sample tomograms produced at JET over several years. Once trained, the network reproduces those results with high accuracy. More importantly, it can compute all the tomographic reconstructions for a given pulse in just a few seconds. This makes it possible to visualize several phenomena -- such as plasma heating, disruptions and impurity transport -- over the course of a discharge.
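The core idea -- learning a direct mapping from line-integrated measurements back to the 2D emissivity profile -- can be sketched with a toy linear model. Everything below is synthetic and illustrative: the geometry matrix `G`, the problem sizes, and the single linear layer standing in for the deep network are assumptions, not the JET setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy geometry: n_pix emissivity pixels viewed by n_los
# lines of sight (illustrative sizes, not the real JET camera layout).
n_pix, n_los, n_train = 64, 16, 2000
G = rng.random((n_los, n_pix))            # geometry matrix of LOS weights

# Synthetic training pairs: emissivity profiles and their line integrals.
X = rng.random((n_train, n_pix))
Y = X @ G.T                               # forward model y = G x

# "Train" a single linear layer W: measurements -> tomogram, a minimal
# stand-in for the deep network described in the abstract.
W = np.zeros((n_pix, n_los))
lr = 1.0 / np.linalg.norm(Y, 2) ** 2      # stable step for this quadratic
losses = []
for _ in range(500):
    pred = Y @ W.T
    losses.append(np.mean((pred - X) ** 2))
    W -= lr * (pred - X).T @ Y            # gradient of the squared error

# Once trained, reconstructing a profile is a single matrix product,
# which is why a whole pulse can be processed in seconds.
x_hat = W @ (G @ rng.random(n_pix))
```

The speed claim in the abstract follows from this structure: after training, inference is a fixed sequence of matrix products with no per-pulse optimization.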
Tomographic image reconstruction with deep learning is an emerging field, but a recent landmark study reveals that several deep reconstruction networks are unstable for computed tomography (CT) and magnetic resonance imaging (MRI). Specifically, three kinds of instabilities were reported: (1) strong image artefacts from tiny perturbations, (2) small features missing in a deeply reconstructed image, and (3) decreased imaging performance with increased input data. On the other hand, compressed sensing (CS) inspired reconstruction methods do not suffer from these instabilities because of their built-in kernel awareness. For deep reconstruction to realize its full potential and become a mainstream approach for tomographic imaging, it is thus critically important to meet this challenge by stabilizing deep reconstruction networks. Here we propose an Analytic Compressed Iterative Deep (ACID) framework to address this challenge. ACID synergizes a deep reconstruction network trained on big data, kernel awareness from CS-inspired processing, and iterative refinement to minimize the data residual relative to the real measurements. Our study demonstrates that deep reconstruction using ACID is accurate and stable, and sheds light on the convergence mechanism of the ACID iteration under a Bounded Relative Error Norm (BREN) condition. In particular, the study shows that ACID-based reconstruction is resilient against adversarial attacks, superior to classic sparsity-regularized reconstruction alone, and eliminates the three kinds of instabilities. We anticipate that this integrative data-driven approach will help promote the development and translation of deep tomographic image reconstruction networks into clinical applications.
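The interplay ACID describes -- a network prediction refined by CS-inspired processing that shrinks the measured data residual -- can be sketched on a toy sparse recovery problem. This is not the ACID algorithm itself: the trained deep network is replaced here by an identity placeholder, so the loop reduces to plain ISTA (gradient step on the data residual plus a sparsity-enforcing soft threshold), which illustrates only the "kernel awareness" half of the framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear imaging system y = A x with a sparse ground truth.
n, m = 100, 60
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.standard_normal(8)
y = A @ x_true

def soft_threshold(v, t):
    # Sparsity-promoting proximal step: the CS "kernel awareness".
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def network(v):
    # Placeholder for the trained deep reconstruction network;
    # with the identity here, the loop is ordinary ISTA.
    return v

x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2) ** 2
for _ in range(300):
    x = network(x)
    x = x + step * A.T @ (y - A @ x)       # shrink the data residual
    x = soft_threshold(x, 0.01 * step)     # enforce sparsity

residual = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

The data-residual term is what gives the iteration its stability: any reconstruction inconsistent with the actual measurements is pulled back toward them at every step.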
In many applications, it is important to reconstruct a fluid flow field, or some other high-dimensional state, from limited measurements and limited data. In this work, we propose a shallow neural network-based learning methodology for such fluid flow reconstruction. Our approach learns an end-to-end mapping between the sensor measurements and the high-dimensional fluid flow field, without any heavy preprocessing of the raw data. No prior knowledge is assumed to be available, and the estimation method is purely data-driven. We demonstrate the performance on three examples in fluid mechanics and oceanography, showing that this modern data-driven approach outperforms traditional modal approximation techniques commonly used for flow reconstruction. Not only does the proposed method show superior performance, it can also match the accuracy of traditional methods while using significantly fewer sensors. Thus, the mathematical architecture is ideal for emerging global monitoring technologies where measurement data are often limited.
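A minimal version of this end-to-end idea -- a single-hidden-layer network trained to map a handful of point-sensor readings directly to the full field, with no modal decomposition -- can be written in a few lines. The synthetic "flow fields" (random combinations of a few spatial modes), the sensor count, and the network sizes below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "flow fields": random combinations of a few spatial modes,
# observed at a handful of point sensors (all synthetic).
n_grid, n_modes, n_sensors, n_train = 200, 5, 10, 1000
modes = rng.standard_normal((n_modes, n_grid))
fields = rng.standard_normal((n_train, n_modes)) @ modes
sensor_idx = rng.choice(n_grid, n_sensors, replace=False)
sensors = fields[:, sensor_idx] / fields[:, sensor_idx].std()

# Shallow network: one hidden tanh layer, trained end-to-end from raw
# sensor readings to the full field, with no modal preprocessing.
h, lr = 32, 1e-3
W1 = rng.standard_normal((n_sensors, h)) * 0.1
b1 = np.zeros(h)
W2 = rng.standard_normal((h, n_grid)) * 0.1

def forward(s):
    z = np.tanh(s @ W1 + b1)
    return z, z @ W2

def rel_error():
    pred = forward(sensors)[1]
    return np.linalg.norm(pred - fields) / np.linalg.norm(fields)

err_before = rel_error()
for _ in range(2000):
    z, pred = forward(sensors)
    d = (pred - fields) / n_train          # output-layer error signal
    dz = (d @ W2.T) * (1 - z ** 2)         # backprop through tanh
    W2 -= lr * (z.T @ d)
    W1 -= lr * (sensors.T @ dz)
    b1 -= lr * dz.sum(axis=0)
err_after = rel_error()
```

Because the fields live on a low-dimensional set of modes, even ten sensors carry enough information for the network to recover the full state, which is the intuition behind the "fewer sensors" claim.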
Electron tomography has achieved higher resolution and quality at reduced doses with recent advances in compressed sensing. Compressed sensing (CS) theory exploits the inherent sparse signal structure to efficiently reconstruct three-dimensional (3D) volumes at the nanoscale from undersampled measurements. However, reconstruction remains a computational bottleneck, with processing times running from hours to days. Here we demonstrate a framework for dynamic compressed sensing that produces a 3D specimen structure that updates in real time as new specimen projections are collected. Researchers can begin interpreting 3D specimens as data are collected, which facilitates high-throughput and interactive analysis. Using scanning transmission electron microscopy (STEM), we show that dynamic compressed sensing accelerates convergence by 3-fold while reducing error by 27% for an Au/SrTiO3 nanoparticle specimen. Before a tomography experiment is completed, the 3D tomogram shows interpretable structure when acquisition is only 33% complete, and fine details become visible by 66%. Upon completion of an experiment, a high-fidelity 3D visualization is produced without further delay. Additionally, reconstruction parameters that tune data fidelity can be adjusted throughout the computation without rerunning the entire process.
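The "dynamic" aspect -- refining a running reconstruction with whatever projections have arrived so far, rather than waiting for the full tilt series -- can be sketched with a streaming Landweber/SIRT-style loop. The random measurement matrix, block sizes, and iteration counts below are illustrative stand-ins, not the STEM geometry or the paper's CS solver.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy tomography: each "projection" contributes one row-block of a
# random measurement matrix (hypothetical stand-in for tilt geometry).
n_vox, n_proj, rows_per_proj = 80, 20, 8
A = rng.standard_normal((n_proj * rows_per_proj, n_vox)) / np.sqrt(n_vox)
x_true = rng.random(n_vox)
y = A @ x_true

x = np.zeros(n_vox)
errors = []
for k in range(n_proj):
    # Dynamic scheme: refine the running reconstruction using only the
    # projections acquired so far, instead of waiting for the full scan.
    A_k = A[: (k + 1) * rows_per_proj]
    y_k = y[: (k + 1) * rows_per_proj]
    step = 1.0 / np.linalg.norm(A_k, 2) ** 2
    for _ in range(50):                    # a few SIRT-style sweeps
        x = x + step * A_k.T @ (y_k - A_k @ x)
        x = np.clip(x, 0.0, None)          # non-negativity constraint
    errors.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

The `errors` list traces how the intermediate tomogram sharpens as projections stream in, mirroring the abstract's observation that the structure is interpretable well before acquisition finishes.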
Machine learning algorithms have been available since the 1990s, but their adoption in the physical sciences is much more recent. While these algorithms have already proven useful in uncovering new properties of materials and in simplifying experimental protocols, their usage in liquid crystals research is still limited. This is surprising because optical imaging techniques are often applied in this line of research, and it is precisely with images that machine learning algorithms have achieved major breakthroughs in recent years. Here we use convolutional neural networks to probe several properties of liquid crystals directly from their optical images, without manual feature engineering. By optimizing simple architectures, we find that convolutional neural networks can predict physical properties of liquid crystals with exceptional accuracy. We show that these deep neural networks identify liquid crystal phases and predict the order parameter of simulated nematic liquid crystals almost perfectly. We also show that convolutional neural networks identify the pitch length of simulated samples of cholesteric liquid crystals and the sample temperature of an experimental liquid crystal with very high precision.
A novel method, utilizing convolutional neural networks (CNNs), is proposed to reconstruct hyperspectral cubes from computed tomography imaging spectrometer (CTIS) images. Current reconstruction algorithms typically suffer from long reconstruction times and mediocre precision when the number of spectral channels is large. The proposed CNNs deliver higher precision and shorter reconstruction times than a standard expectation maximization algorithm. In addition, the network can handle two different types of real-world images at the same time -- specifically, ColorChecker and carrot spectral images are considered. This work paves the way toward real-time reconstruction of hyperspectral cubes from CTIS images.
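For context, the expectation maximization baseline the CNNs are compared against is the classic multiplicative MLEM update, which can be sketched on a toy linear system. The system matrix `H` and problem sizes below are illustrative assumptions, not a real CTIS geometry.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy CTIS-like system matrix H mapping hyperspectral voxels to
# detector pixels (sizes are illustrative only).
n_vox, n_det = 50, 120
H = rng.random((n_det, n_vox))
x_true = rng.random(n_vox)
g = H @ x_true                             # noiseless detector image

# Classic multiplicative EM (MLEM) iteration:
#   x <- x * [H^T (g / Hx)] / [H^T 1]
x = np.ones(n_vox)
norm = H.T @ np.ones(n_det)
res_before = np.linalg.norm(H @ x - g) / np.linalg.norm(g)
for _ in range(200):
    x *= (H.T @ (g / (H @ x + 1e-12))) / norm
res_after = np.linalg.norm(H @ x - g) / np.linalg.norm(g)
```

The multiplicative form keeps the estimate non-negative at every step, but each iteration requires a full forward and back projection -- the cost that makes EM slow for many spectral channels and motivates the one-pass CNN alternative.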