Recent advances in acquisition equipment are providing experiments with growing numbers of precise yet affordable sensors. At the same time, improved computational power from new hardware resources (GPUs, FPGAs, ACAPs) has become available at relatively low cost. This led us to explore the possibility of completely renewing the acquisition chain of a fusion experiment, where many high-rate data sources from different diagnostics can be combined within a broad framework of algorithms. If, on one hand, adding new data sources from different diagnostics enriches our knowledge of the physics, on the other hand the dimensionality of the overall model grows, making relations among variables increasingly opaque. A new approach to the integration of such heterogeneous diagnostics, based on the composition of deep variational autoencoders, can ease this problem by acting as a structural sparse regularizer. This has been applied to RFX-mod experiment data, integrating the soft X-ray linear images of plasma temperature with the magnetic state. However, to ensure real-time signal analysis, these algorithmic techniques must be adapted to run on well-suited hardware. In particular, it is shown that, by quantizing the neuron transfer functions, such models can be modified to create an embedded firmware. This firmware, which approximates the deep inference model with a set of simple operations, maps well onto the simple logic units that are abundant in FPGAs. This is the key factor that permits the use of affordable hardware for complex deep neural topologies and their operation in real time.
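To make the FPGA mapping concrete, the sketch below shows one way a neuron transfer function can be quantized into a small lookup table; the grid size, range, and function names are illustrative assumptions, not the firmware actually generated for RFX-mod.

```python
import numpy as np

def build_activation_lut(fn, x_min=-4.0, x_max=4.0, n_bits=4):
    """Tabulate a transfer function on a uniform grid of 2**n_bits entries."""
    grid = np.linspace(x_min, x_max, 2 ** n_bits)
    return grid, fn(grid)

def quantized_activation(x, grid, table):
    """Replace the smooth transfer function by a piecewise-constant table lookup,
    the kind of approximation that maps directly onto FPGA logic units."""
    idx = np.clip(np.searchsorted(grid, x), 0, len(grid) - 1)
    return table[idx]

# Compare the smooth and quantized versions of tanh on a few pre-activations.
grid, table = build_activation_lut(np.tanh, n_bits=4)
z = np.array([-2.5, -0.3, 0.0, 0.7, 3.1])
print(np.tanh(z))
print(quantized_activation(z, grid, table))
```

Increasing n_bits trades lookup-table size (and hence logic usage) against how closely the quantized firmware tracks the original inference model.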
Data integration has been studied extensively for decades and approached from different angles. However, this domain still remains largely rule-driven and lacks universal automation. Recent developments in machine learning, and in particular deep learning, have opened the way to more general and more efficient solutions to data-integration problems. In this work, we propose a general approach to modeling and integrating entities from structured data, such as relational databases, as well as unstructured sources, such as free text from news articles. Our approach is designed to explicitly model and leverage relations between entities, thereby using all available information and preserving as much context as possible. This is achieved by combining Siamese and graph neural networks to propagate information between connected entities and support high scalability. We evaluate our method on the task of integrating data about business entities, and we demonstrate that it outperforms standard rule-based systems, as well as other deep learning approaches that do not use graph-based representations.
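As a rough illustration of this combination, the PyTorch sketch below pairs one round of mean-aggregation message passing with a shared (Siamese) encoder that scores whether two entity nodes match; the layer sizes, comparison head, and all identifiers are hypothetical, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    """One round of mean-aggregation message passing followed by a projection,
    shared (Siamese-style) between the two entities being compared."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.self_lin = nn.Linear(in_dim, hid_dim)
        self.neigh_lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) row-normalized adjacency
        neigh = adj @ x                                   # aggregate neighbours
        return torch.relu(self.self_lin(x) + self.neigh_lin(neigh))

class SiameseMatcher(nn.Module):
    """Scores whether two entity nodes (possibly from different sources) match."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.encoder = GraphEncoder(in_dim, hid_dim)      # shared weights
        self.head = nn.Linear(hid_dim, 1)

    def forward(self, x_a, adj_a, i_a, x_b, adj_b, i_b):
        h_a = self.encoder(x_a, adj_a)[i_a]
        h_b = self.encoder(x_b, adj_b)[i_b]
        # symmetric comparison of the two entity embeddings
        return torch.sigmoid(self.head(torch.abs(h_a - h_b)))

# Toy usage: two 3-node graphs with 8-dimensional node features.
x_a, x_b = torch.randn(3, 8), torch.randn(3, 8)
adj = torch.eye(3)
model = SiameseMatcher(8, 16)
print(model(x_a, adj, 0, x_b, adj, 1))   # match probability, node 0 vs node 1
```

Because the encoder weights are shared across both inputs and the aggregation is local, the same model scales to large entity graphs by scoring candidate pairs independently.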
Adjoint-based optimization methods are attractive for aerodynamic shape design primarily because their computational cost is independent of the dimensionality of the input space and because they generate high-fidelity gradients that can then be used in a gradient-based optimizer. This makes them very well suited for high-fidelity, simulation-based aerodynamic shape optimization of highly parametrized geometries such as aircraft wings. However, the development of adjoint-based solvers involves careful mathematical treatment, and their implementation requires detailed software development. Furthermore, they can become prohibitively expensive when multiple optimization problems are being solved, each requiring multiple restarts to circumvent local optima. In this work, we propose a machine-learning-enabled, surrogate-based framework that replaces the expensive adjoint solver without compromising predictive accuracy. Specifically, we first train a deep neural network (DNN) on training data generated by evaluating the high-fidelity simulation model on a model-agnostic design of experiments over the geometry shape parameters. The optimum shape may then be computed using a gradient-based optimizer coupled with the trained DNN. Subsequently, we also perform a gradient-free Bayesian optimization in which the trained DNN is used as the prior mean. We observe that the latter framework (DNN-BO) improves upon the DNN-only optimization strategy for the same computational cost. Overall, this framework predicts the true optimum with very high accuracy while requiring far fewer high-fidelity function calls than the adjoint-based method. Furthermore, we show that multiple optimization problems can be solved with the same machine learning model with high accuracy, amortizing the offline costs associated with constructing our models.
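The DNN-only branch of such a framework can be sketched as follows: fit a network to design-of-experiments samples of the high-fidelity model, then feed its autograd gradients to a gradient-based optimizer in place of the adjoint solver. The toy objective, network size, and optimizer settings below are illustrative assumptions only.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.optimize import minimize

# Hypothetical stand-in for the high-fidelity solver (e.g., drag as a function
# of shape parameters); in practice each call would be an expensive CFD run.
def high_fidelity(x):
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * x).sum()

dim = 8
X = np.random.uniform(-1, 1, size=(128, dim))            # design of experiments
y = np.array([high_fidelity(x) for x in X])

surrogate = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(),
                          nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
X_t = torch.tensor(X, dtype=torch.float32)
y_t = torch.tensor(y, dtype=torch.float32)
for _ in range(2000):                                     # fit the surrogate
    opt.zero_grad()
    loss = ((surrogate(X_t).squeeze() - y_t) ** 2).mean()
    loss.backward()
    opt.step()

def f_and_grad(x):
    """Surrogate value and gradient via autograd, replacing the adjoint solver."""
    xt = torch.tensor(x, dtype=torch.float32, requires_grad=True)
    f = surrogate(xt).squeeze()
    f.backward()
    return f.item(), xt.grad.numpy().astype(np.float64)

res = minimize(f_and_grad, x0=np.zeros(dim), jac=True, method="L-BFGS-B",
               bounds=[(-1, 1)] * dim)
print(res.x, high_fidelity(res.x))
```

Once the surrogate is trained, restarts from different initial shapes reuse it for free, which is what amortizes the offline data-generation cost across multiple optimization problems.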
The Cosmic Microwave Background (CMB) has been measured over a wide range of multipoles. Experiments with arc-minute resolution like the Atacama Cosmology Telescope (ACT) have contributed to the measurement of primary and secondary anisotropies, leading to remarkable scientific discoveries. Such findings require careful data selection in order to remove poorly behaved detectors and unwanted contaminants. The current data-classification methodology used by ACT relies on several statistical parameters that are assessed and fine-tuned by an expert. This method is highly time-consuming and band- or season-specific, which makes it less scalable and efficient for future CMB experiments. In this work, we propose a supervised machine learning model to classify detectors of CMB experiments. The model corresponds to a deep convolutional neural network. We tested our method on real ACT data, using the 2008 season, 148 GHz, as the training set with labels provided by the ACT data selection software. The model learns to classify time-streams directly from the raw data. For the season and frequency considered during training, we find that our classifier reaches a precision of 99.8%. For 220 and 280 GHz data, season 2008, we obtained precisions of 99.4% and 97.5%, respectively. Finally, we performed a cross-season test over 148 GHz data from 2009 and 2010, for which our model reaches precisions of 99.8% and 99.5%, respectively. Our model is about 10x faster than the current pipeline, making it potentially suitable for real-time implementations.
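A minimal version of such a time-stream classifier, assuming a small 1D convolutional network with two output classes (keep/cut), might look like the following PyTorch sketch; the layer sizes and sample length are placeholders, not the architecture trained on ACT data.

```python
import torch
import torch.nn as nn

class TimeStreamClassifier(nn.Module):
    """Small 1D CNN labelling a raw detector time-stream as good or bad."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),      # makes the head length-agnostic
        )
        self.classifier = nn.Linear(32 * 8, 2)   # {keep, cut} logits

    def forward(self, x):
        # x: (batch, 1, n_samples) raw time-ordered data
        h = self.features(x)
        return self.classifier(h.flatten(1))

# Toy batch of 4 time-streams with 4096 samples each.
model = TimeStreamClassifier()
logits = model(torch.randn(4, 1, 4096))
print(logits.shape)   # (4, 2)
```

Working directly on the raw time-ordered data, as in this sketch, is what removes the need to hand-tune per-band or per-season statistical cut parameters.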
Intelligent systems with Artificial Neural Networks (ANNs) embedded in hardware for real-time applications are in growing demand in fields such as the Internet of Things (IoT) and Machine to Machine (M2M). However, applying ANNs in this type of system poses a significant challenge due to the high computational power required to process their basic operations. This paper presents an implementation strategy for a Multilayer Perceptron (MLP) neural network on a microcontroller (a low-cost, low-power platform). A modular, matrix-based MLP with the full classification process was implemented on the microcontroller, together with backpropagation training. Testing and validation were performed through Hardware-in-the-Loop (HIL) evaluation of the Mean Squared Error (MSE) of the training process, the classification results, and the processing time of each implementation module. The results revealed a linear relationship between the values of the hyperparameters and the processing time required for classification, and showed that the processing time meets the requirements of many applications in the fields mentioned above. These findings show that this implementation strategy and this platform can be applied successfully to real-time applications that require the capabilities of ANNs.
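The matrix-based formulation referred to above can be illustrated in a few lines; the NumPy sketch below runs a one-hidden-layer forward pass and one backpropagation/MSE update, with layer sizes, the XOR check, and the learning rate chosen purely for illustration (the actual firmware would implement the equivalent routines in fixed-point C on the microcontroller).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    h = sigmoid(W1 @ x + b1)          # hidden layer
    y = sigmoid(W2 @ h + b2)          # output layer
    return h, y

def train_step(x, t, W1, b1, W2, b2, lr=0.5):
    """One backpropagation update minimizing the MSE against target t."""
    h, y = forward(x, W1, b1, W2, b2)
    delta_out = (y - t) * y * (1 - y)             # error at the output layer
    delta_hid = (W2.T @ delta_out) * h * (1 - h)  # backpropagated to hidden layer
    W2 -= lr * np.outer(delta_out, h); b2 -= lr * delta_out
    W1 -= lr * np.outer(delta_hid, x); b1 -= lr * delta_hid
    return float(np.mean((y - t) ** 2))

# Tiny check of the forward pass and training loop on the XOR problem.
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
data = [([0, 0], [0]), ([0, 1], [1]), ([1, 0], [1]), ([1, 1], [0])]
for epoch in range(5000):
    for x, t in data:
        mse = train_step(np.array(x, float), np.array(t, float), W1, b1, W2, b2)
print(mse)
```

Because every layer reduces to the same matrix-vector kernel, the classification time grows predictably with the layer sizes, which is consistent with the linear hyperparameter/latency relationship reported above.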
The internal states of most deep neural networks are difficult to interpret, which makes diagnosis and debugging during training challenging. Activation maximization methods are widely used, but they lead to multiple optima and are hard to interpret (they appear noise-like) for complex neurons. Image-based methods use maximally activating image regions, which are easier to interpret but do not provide pixel-level insight into why the neuron responds to them. In this work we introduce an MCMC method, Langevin Dynamics Activation Maximization (LDAM), designed for diagnostic visualization. LDAM provides two affordances in combination: the ability to explore the set of maximally activating pre-images, and the ability to trade off interpretability against pixel-level accuracy using a GAN-style discriminator as a regularizer. We present case studies on the MNIST, CIFAR and ImageNet datasets exploring these trade-offs. Finally, we show that diagnostic visualization using LDAM leads to a novel insight into the parameter averaging method for deep net training.
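A generic Langevin-style activation-maximization loop, sketched here under our own assumptions rather than taken from the paper, alternates a gradient-ascent step on the target activation with additive Gaussian noise and an optional discriminator penalty:

```python
import torch

def langevin_activation_maximization(model, layer_act, x0, steps=200,
                                      step_size=0.01, noise_scale=0.005,
                                      discriminator=None, disc_weight=0.1):
    """Noisy gradient ascent on a neuron's activation (a generic Langevin-style
    sampler, not the authors' exact LDAM procedure). The optional discriminator
    term pulls the pre-image toward natural-looking statistics."""
    x = x0.clone().requires_grad_(True)
    for _ in range(steps):
        obj = layer_act(model, x)                     # activation to maximize
        if discriminator is not None:
            obj = obj + disc_weight * discriminator(x).sum()
        grad, = torch.autograd.grad(obj, x)
        with torch.no_grad():
            x += step_size * grad
            x += noise_scale * torch.randn_like(x)    # Langevin noise term
    return x.detach()

# Toy usage: maximize unit 3 of a small random network.
model = torch.nn.Sequential(torch.nn.Linear(10, 20), torch.nn.ReLU(),
                            torch.nn.Linear(20, 5))
act = lambda m, x: m(x)[0, 3]
pre_image = langevin_activation_maximization(model, act, torch.zeros(1, 10))
print(pre_image)
```

The noise term is what lets repeated runs explore different maximally activating pre-images instead of collapsing onto a single optimum, while disc_weight controls the trade-off between interpretability and pixel-level fidelity.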