Deep neural networks (deep learning) have emerged as a technology of choice for tackling problems in natural language processing, computer vision, speech recognition, and gameplay, and in just a few years have led to superhuman performance and ushered in a new wave of AI. Buoyed by these successes, researchers in the physical sciences have made steady progress in incorporating deep learning into their respective domains. However, such adoption brings substantial challenges that need to be recognized and confronted. Here, we discuss both the opportunities and the roadblocks to implementing deep learning within materials science, focusing on the relationship between the correlative nature of machine learning and the causal, hypothesis-driven nature of the physical sciences. We argue that deep learning and AI are now well positioned to revolutionize fields where causal links are known, as is the case for applications in theory. When confounding factors are frozen or change only weakly, this leaves a pathway open for effective deep learning solutions in experimental domains. Similarly, these methods offer a pathway toward understanding the physics of real-world systems, whether by deriving reduced representations, deducing algorithmic complexity, or recovering generative physical models. However, extending deep learning and AI to models with unclear causal relationships can produce misleading and potentially incorrect results. We contend that the broad adoption of Bayesian methods incorporating prior knowledge, the development of DL solutions with built-in physical constraints, and ultimately the adoption of causal models offer a path forward for fundamental and applied research. Most notably, while these advances can change the way science is carried out in ways we cannot yet imagine, machine learning is not going to replace science any time soon.
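As a concrete (and deliberately simplified) illustration of what a DL solution with built-in physical constraints can look like, the sketch below augments an ordinary data-fit loss with a penalty on the residual of a known governing equation. The toy 1D Poisson problem, the network size, and the equal weighting are assumptions for illustration, not the implementation of the work summarized above.

```python
# Minimal sketch of a physics-constrained loss: data-fit term + penalty on the
# residual of a known governing equation (here, an illustrative 1D Poisson
# problem u''(x) = f(x)). All names and settings are illustrative assumptions.
import math
import torch

torch.manual_seed(0)

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
)

x_data = torch.linspace(0.0, 1.0, 20).unsqueeze(1)            # sparse "measurements"
u_data = torch.sin(math.pi * x_data)                          # synthetic observations
x_col = torch.linspace(0.0, 1.0, 100).unsqueeze(1).requires_grad_(True)  # collocation points
f = -(math.pi ** 2) * torch.sin(math.pi * x_col.detach())     # known source term

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    data_loss = torch.mean((net(x_data) - u_data) ** 2)       # fit the observations
    u = net(x_col)
    du = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_col, create_graph=True)[0]
    physics_loss = torch.mean((d2u - f) ** 2)                  # residual of u'' = f
    (data_loss + physics_loss).backward()                      # equal weighting, assumed
    opt.step()
```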
Statistical models of natural stimuli provide an important tool for researchers in the fields of machine learning and computational neuroscience. A canonical way to quantitatively assess and compare the performance of statistical models is the likelihood. One class of statistical models that has recently gained popularity and has been applied to a variety of complex data is the deep belief network. Analyses of these models, however, have typically been limited to qualitative assessments based on samples, owing to the computational intractability of the model likelihood. Motivated by these circumstances, the present article provides a consistent estimator of the likelihood that is both computationally tractable and simple to apply in practice. Using this estimator, a deep belief network that has been suggested for modeling natural image patches is quantitatively investigated and compared to other models of natural image patches. Contrary to earlier claims based on qualitative results, the results presented in this article provide evidence that the model under investigation is not a particularly good model for natural images.
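To make the idea of a tractable, consistent likelihood estimator concrete, the sketch below shows generic importance sampling of an intractable marginal likelihood for a toy latent-variable model. The RBM-style joint and the factorial proposal are assumptions chosen for illustration; this is not the specific estimator proposed in the article.

```python
# Hedged sketch: importance-sampling estimate of an intractable marginal
# p*(x) = sum_h p*(x, h) for a toy binary latent-variable model. Consistent
# as the number of samples grows; the model and proposal are illustrative.
import numpy as np

rng = np.random.default_rng(0)
D, H, S = 8, 4, 10_000                      # visible dim, hidden dim, # samples

W = 0.1 * rng.standard_normal((D, H))       # toy parameters
b, c = np.zeros(D), np.zeros(H)

def log_joint(x, h):
    """Unnormalized log p*(x, h) of a toy RBM-style energy model."""
    return x @ b + h @ c + h @ (x @ W)

def log_q(h, p):
    """Log probability of h under a factorial Bernoulli proposal with means p."""
    return np.sum(h * np.log(p) + (1 - h) * np.log(1 - p), axis=-1)

x = rng.integers(0, 2, size=D).astype(float)
p_prop = 1.0 / (1.0 + np.exp(-(c + x @ W)))           # data-dependent proposal q(h | x)
h = (rng.random((S, H)) < p_prop).astype(float)        # samples from the proposal

log_w = log_joint(x, h) - log_q(h, p_prop)             # log importance weights
log_p_unnorm = np.logaddexp.reduce(log_w) - np.log(S)  # log of (1/S) * sum_i w_i
# Note: this estimates the *unnormalized* marginal; the partition function would
# still need to be estimated separately (e.g. by annealed importance sampling).
print("estimated log p*(x) =", log_p_unnorm)
```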
Pulsar glitches are traditionally viewed as a manifestation of vortex dynamics associated with a neutron superfluid reservoir confined to the inner crust of the star. In this Letter we show that the non-dissipative entrainment coupling between the neutron superfluid and the nuclear lattice leads to a less mobile crust superfluid, effectively reducing the moment of inertia associated with the angular momentum reservoir. Combining the latest observational data for prolific glitching pulsars with theoretical results for crust entrainment, we find that the required superfluid reservoir exceeds that available in the crust. This challenges our understanding of the glitch phenomenon, and we discuss possible resolutions to the problem.
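For context, the angular-momentum budget behind this argument is often summarized (a hedged sketch of the standard reasoning; the notation and coefficients may differ from those in the Letter) in terms of the glitch activity and the characteristic spin-down age:
\[
\frac{I_{\rm res}}{I} \;\gtrsim\; 2\,\tau_c\,\mathcal{A}\,\frac{\langle m_n^* \rangle}{m_n},
\qquad
\mathcal{A} \equiv \frac{1}{t_{\rm obs}} \sum_i \frac{\Delta \nu_i}{\nu},
\qquad
\tau_c \equiv \frac{\nu}{2\,|\dot{\nu}|},
\]
where the effective-mass factor \(\langle m_n^* \rangle / m_n\), of order a few, encodes the entrainment-induced reduction in the mobility of the crust superfluid. Once this factor is included, the required reservoir fraction exceeds the few percent of the total moment of inertia usually attributed to the crust.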
Many structured prediction tasks in machine vision have a collection of acceptable answers rather than one definitive ground-truth answer. Segmentation of images, for example, is subject to human labeling bias. Similarly, there are multiple possible pixel values that could plausibly complete occluded image regions. State-of-the-art supervised learning methods are typically optimized to make a single test-time prediction for each query, failing to find other modes in the output space. Existing methods that allow for sampling often sacrifice speed or accuracy. We introduce a simple method for training a neural network that enables diverse structured predictions to be made for each test-time query. For a single input, we learn to predict a range of possible answers. We compare favorably to methods that seek diversity through an ensemble of networks; such stochastic multiple choice learning suffers from mode collapse, in which one or more ensemble members fail to receive any training signal. Our best-performing solution can be deployed for various tasks and involves only small modifications to the existing single-mode architecture, loss function, and training regime. We demonstrate that our method yields quantitative improvements across three challenging tasks: 2D image completion, 3D volume estimation, and flow prediction.
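One way to obtain diverse predictions from a single network is a relaxed winner-takes-all loss over multiple output hypotheses, sketched below. The head count, relaxation weight, and toy regression setting are assumptions of this illustration; it is in the spirit of the approach described above but is not necessarily the authors' exact architecture or training regime.

```python
# Hedged sketch: a single network with M output "hypotheses" trained with a
# relaxed winner-takes-all loss, so the best-matching head gets most of the
# gradient while the others still receive a small signal (mitigating the
# mode-collapse issue mentioned in the abstract). Settings are illustrative.
import torch

M = 8                                         # number of hypotheses per input
net = torch.nn.Sequential(
    torch.nn.Linear(16, 64), torch.nn.ReLU(), torch.nn.Linear(64, M * 2)
)                                             # predicts M candidate 2-D outputs

def relaxed_wta_loss(pred, target, eps=0.05):
    """pred: (B, M, 2), target: (B, 2). Winner gets weight (1 - eps);
    the remaining heads share eps so they keep receiving gradient."""
    errs = ((pred - target.unsqueeze(1)) ** 2).mean(dim=-1)    # (B, M)
    winner = errs.argmin(dim=1)                                # (B,)
    w = torch.full_like(errs, eps / (errs.shape[1] - 1))
    w.scatter_(1, winner.unsqueeze(1), 1.0 - eps)
    return (w * errs).sum(dim=1).mean()

x = torch.randn(32, 16)
y = torch.randn(32, 2)                        # stand-in for ambiguous targets
pred = net(x).view(-1, M, 2)
loss = relaxed_wta_loss(pred, y)
loss.backward()
```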
The dynamics of protein self-assembly on an inorganic surface and the resultant geometric patterns are visualized using high-speed atomic force microscopy. The time dynamics of classical macroscopic descriptors, such as the 2D fast Fourier transform (FFT), correlation functions, and the pair distribution function, are explored using unsupervised linear unmixing, demonstrating the presence of static ordered and dynamic disordered phases and establishing their time dynamics. A deep learning (DL)-based workflow is developed to analyze detailed particle dynamics on a particle-by-particle level. Beyond the macroscopic descriptors, we utilize knowledge of local particle geometries and configurations to explore the evolution of local geometries and reconstruct the interaction potential between the particles. Finally, we use machine learning-based feature extraction to define particle neighborhoods free of physics constraints. This approach allows us to separate the possible classes of particle behavior, identify the associated transition probabilities, and further extend the analysis to identify slow modes and their associated configurations, enabling systematic exploration and predictive modeling of the time dynamics of the system. Overall, this work establishes a DL-based workflow for analyzing self-organization processes in complex systems from observational data and provides insight into the fundamental mechanisms.
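As an illustration of the kind of particle-level analysis described above, the sketch below goes from tracked particle coordinates to a radial pair distribution function g(r) and a potential of mean force u(r) = -kT ln g(r). The synthetic coordinates, field of view, and binning are assumptions, and this generic route is not necessarily the DL-based workflow developed in the paper.

```python
# Hedged sketch: pair distribution function g(r) from 2-D particle centers and
# the corresponding potential of mean force, as one standard way to estimate an
# effective inter-particle interaction. Edge corrections are neglected here.
import numpy as np

rng = np.random.default_rng(0)
L = 100.0                                     # field of view (nm), assumed square
pts = rng.uniform(0, L, size=(500, 2))        # stand-in for detected particle centers

r_max, nbins = 20.0, 80
edges = np.linspace(0, r_max, nbins + 1)

# all pairwise distances (small N, so brute force is fine)
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d = d[np.triu_indices_from(d, k=1)]
counts = np.histogram(d, bins=edges)[0]

rho = len(pts) / L**2                         # 2-D number density
r = 0.5 * (edges[:-1] + edges[1:])
shell_area = np.pi * (edges[1:]**2 - edges[:-1]**2)
# normalize unordered pair counts by the ideal-gas expectation for each annulus
g = 2 * counts / (len(pts) * rho * shell_area)

with np.errstate(divide="ignore"):
    u_over_kT = -np.log(g)                    # potential of mean force in units of kT
```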
Modern high-resolution microscopes, such as the scanning tunneling microscope, are commonly used to study specimens that have dense and aperiodic spatial structure. Extracting meaningful information from images obtained with such microscopes remains a formidable challenge. Fourier analysis is commonly used to analyze the underlying structure of fundamental motifs present in an image. However, the Fourier transform fundamentally suffers from severe phase noise when applied to aperiodic images. Here, we report the development of a new algorithm based on nonconvex optimization, applicable to any microscopy modality, that directly uncovers the fundamental motifs present in a real-space image. We show that, in addition to being quantitatively superior to traditional Fourier analysis, this algorithm uncovers phase-sensitive information about the underlying motif structure. We demonstrate its usefulness by studying scanning tunneling microscopy images of a Co-doped iron arsenide superconductor and show that the application of the algorithm allows for the complete recovery of quasiparticle interference in this material. Our phase-sensitive quasiparticle interference imaging results indicate that the pairing symmetry in optimally doped NaFeAs is consistent with a sign-changing s± order parameter.
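One common way to pose this kind of real-space motif recovery is as sparse blind deconvolution, in which the image is modeled as a motif convolved with a sparse activation map and both are estimated by alternating nonconvex updates. The toy sketch below illustrates that formulation only; it is not the authors' algorithm, and the kernel size, step size, sparsity weight, and motif-update heuristic are assumptions.

```python
# Hedged sketch: toy sparse blind deconvolution, image ~ (motif * sparse map).
# Alternates an ISTA-style update of the activation map with a heuristic
# patch-averaging update of the motif. Illustrative only.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

# synthetic "image": a 9x9 Gaussian motif stamped at sparse random locations
yy, xx = np.mgrid[-4:5, -4:5]
true_motif = np.exp(-(xx**2 + yy**2) / 4.0)
act_true = (rng.random((128, 128)) < 0.01).astype(float)
img = fftconvolve(act_true, true_motif, mode="same")
img += 0.05 * rng.standard_normal(img.shape)

motif = 0.1 * rng.standard_normal((9, 9))     # random initial motif
act = np.zeros_like(img)
lam, step = 0.1, 0.05

for it in range(30):
    # (1) sparse-coding step: proximal-gradient (ISTA) update of the activation map
    resid = fftconvolve(act, motif, mode="same") - img
    grad = fftconvolve(resid, motif[::-1, ::-1], mode="same")   # ~ adjoint (correlation)
    z = act - step * grad
    act = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

    # (2) motif step: average image patches centred on the strongest activations,
    #     a crude surrogate for the exact least-squares motif update
    rows, cols = np.unravel_index(np.argsort(act.ravel())[-50:], act.shape)
    patches = [img[r - 4:r + 5, c - 4:c + 5] for r, c in zip(rows, cols)
               if 4 <= r < act.shape[0] - 4 and 4 <= c < act.shape[1] - 4]
    if patches:
        motif = np.mean(patches, axis=0)
```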