We propose and implement an approach for the bottom-up description of systems undergoing large-scale structural changes and chemical transformations, derived from dynamic atomically resolved imaging data in which only partial or uncertain information on atomic positions is available. The approach rests on the synergy of two concepts, the parsimony of physical descriptors and the general rotational invariance of non-crystalline solids, and is implemented via a rotationally invariant extension of the variational autoencoder applied to semantically segmented atom-resolved data, seeking the most compact reduced representation of the system that still retains the maximum amount of the original information. This approach allowed us to explore the dynamic evolution of electron-beam-induced processes in silicon-doped graphene, but it can also be applied to a much broader range of atomic-scale and mesoscopic phenomena, introducing bottom-up order parameters and exploring their dynamics over time and in response to external stimuli.
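The core idea of a rotationally invariant VAE can be illustrated with a minimal sketch: the encoder infers, alongside the content latents, an explicit rotation angle, and the decoder reconstructs each image on a coordinate grid rotated by that angle, so that orientation is factored out of the content representation. The sketch below is an illustrative PyTorch implementation under stated assumptions, not the paper's code; all names (`RotVAE`, `img_size`, `z_dim`) are hypothetical.

```python
# Hypothetical minimal rotationally invariant VAE (rVAE) sketch in PyTorch.
# The encoder predicts content latents (mu, logvar) plus a rotation angle theta;
# the decoder maps (rotated pixel coordinate, z) -> pixel intensity, so the
# content latents become insensitive to the orientation of the input patch.
import torch
import torch.nn as nn


class RotVAE(nn.Module):
    def __init__(self, img_size=28, z_dim=2, hidden=128):
        super().__init__()
        self.img_size = img_size
        self.z_dim = z_dim
        n_pix = img_size * img_size
        # Encoder: image -> [mu (z_dim), logvar (z_dim), theta (1)]
        self.encoder = nn.Sequential(
            nn.Flatten(), nn.Linear(n_pix, hidden), nn.Tanh(),
            nn.Linear(hidden, 2 * z_dim + 1),
        )
        # Decoder: (x, y, z) per pixel -> intensity in [0, 1]
        self.decoder = nn.Sequential(
            nn.Linear(2 + z_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )
        # Fixed pixel coordinate grid on [-1, 1]^2, shape (n_pix, 2)
        ax = torch.linspace(-1, 1, img_size)
        yy, xx = torch.meshgrid(ax, ax, indexing="ij")
        self.register_buffer("grid", torch.stack([xx, yy], dim=-1).reshape(-1, 2))

    def forward(self, x):
        h = self.encoder(x)
        mu = h[:, : self.z_dim]
        logvar = h[:, self.z_dim : 2 * self.z_dim]
        theta = h[:, -1]
        # Reparameterization trick
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Rotate the coordinate grid by theta so the decoder always
        # reconstructs in a canonical (orientation-free) frame
        c, s = torch.cos(theta), torch.sin(theta)
        rot = torch.stack([torch.stack([c, -s], dim=-1),
                           torch.stack([s, c], dim=-1)], dim=-2)  # (B, 2, 2)
        coords = self.grid.unsqueeze(0) @ rot.transpose(1, 2)     # (B, N, 2)
        z_exp = z.unsqueeze(1).expand(-1, coords.shape[1], -1)    # (B, N, z_dim)
        out = self.decoder(torch.cat([coords, z_exp], dim=-1))    # (B, N, 1)
        recon = out.reshape(x.shape[0], self.img_size, self.img_size)
        return recon, mu, logvar
```

Training would add the usual VAE objective (reconstruction loss plus a KL term on `mu`, `logvar`); here the key design choice is that `theta` is a deterministic nuisance variable separated from the content latents, which is what makes the reduced representation rotation-invariant.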
Recent advances in scanning tunneling and transmission electron microscopies (STM and STEM) have allowed routine generation of large volumes of imaging data containing information on the structure and functionality of materials. The experimental data