
Closing the gap between atomic-scale lattice deformations and continuum elasticity

Added by Marco Salvalaglio
Publication date: 2018
Fields: Physics
Language: English





Crystal lattice deformations can be described microscopically, by explicitly accounting for the positions of atoms, or macroscopically, by continuum elasticity. In this work, we describe continuous elastic fields derived from an atomistic representation of crystalline structures that also retains features typical of the microscopic scale. Analytic expressions for the strain components are obtained from the complex amplitudes of the Fourier modes representing periodic lattice positions, which can generally be provided by atomistic modeling or experiments. The magnitude and phase of these amplitudes, together with the continuous description of strains, characterize crystal rotations, lattice deformations, and dislocations. Moreover, combined with the so-called amplitude expansion of the phase-field crystal model, they provide a suitable tool for bridging microscopic and macroscopic scales. This study enables the in-depth analysis of elasticity effects in macro- and mesoscale systems while taking microscopic details into account.
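A minimal numerical sketch of this idea (the reciprocal vectors, grid, and imposed strain below are illustrative choices, not the paper's setup): for a deformed lattice the complex amplitudes take the form η_j ∝ exp(−i k_j · u), so the displacement field u, and hence the strain, can be recovered from the amplitude phases.

```python
import numpy as np

# Two reciprocal-lattice vectors spanning a 2D triangular lattice
k = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3.0) / 2.0]])

# Real-space grid
L, N = 20.0, 64
x = np.linspace(0.0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# Impose a small uniform strain u = E.r (symmetric E, so no rotation)
E_true = np.array([[0.01, 0.002],
                   [0.002, -0.005]])
ux = E_true[0, 0] * X + E_true[0, 1] * Y
uy = E_true[1, 0] * X + E_true[1, 1] * Y

# Synthetic complex amplitudes eta_j = exp(-i k_j . u)
eta = [np.exp(-1j * (kj[0] * ux + kj[1] * uy)) for kj in k]

# The phases give k_j . u (strains here are small, so no unwrapping needed);
# inverting the 2x2 system of k-vectors yields the displacement field
phases = np.stack([-np.angle(e) for e in eta])
u = np.einsum("ab,bij->aij", np.linalg.inv(k), phases)

# Strain components from gradients of the recovered displacement
dx = L / N
exx = np.gradient(u[0], dx, axis=0)
eyy = np.gradient(u[1], dx, axis=1)
exy = 0.5 * (np.gradient(u[0], dx, axis=1) + np.gradient(u[1], dx, axis=0))
print(exx.mean(), eyy.mean(), exy.mean())  # recovers 0.01, -0.005, 0.002
```

In practice the amplitudes would come from atomistic data rather than being constructed synthetically, but the recovery step is the same.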




Read More

Emmanuel Clouet (2008)
The interaction of C atoms with a screw and an edge dislocation is modelled at the atomic scale using an empirical Fe-C interatomic potential based on the Embedded Atom Method (EAM) and molecular statics simulations. Results of the atomic simulations are compared with predictions of elasticity theory. It is shown that quantitative agreement between the two modelling techniques can be obtained as long as anisotropic elastic calculations are performed and both the dilatation and the tetragonal distortion induced by the C interstitial are considered. Isotropic elasticity predicts only the main trends of the interaction, and considering only the interstitial dilatation leads to a wrong interaction.
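The elasticity prediction being compared against can be sketched as follows (illustrative isotropic formulas and numbers, not the paper's anisotropic EAM results): the defect-dislocation interaction energy is E_int = −P_ij ε_ij(r), where P is the defect's elastic dipole tensor and ε the dislocation strain at the defect site.

```python
import numpy as np

mu, nu = 82e9, 0.29    # shear modulus (Pa) and Poisson ratio, roughly alpha-Fe
b = 2.48e-10           # Burgers vector magnitude (m), illustrative

def edge_dislocation_strain(x, y):
    """Strain of an edge dislocation (line // z, Burgers vector // x),
    from the standard isotropic plane-strain stress field."""
    D = mu * b / (2.0 * np.pi * (1.0 - nu))
    r2 = x**2 + y**2
    sxx = -D * y * (3 * x**2 + y**2) / r2**2
    syy = D * y * (x**2 - y**2) / r2**2
    sxy = D * x * (x**2 - y**2) / r2**2
    szz = nu * (sxx + syy)                       # plane strain
    tr = sxx + syy + szz
    # Isotropic Hooke's law inverted: eps = (sigma - nu/(1+nu) tr * I) / (2 mu)
    eps = lambda s, diag: (s - (nu / (1.0 + nu)) * tr * diag) / (2.0 * mu)
    return np.array([[eps(sxx, 1.0), eps(sxy, 0.0), 0.0],
                     [eps(sxy, 0.0), eps(syy, 1.0), 0.0],
                     [0.0, 0.0, eps(szz, 1.0)]])

# Purely dilatational dipole tensor (illustrative 8 eV entries, converted to J);
# a tetragonal defect such as C in Fe would have unequal diagonal entries.
P = np.diag([8.0, 8.0, 8.0]) * 1.602e-19

def interaction_energy(x, y):
    return -np.tensordot(P, edge_dislocation_strain(x, y))

# The region below the glide plane (y < 0) is dilated, so an oversized
# defect is attracted there (E_int < 0) and repelled from above (E_int > 0).
E_below = interaction_energy(0.0, -1e-9)
E_above = interaction_energy(0.0, +1e-9)
print(E_below / 1.602e-19, E_above / 1.602e-19)  # in eV
```

Replacing the diagonal of `P` with unequal entries is precisely the tetragonal-distortion contribution the abstract says must be included for quantitative agreement.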
A few years ago, the first CNN surpassed human performance on ImageNet. However, it soon became clear that machines lack robustness on more challenging test cases, a major obstacle towards deploying machines in the wild and towards obtaining better computational models of human visual perception. Here we ask: Are we making progress in closing the gap between human and machine vision? To answer this question, we tested human observers on a broad range of out-of-distribution (OOD) datasets, adding the missing human baseline by recording 85,120 psychophysical trials across 90 participants. We then investigated a range of promising machine learning developments that crucially deviate from standard supervised CNNs along three axes: objective function (self-supervised, adversarially trained, CLIP language-image training), architecture (e.g. vision transformers), and dataset size (ranging from 1M to 1B). Our findings are threefold. (1.) The longstanding robustness gap between humans and CNNs is closing, with the best models now matching or exceeding human performance on most OOD datasets. (2.) There is still a substantial image-level consistency gap, meaning that humans make different errors than models. In contrast, most models systematically agree in their categorisation errors, even substantially different ones like contrastive self-supervised vs. standard supervised models. (3.) In many cases, human-to-model consistency improves when training dataset size is increased by one to three orders of magnitude. Our results give reason for cautious optimism: While there is still much room for improvement, the behavioural difference between human and machine vision is narrowing. In order to measure future progress, 17 OOD datasets with image-level human behavioural data are provided as a benchmark here: https://github.com/bethgelab/model-vs-human/
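The image-level consistency gap mentioned above can be quantified with an error-consistency score; a minimal sketch (a chance-corrected agreement of trial-by-trial correct/incorrect patterns, using synthetic observers rather than the paper's data):

```python
import numpy as np

def error_consistency(correct_a, correct_b):
    """Agreement of two binary correct/incorrect patterns, corrected for the
    agreement expected by chance from the two accuracies alone (kappa-style)."""
    a = np.asarray(correct_a, dtype=bool)
    b = np.asarray(correct_b, dtype=bool)
    c_obs = np.mean(a == b)                        # observed agreement
    p_a, p_b = a.mean(), b.mean()
    c_exp = p_a * p_b + (1 - p_a) * (1 - p_b)      # chance agreement
    return (c_obs - c_exp) / (1.0 - c_exp)

rng = np.random.default_rng(0)
human = rng.random(1000) < 0.8        # synthetic 80%-accurate observer
independent = rng.random(1000) < 0.8  # same accuracy, independent errors
copycat = human.copy()                # identical error pattern

print(error_consistency(human, independent))  # near 0: chance-level overlap
print(error_consistency(human, copycat))      # 1.0: identical errors
```

Two models can thus match human accuracy (closing the robustness gap) while still scoring near zero here, which is the consistency gap the abstract describes.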
We describe the proximity effect in a short disordered metallic junction between three superconducting leads. Andreev bound states in the multi-terminal junction may cross the Fermi level. We reveal that for a quasi-continuous metallic density of states, crossings at the Fermi level manifest as closing of the proximity-induced gap. We calculate the local density of states for a wide range of transport parameters using quantum circuit theory. The gap closes inside an area of the space spanned by the superconducting phase differences. We derive an approximate analytic expression for the boundary of the area and compare it to the full numerical solution. The size of the area increases with the transparency of the junction and is sensitive to asymmetry. The finite density of states at zero energy is unaffected by electron-hole decoherence present in the junction, although decoherence is important at higher energies. Our predictions can be tested using tunneling transport spectroscopy. To encourage experiments, we calculate the current-voltage characteristic in a typical measurement setup. We show how the structure of the local density of states can be mapped out from the measurement.
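For intuition on how phase differences push Andreev states toward the Fermi level, the standard two-terminal, single-channel bound-state formula E(φ) = Δ√(1 − τ sin²(φ/2)) is a useful reference point (this is not the paper's three-terminal circuit-theory calculation, where crossings instead occupy a finite area of the phase-difference plane):

```python
import numpy as np

def andreev_level(phi, tau, delta=1.0):
    """Two-terminal short-junction Andreev level for transparency tau."""
    return delta * np.sqrt(1.0 - tau * np.sin(phi / 2.0) ** 2)

# At phase difference pi, the level reaches E = 0 only for tau = 1
for tau in (0.5, 0.9, 1.0):
    print(tau, andreev_level(np.pi, tau))
```

In the two-terminal case the crossing is thus a single point (τ = 1, φ = π); the abstract's result is that with three terminals and a quasi-continuous density of states, the induced gap closes over a whole region of phase space.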
Xingyang Ni, Esa Rahtu (2021)
Since neural networks are data-hungry, incorporating data augmentation in training is a widely adopted technique that enlarges datasets and improves generalization. On the other hand, aggregating predictions over multiple augmented samples (i.e., test-time augmentation) can boost performance even further. For person re-identification models, it is common practice to extract embeddings for both the original images and their horizontally flipped variants; the final representation is the mean of these feature vectors. However, such a scheme creates a gap between training and inference: the mean feature vectors computed at inference are not part of the training pipeline. In this study, we devise the FlipReID structure with a flipping loss to address this issue. More specifically, models using the FlipReID structure are trained on the original and flipped images simultaneously, and the flipping loss minimizes the mean squared error between the feature vectors of corresponding image pairs. Extensive experiments show that our method brings consistent improvements. In particular, we set a new record on MSMT17, the largest person re-identification dataset. The source code is available at https://github.com/nixingyang/FlipReID.
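The flipping-loss idea can be sketched with a toy linear embedder (illustrative stand-in, not the FlipReID network): train on originals and horizontal flips together, and penalize the MSE between the two embeddings of each image, so that the mean feature used at inference is consistent with what training optimized.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 8
images = rng.random((4, H, W))        # toy batch of 4 "images"
flipped = images[:, :, ::-1]          # horizontal flip

W_emb = rng.random((H * W, 16))       # toy linear "embedder"
embed = lambda x: x.reshape(len(x), -1) @ W_emb

f_orig, f_flip = embed(images), embed(flipped)

# Flipping loss: mean squared error between paired embeddings
flip_loss = np.mean((f_orig - f_flip) ** 2)

# At inference, the representation is the mean of the two embeddings,
# which the flipping loss has explicitly pulled both branches toward
representation = 0.5 * (f_orig + f_flip)
print(flip_loss, representation.shape)
```

In actual training this loss term would be added to the identity-classification loss and minimized by gradient descent; here it is only evaluated once to show the quantity involved.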
AI researchers employ not only the scientific method, but also methodology from mathematics and engineering. However, the use of the scientific method - specifically hypothesis testing - in AI is typically conducted in service of engineering objectives. Growing interest in topics such as fairness and algorithmic bias shows that engineering-focused questions comprise only a subset of the important questions about AI systems. This results in the AI Knowledge Gap: the number of unique AI systems grows faster than the number of studies that characterize these systems' behavior. To close this gap, we argue that the study of AI could benefit from greater inclusion of researchers who are well positioned to formulate and test hypotheses about the behavior of AI systems. We examine the barriers preventing social and behavioral scientists from conducting such studies. Our diagnosis suggests that accelerating the scientific study of AI systems requires new incentives for academia and industry, mediated by new tools and institutions. To address these needs, we propose a two-sided marketplace called TuringBox. On one side, AI contributors upload existing and novel algorithms to be studied scientifically by others. On the other side, AI examiners develop and post machine intelligence tasks designed to evaluate and characterize algorithmic behavior. We discuss this market's potential to democratize the scientific study of AI behavior, and thus narrow the AI Knowledge Gap.