
Models and Simulations in Material Science: Two Cases Without Error Bars

Published by: Dr. Danny E. P. Vanpoucke
Publication date: 2013
Research field: Physics
Paper language: English





We discuss two research projects in material science in which the results cannot be stated with an estimation of the error: a spectroscopic ellipsometry study aimed at determining the orientation of DNA molecules on diamond and a scanning tunneling microscopy study of platinum-induced nanowires on germanium. To investigate the reliability of the results, we apply ideas from the philosophy of models in science. Even if the studies had reported an error value, the trustworthiness of the result would not depend on that value alone.




Read also

In machine learning (ML), it is in general challenging to provide a detailed explanation of how a trained model arrives at its prediction. Thus, we are usually left with a black box, which from a scientific standpoint is not satisfactory. Even though numerous methods have been recently proposed to interpret ML models, somewhat surprisingly, interpretability in ML is far from being a consensual concept, with diverse and sometimes contrasting motivations for it. Reasonable candidate properties of interpretable models could be model transparency (i.e., how does the model work?) and post hoc explanations (i.e., what else can the model tell me?). Here, I review the current debate on ML interpretability and identify key challenges that are specific to ML applied to materials science.
The distribution of the geometric distances of connected neurons is a practical factor underlying neural networks in the brain. It can affect the brain's dynamic properties at the ground level. Karbowski derived a power-law decay distribution that has not yet been verified by experiment. In this work, we check its validity using simulations with a phenomenological model. Based on the in vitro two-dimensional development of neural networks in culture vessels by Ito, we match the synapse number saturation time to obtain suitable parameters for the development process, then determine the distribution of distances between connected neurons under such conditions. Our simulations obtain a clear exponential distribution instead of a power-law one, which indicates that Karbowski's conclusion is invalid, at least for the case of in vitro neural network development in two-dimensional culture vessels.
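The exponential-versus-power-law question above comes down to which candidate distribution fits the observed connection distances better. A minimal sketch of such a comparison is a maximum-likelihood contest between the two forms; the synthetic data, function names, and parameter values below are illustrative assumptions, not the authors' actual simulation pipeline:

```python
import math
import random

def loglik_exponential(xs):
    """Log-likelihood under p(x) = l*exp(-l*x), with the MLE l = 1/mean."""
    lam = 1.0 / (sum(xs) / len(xs))
    return sum(math.log(lam) - lam * x for x in xs)

def loglik_powerlaw(xs, xmin):
    """Log-likelihood under a Pareto tail p(x) ~ x^-a for x >= xmin,
    with the standard MLE a = 1 + n / sum(ln(x/xmin))."""
    n = len(xs)
    alpha = 1.0 + n / sum(math.log(x / xmin) for x in xs)
    return sum(math.log((alpha - 1.0) / xmin) - alpha * math.log(x / xmin)
               for x in xs)

random.seed(0)
# Synthetic "connection distances" drawn from an exponential, standing in
# for the simulated two-dimensional networks; shifted so all x > 0.
dists = [random.expovariate(0.5) + 0.1 for _ in range(5000)]
xmin = min(dists)

exp_ll = loglik_exponential(dists)
pl_ll = loglik_powerlaw(dists, xmin)
print(exp_ll > pl_ll)  # exponential fit should win on exponential data
```

On exponentially distributed distances the exponential log-likelihood clearly exceeds the power-law one, which is the kind of evidence the abstract reports against the power-law decay.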
Information and data exchange is an important aspect of scientific progress. In computational materials science, a prerequisite for smooth data exchange is standardization, which means using agreed conventions for, e.g., units, zero base lines, and file formats. There are two main strategies to achieve this goal. One accepts the heterogeneous nature of the community, which comprises scientists from physics, chemistry, bio-physics, and materials science, by complying with the diverse ecosystem of computer codes, and thus develops converters for the input and output files of all important codes. These converters then translate the data of all important codes into a standardized, code-independent format. The other strategy is to provide standardized open libraries that code developers can adopt for shaping their inputs, outputs, and restart files directly into the same code-independent format. We would like to emphasize in this paper that these two strategies can and should be regarded as complementary, if not even synergetic. The main concepts and software developments of both strategies are very much identical, and, obviously, both approaches should give the same final result. In this paper, we present the appropriate format and conventions that were agreed upon by two teams, the Electronic Structure Library (ESL) of CECAM and the NOMAD (NOvel MAterials Discovery) Laboratory, a European Centre of Excellence (CoE). This discussion also includes the definition of hierarchical metadata describing state-of-the-art electronic-structure calculations.
S. Lombardo, F. Prada, E. Hugot (2020)
We present here the Calar Alto Schmidt-Lemaitre Telescope (CASTLE) concept, a technology demonstrator for curved detectors that will be installed at the Calar Alto Observatory (Spain). This telescope has a wide field of view (2.36x1.56 deg^2) and a design optimised to generate a Point Spread Function with very low-level wings and reduced ghost features, which makes it considerably less susceptible to several systematic effects usually affecting similar systems. These characteristics are particularly suited to study the low surface brightness Universe. CASTLE will be able to reach surface brightness orders of magnitude fainter than the sky background level and observe the extremely extended and faint features around galaxies such as tidal features, stellar halos, intra-cluster light, etc. CASTLE will also be used to search for and detect astrophysical transients such as gamma-ray bursts (GRBs), gravitational wave optical counterparts, neutrino counterparts, etc. This will increase the number of precisely localized GRBs from 20% to 60% (in the case of Fermi/GBM GRBs).
Information regarding precipitate shapes is critical for estimating material parameters. Hence, we considered estimating a region of material parameter space in which a computational model produces precipitates having shapes similar to those observed in the experimental images. This region, called the lower-error region (LER), reflects intrinsic information of the material contained in the precipitate shapes. However, the computational cost of LER estimation can be high because the accurate computation of the model is required many times to better explore parameters. To overcome this difficulty, we used a Gaussian-process-based multifidelity modeling, in which training data can be sampled from multiple computations with different accuracy levels (fidelity). Lower-fidelity samples may have lower accuracy, but the computational cost is lower than that for higher-fidelity samples. Our proposed sampling procedure iteratively determines the most cost-effective pair of a point and a fidelity level for enhancing the accuracy of LER estimation. We demonstrated the efficiency of our method through estimation of the interface energy and lattice mismatch between MgZn2 and α-Mg phases in an Mg-based alloy. The results showed that the sampling cost required to obtain accurate LER estimation could be drastically reduced.
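The core idea of the sampling procedure above — pick the (point, fidelity) pair that buys the most uncertainty reduction per unit compute — can be illustrated with a deliberately simplified heuristic. The distance-based uncertainty proxy and the cost/noise numbers below are assumptions standing in for the paper's Gaussian-process surrogate, not the authors' actual acquisition function:

```python
# Toy sketch of cost-aware (point, fidelity) selection.
# Uncertainty at a candidate point is proxied by its distance to the
# nearest already-sampled point; each fidelity level has an assumed
# compute cost and an assumed residual noise level.

COSTS = {"low": 1.0, "high": 10.0}   # hypothetical relative compute costs
NOISE = {"low": 0.5, "high": 0.05}   # hypothetical fidelity noise levels

def score(x, fidelity, sampled):
    """Heuristic: expected uncertainty reduction per unit cost."""
    d = min(abs(x - s) for s in sampled) if sampled else 1.0
    reduction = d * (1.0 - NOISE[fidelity])
    return reduction / COSTS[fidelity]

def next_query(candidates, sampled):
    """Return the (point, fidelity) pair with the best score."""
    return max(((x, f) for x in candidates for f in COSTS),
               key=lambda xf: score(xf[0], xf[1], sampled))

sampled = [0.0, 1.0]            # parameter points already evaluated
candidates = [0.25, 0.5, 0.75]  # points under consideration
print(next_query(candidates, sampled))
```

With these numbers the cheap low-fidelity evaluation at the least-explored point wins, mirroring the paper's observation that inexpensive low-fidelity samples can carry much of the exploration burden.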