
One size does not fit all: Evidence for a range of mixing efficiencies in stellar evolution calculations

Published by: Cole Johnston
Publication date: 2021
Research field: Physics
Paper language: English





Context: Internal chemical mixing in intermediate- and high-mass stars represents an immense uncertainty in stellar evolution models. In addition to extending the main-sequence lifetime, chemical mixing also appreciably increases the mass of the stellar core. Several studies have attempted to calibrate the efficiency of different convective boundary mixing mechanisms, with sometimes seemingly conflicting results.
Aims: We aim to demonstrate that stellar models regularly under-predict the masses of convective stellar cores.
Methods: We gather convective core mass and fractional core hydrogen content inferences from numerous independent binary and asteroseismic studies, and compare them to stellar evolution models computed with the MESA stellar evolution code.
Results: We demonstrate that core mass inferences from the literature are ubiquitously more massive than predicted by stellar evolution models with little or no convective boundary mixing.
Conclusions: Independent of the form of internal mixing, stellar models require an efficient mixing mechanism that produces more massive cores throughout the main sequence in order to reproduce high-precision observations. This has implications for the post-main-sequence evolution of all stars that have a well-developed convective core on the main sequence.
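The comparison described in the Methods can be sketched in a few lines: take inferred convective core masses at a given core hydrogen fraction X_c and check them against a model track computed without extra boundary mixing. The sketch below is purely illustrative; the toy linear track and all numbers are placeholders, not data or tracks from the paper (real tracks come from a code such as MESA).

```python
# Hypothetical sketch: compare observed convective-core masses (from binary /
# asteroseismic inferences) with model predictions at the same core hydrogen
# mass fraction X_c. All numbers are illustrative placeholders.

# (X_c, M_cc / M_sun) pairs inferred for a set of stars (hypothetical)
observed = [(0.55, 1.9), (0.40, 1.6), (0.25, 1.3)]

def model_core_mass(x_c, m_cc_zams=2.0, m_cc_tams=1.0):
    """Toy linear core-mass track: the convective core shrinks as core
    hydrogen is depleted. A real track would come from a stellar-evolution
    code such as MESA, with a chosen boundary-mixing prescription."""
    x_c_zams = 0.7  # assumed initial core hydrogen mass fraction
    return m_cc_tams + (m_cc_zams - m_cc_tams) * (x_c / x_c_zams)

# Flag stars whose inferred core exceeds the no-extra-mixing prediction
for x_c, m_obs in observed:
    m_mod = model_core_mass(x_c)
    verdict = "needs extra mixing" if m_obs > m_mod else "consistent"
    print(f"X_c={x_c:.2f}: observed {m_obs:.2f} Msun "
          f"vs model {m_mod:.2f} Msun -> {verdict}")
```

A star whose inferred core mass sits above the no-mixing track at its X_c is evidence for a more efficient mixing mechanism, which is the pattern the paper reports across the literature sample.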




Read also

As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected citizens, government regulators, domain experts, or system developers, present different requirements for explanations. Toward addressing these needs, we introduce AI Explainability 360 (http://aix360.mybluemix.net/), an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics. Equally important, we provide a taxonomy to help entities requiring explanations to navigate the space of explanation methods, not only those in the toolkit but also in the broader literature on explainability. For data scientists and other users of the toolkit, we have implemented an extensible software architecture that organizes methods according to their place in the AI modeling pipeline. We also discuss enhancements to bring research innovations closer to consumers of explanations, ranging from simplified, more accessible …
Exploratory testing (ET) is a powerful and efficient way of testing software by integrating design, execution, and analysis of tests during a testing session. ET is often contrasted with scripted testing, and seen as a choice between black and white. We pose that there are different levels of exploratory testing from fully exploratory to fully scripted and propose a scale for the degree of exploration for ET. The degree is defined through levels of ET, which correspond to the way test charters are formulated. We have evaluated the classification through focus groups at four companies and identified factors that influence the level of exploratory testing. The results show that the proposed ET levels have distinguishing characteristics and that the levels can be used as a guide to structure test charters. Our study also indicates that applying a combination of ET levels can be beneficial in achieving effective testing.
Being able to explain the prediction to clinical end-users is a necessity to leverage the power of AI models for clinical decision support. For medical images, saliency maps are the most common form of explanation. The maps highlight important features for the AI model's prediction. Although many saliency map methods have been proposed, it is unknown how well they perform on explaining decisions on multi-modal medical images, where each modality/channel carries distinct clinical meanings of the same underlying biomedical phenomenon. Understanding such modality-dependent features is essential for clinical users' interpretation of AI decisions. To tackle this clinically important but technically ignored problem, we propose the MSFI (Modality-Specific Feature Importance) metric to examine whether saliency maps can highlight modality-specific important features. MSFI encodes the clinical requirements on modality prioritization and modality-specific feature localization. Our evaluations on 16 commonly used saliency map methods, including a clinician user study, show that although most saliency map methods captured modality importance information in general, most of them failed to highlight modality-specific important features consistently and precisely. The evaluation results guide the choices of saliency map methods and provide insights to propose new ones targeting clinical applications.
Today's cloud service architectures follow a one-size-fits-all deployment strategy where the same service version instantiation is provided to the end users. However, consumers are broad and different applications have different accuracy and responsiveness requirements, which as we demonstrate renders the one-size-fits-all approach inefficient in practice. We use a production-grade speech recognition engine, which serves several thousands of users, and an open-source computer-vision-based system, to explain our point. To overcome the limitations of the one-size-fits-all approach, we recommend Tolerance Tiers, where each MLaaS tier exposes an accuracy/responsiveness characteristic and consumers can programmatically select a tier. We evaluate our proposal on the CPU-based automatic speech recognition (ASR) engine and cutting-edge neural networks for image classification deployed on both CPUs and GPUs. The results show that our proposed approach provides an MLaaS cloud service architecture that can be tuned by the end API user or consumer to outperform the conventional one-size-fits-all approach.
Context. Stellar spectral synthesis is essential for various applications, ranging from determining stellar parameters to comprehensive stellar variability calculations. New observational resources as well as advanced stellar atmosphere modelling, taking three-dimensional (3D) effects from radiative magnetohydrodynamics calculations into account, require a more efficient radiative transfer. Aims. For accurate, fast and flexible calculations of opacity distribution functions (ODFs), stellar atmospheres and stellar spectra, we developed an efficient code building on the well-established ATLAS9 code. The new code also paves the way for easy and fast access to different elemental compositions in stellar calculations. Methods. For the generation of ODF tables we further developed the well-established DFSYNTHE code by implementing additional functionality and a speed-up through a parallel computation scheme. In addition, the line lists used can be changed from Kurucz's recent lists. In particular, we implemented the VALD3 line list. Results. A new code, the Merged Parallelised Simplified ATLAS, is presented. It combines the efficient generation of ODFs, atmosphere modelling and spectral synthesis in local thermodynamic equilibrium, therefore being an all-in-one code. This all-in-one code provides more numerical functionality and is substantially faster compared to other available codes. The fully portable MPS-ATLAS code is validated against previous ATLAS9 calculations, the PHOENIX code calculations, and high-quality observations.
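The Tolerance Tiers idea in the cloud-service abstract above — each MLaaS tier exposes an accuracy/responsiveness trade-off and the consumer selects one programmatically — can be sketched as follows. The tier names, accuracy figures, and latency budgets are hypothetical illustrations, not values from that paper.

```python
# Hypothetical sketch of programmatic tier selection in a "Tolerance Tiers"
# MLaaS architecture. All tiers and numbers are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tier:
    name: str
    expected_accuracy: float  # fraction of queries answered correctly
    latency_budget_ms: int    # per-request responsiveness target

TIERS = [
    Tier("fast",     0.90, 50),
    Tier("balanced", 0.94, 120),
    Tier("accurate", 0.97, 400),
]

def select_tier(min_accuracy: float, max_latency_ms: int) -> Tier:
    """Pick the most responsive tier that meets both consumer constraints."""
    candidates = [t for t in TIERS
                  if t.expected_accuracy >= min_accuracy
                  and t.latency_budget_ms <= max_latency_ms]
    if not candidates:
        raise ValueError("no tier satisfies the requested constraints")
    return min(candidates, key=lambda t: t.latency_budget_ms)

# A latency-sensitive consumer accepts slightly lower accuracy:
print(select_tier(min_accuracy=0.92, max_latency_ms=200).name)  # balanced
```

The point of the design is that the constraint choice moves from the provider to the API consumer, instead of one deployment serving every workload.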