
What have we learnt from pulsations of B-type stars?

Posted by: Jadwiga Daszyńska-Daszkiewicz
Publication date: 2018
Research field: Physics
Paper language: English





We review the main results obtained from our seismic studies of B-type main-sequence pulsators, based on ground-based, MOST, Kepler and BRITE observations. Important constraints on stellar opacities, convective overshooting and rotation are derived. In each studied case, a significant modification of the opacity profile at the depths corresponding to the temperature range $\log T \in (5.0, 5.5)$ is indispensable to explain all pulsational properties. In particular, a huge amount of additional opacity (at least 200%) at the depth of the temperature $\log T = 5.46$ (the nickel opacity) has to be added in early B-type stellar models to account for the low frequencies that correspond to high-order g modes. The values of the overshooting parameter, $\alpha_{\rm ov}$, derived from our seismic studies are below 0.3. In the case of a few stars, the deeper interiors have to rotate faster to obtain g-mode instability over the whole observed frequency range.
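Opacity modifications of this kind are commonly parametrized as a Gaussian enhancement of the standard opacity profile centred at a chosen depth. The sketch below is an illustrative Python implementation under that assumption; the function name, bump width and exact functional form are ours, not taken from the paper — only the bump centre ($\log T = 5.46$) and the minimum amplitude (200%, i.e. a factor of 3) come from the abstract above.

```python
import numpy as np

def enhanced_opacity(log_T, kappa_std, b=2.0, log_T0=5.46, a=0.1):
    """Illustrative Gaussian opacity bump (names and width are hypothetical).

    kappa_std : standard opacity (e.g. from OPAL/OP tables) at log_T
    b         : fractional enhancement at the bump centre;
                b = 2.0 corresponds to the 200% increase quoted above
    log_T0    : bump centre in log temperature (5.46 = nickel-opacity depth)
    a         : assumed bump width in dex
    """
    bump = 1.0 + b * np.exp(-((log_T - log_T0) / a) ** 2)
    return kappa_std * bump

# A 200% enhancement triples the opacity exactly at log T = 5.46.
print(enhanced_opacity(5.46, 1.0))  # -> 3.0
```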




Read also

Anonymous peer review is used by the great majority of computer science conferences. OpenReview is a platform that aims to promote openness in the peer review process: the paper, (meta-)reviews, rebuttals, and final decisions are all released to the public. We collect 5,527 submissions and their 16,853 reviews from the OpenReview platform. We also collect these submissions' citation data from Google Scholar and their non-peer-review
Jessica D. Mink (2015)
Despite almost all being acquired as photons, astronomical data from different instruments and at different stages in its life may exist in different formats to serve different purposes. Beyond the data itself, descriptive information is associated with it as metadata, either included in the data format or in a larger multi-format data structure. Those formats may be used for the acquisition, processing, exchange, and archiving of data. It has been useful to use similar formats, or even a single standard, to ease interaction with data in its various stages using familiar tools. Knowledge of the evolution and advantages of present standards is useful before we discuss the future of how astronomical data is formatted. The evolution of the use of world coordinates in FITS is presented as an example.
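As a concrete illustration of the world-coordinate example mentioned above (not drawn from the paper itself), the astropy package can generate the standard FITS WCS header keywords; all numeric values below are placeholders:

```python
from astropy.wcs import WCS

# Build a minimal 2-D celestial WCS description.
w = WCS(naxis=2)
w.wcs.ctype = ["RA---TAN", "DEC--TAN"]   # gnomonic (tangent-plane) projection
w.wcs.crval = [150.0, 2.0]               # sky coordinates of reference point (deg)
w.wcs.crpix = [512.0, 512.0]             # pixel coordinates of reference point
w.wcs.cdelt = [-2.78e-4, 2.78e-4]        # pixel scale (deg/pixel)

# The resulting header cards (CTYPE1, CRVAL1, CRPIX1, ...) are what a
# FITS file stores as world-coordinate metadata.
print(repr(w.to_header()))
```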
Activity classification has seen great success recently. Performance on small datasets is almost saturated, and people are moving towards larger datasets. What leads to the performance gain, and what has the model learnt? In this paper we propose the identity preserve transform (IPT) to study this problem. IPT manipulates the nuisance factors (background, viewpoint, etc.) of the data while keeping the factors related to the task (human motion) unchanged. To our surprise, we found that popular models use highly correlated information (background, object) to achieve high classification accuracy, rather than the essential information (human motion). This can explain why an activity classification model usually fails to generalize to datasets it was not trained on. We implement IPT in two forms, i.e. image-space transform and 3D transform, using synthetic images. The tool will be made open-source to help study model and dataset design.
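For intuition, an image-space identity-preserving transform can be as simple as compositing the task-relevant human pixels onto a new background. The toy sketch below is our illustration of the idea, not the paper's implementation:

```python
import numpy as np

def swap_background(frame, human_mask, new_background):
    """Toy image-space IPT: replace the background (a nuisance factor)
    while leaving the human pixels (the task-relevant factor) untouched.

    frame, new_background : HxWx3 uint8 arrays of the same shape
    human_mask            : HxW boolean array, True where the human is
    """
    mask = human_mask[..., None]                  # broadcast over channels
    return np.where(mask, frame, new_background)  # keep human, swap the rest
```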
As the success of deep models has led to their deployment in all areas of computer vision, it is increasingly important to understand how these representations work and what they are capturing. In this paper, we shed light on deep spatiotemporal representations by visualizing what two-stream models have learned in order to recognize actions in video. We show that local detectors for appearance and motion objects arise to form distributed representations for recognizing human actions. Key observations include the following. First, cross-stream fusion enables the learning of true spatiotemporal features rather than simply separate appearance and motion features. Second, the networks can learn local representations that are highly class-specific, but also generic representations that can serve a range of classes. Third, throughout the hierarchy of the network, features become more abstract and show increasing invariance to aspects of the data that are unimportant to desired distinctions (e.g. motion patterns across various speeds). Fourth, visualizations can be used not only to shed light on learned representations, but also to reveal idiosyncrasies of training data and to explain failure cases of the system.
Deep learning has led to significant improvement in text summarization, with various methods investigated and improved ROUGE scores reported over the years. However, gaps still exist between summaries produced by automatic summarizers and human professionals. Aiming to gain more understanding of summarization systems with respect to their strengths and limits on a fine-grained syntactic and semantic level, we consult the Multidimensional Quality Metric (MQM) and manually quantify 8 major sources of errors across 10 representative summarization models. Primarily, we find that 1) under similar settings, extractive summarizers are in general better than their abstractive counterparts, thanks to their strength in faithfulness and factual consistency; 2) milestone techniques such as copy, coverage, and hybrid extractive/abstractive methods do bring specific improvements but also demonstrate limitations; 3) pre-training techniques, and in particular sequence-to-sequence pre-training, are highly effective for improving text summarization, with BART giving the best results.