
Statistical analysis of stellar evolution

Posted by David A. van Dyk
Publication date: 2009
Research field: Mathematical Statistics
Language: English





Color-Magnitude Diagrams (CMDs) are plots that compare the magnitudes (luminosities) of stars in different wavelengths of light (colors). High nonlinear correlations among the mass, color, and surface temperature of newly formed stars induce a long narrow curved point cloud in a CMD known as the main sequence. Aging stars form new CMD groups of red giants and white dwarfs. The physical processes that govern this evolution can be described with mathematical models and explored using complex computer models. These calculations are designed to predict the plotted magnitudes as a function of parameters of scientific interest, such as stellar age, mass, and metallicity. Here, we describe how we use the computer models as a component of a complex likelihood function in a Bayesian analysis that requires sophisticated computing, corrects for contamination of the data by field stars, accounts for complications caused by unresolved binary-star systems, and aims to compare competing physics-based computer models of stellar evolution.
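The likelihood described above treats each observed star as coming either from the cluster, with magnitudes predicted by the stellar-evolution computer model and possibly combined with an unresolved binary companion, or from the contaminating field-star population. The sketch below is a minimal illustration of that mixture structure, not the authors' code; `isochrone_magnitudes` is a hypothetical placeholder for the physics-based computer model, and all parameter names are assumptions.

```python
# Minimal sketch of the mixture likelihood for one star's observed magnitudes.
import numpy as np
from scipy.stats import norm

def isochrone_magnitudes(age, metallicity, mass, n_bands=2):
    """Hypothetical stand-in for the stellar-evolution computer model.
    Returns predicted magnitudes in each photometric band (toy relation)."""
    base = 5.0 - 2.5 * np.log10(mass ** 3.5)              # toy mass-luminosity relation
    color_shift = 0.1 * metallicity + 0.05 * np.log10(age)
    return np.array([base + b * color_shift for b in range(n_bands)])

def log_likelihood(obs, sigma, age, metallicity, mass, mass_ratio,
                   p_field, field_mean, field_sd):
    """Mixture log-likelihood: cluster member (possibly an unresolved binary)
    versus field-star contaminant, with Gaussian measurement errors."""
    m1 = isochrone_magnitudes(age, metallicity, mass)
    m2 = isochrone_magnitudes(age, metallicity, mass_ratio * mass)
    # Unresolved binaries: fluxes (not magnitudes) add.
    combined = -2.5 * np.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))
    log_cluster = norm.logpdf(obs, loc=combined, scale=sigma).sum()
    # Field-star component: a broad distribution over the CMD.
    log_field = norm.logpdf(obs, loc=field_mean, scale=field_sd).sum()
    return np.logaddexp(np.log1p(-p_field) + log_cluster,
                        np.log(p_field) + log_field)
```

In the full Bayesian analysis this per-star likelihood would be multiplied over all stars and combined with priors on age, metallicity, masses, and cluster membership.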




Read also

Spatial prediction of weather-elements like temperature, precipitation, and barometric pressure are generally based on satellite imagery or data collected at ground-stations. None of these data provide information at a more granular or hyper-local resolution. On the other hand, crowdsourced weather data, which are captured by sensors installed on mobile devices and gathered by weather-related mobile apps like WeatherSignal and AccuWeather, can serve as potential data sources for analyzing environmental processes at a hyper-local resolution. However, due to the low quality of the sensors and the non-laboratory environment, the quality of the observations in crowdsourced data is compromised. This paper describes methods to improve hyper-local spatial prediction using this varying-quality noisy crowdsourced information. We introduce a reliability metric, namely Veracity Score (VS), to assess the quality of the crowdsourced observations using a coarser, but high-quality, reference data. A VS-based methodology to analyze noisy spatial data is proposed and evaluated through extensive simulations. The merits of the proposed approach are illustrated through case studies analyzing crowdsourced daily average ambient temperature readings for one day in the contiguous United States.
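To make the weighting idea in the abstract above concrete, here is a minimal sketch that uses a simple agreement-based proxy for the Veracity Score (the paper defines its own VS) together with plain inverse-distance interpolation; the function names and the scale parameter are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: down-weight noisy crowdsourced readings by a
# reliability score derived from a coarse, high-quality reference field.
import numpy as np

def veracity_score(crowd_value, reference_value, scale=2.0):
    """Toy reliability weight in (0, 1]: closer agreement with the coarse
    reference yields a higher score (not the paper's VS formula)."""
    return np.exp(-np.abs(crowd_value - reference_value) / scale)

def vs_weighted_prediction(target_xy, crowd_xy, crowd_values, vs, power=2.0):
    """Inverse-distance prediction where each crowdsourced reading is
    additionally down-weighted by its veracity score."""
    d = np.linalg.norm(crowd_xy - target_xy, axis=1)
    w = vs / np.maximum(d, 1e-6) ** power
    return np.sum(w * crowd_values) / np.sum(w)
```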
Genomic surveillance of SARS-CoV-2 has been instrumental in tracking the spread and evolution of the virus during the pandemic. The availability of SARS-CoV-2 molecular sequences isolated from infected individuals, coupled with phylodynamic methods, has provided insights into the origin of the virus, its evolutionary rate, the timing of introductions, the patterns of transmission, and the rise of novel variants that have spread through populations. Despite the enormous global efforts of governments, laboratories, and researchers to collect and sequence molecular data, many challenges remain in analyzing and interpreting the data collected. Here, we describe the models and methods currently used to monitor the spread of SARS-CoV-2, discuss long-standing and new statistical challenges, and propose a method for tracking the rise of novel variants during the epidemic.
Jianfeng Wang, Jun Yu (2021)
This study investigated the effect of harsh winter climate on the performance of high-speed passenger trains in northern Sweden. Novel approaches based on heterogeneous statistical models were introduced to analyse the train performance in order to take the time-varying risks of train delays into consideration. Specifically, stratified Cox model and heterogeneous Markov chain model were used for modelling primary delays and arrival delays, respectively. Our results showed that the weather variables including temperature, humidity, snow depth, and ice/snow precipitation have significant impact on the train performance.
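As a rough illustration of the stratified Cox model mentioned above, the sketch below fits a Cox model stratified by a categorical variable, assuming the lifelines package; the column names, the toy data-generating process, and the choice of stratification variable are assumptions for illustration, not details of the study's Swedish train-operations data.

```python
# Minimal sketch: stratified Cox regression of time-to-delay on weather covariates.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical simulated data standing in for train-operation records.
rng = np.random.default_rng(0)
n = 200
temperature = rng.normal(-10.0, 8.0, n)        # degrees Celsius
humidity = rng.uniform(50.0, 100.0, n)         # percent
snow_depth = rng.gamma(2.0, 10.0, n)           # centimetres
route = rng.choice(["north", "south"], n)      # stratification variable

# Toy hazard driven by the weather covariates.
baseline = np.where(route == "north", 0.10, 0.15)
rate = baseline * np.exp(-0.02 * temperature + 0.001 * humidity + 0.01 * snow_depth)
duration = rng.exponential(1.0 / rate)          # time until a primary delay occurs
observed = (rng.random(n) < 0.9).astype(int)    # ~10% right-censored runs

df = pd.DataFrame({
    "duration": duration, "observed": observed,
    "temperature": temperature, "humidity": humidity,
    "snow_depth": snow_depth, "route": route,
})

cph = CoxPHFitter()
# Stratifying by route lets each stratum keep its own baseline hazard while
# the weather covariates share common regression coefficients.
cph.fit(df, duration_col="duration", event_col="observed", strata=["route"])
cph.print_summary()
```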
Ethane is the most abundant non-methane hydrocarbon in the Earth's atmosphere and an important precursor of tropospheric ozone through various chemical pathways. Ethane is also an indirect greenhouse gas (global warming potential), influencing the atmospheric lifetime of methane through the consumption of the hydroxyl radical (OH). Understanding the development of trends and identifying trend reversals in atmospheric ethane is therefore crucial. Our dataset consists of four series of daily ethane columns obtained from ground-based FTIR measurements. Like many other decadal time series, our data are characterized by autocorrelation, heteroskedasticity, and seasonal effects. Additionally, missing observations due to instrument failure or unfavorable measurement conditions are common in such series. The goal of this paper is therefore to analyze trends in atmospheric ethane with statistical tools that correctly address these data features. We present selected methods designed for the analysis of time trends and trend reversals. We consider bootstrap inference on broken linear trends and smoothly varying nonlinear trends. In particular, for the broken trend model, we propose a bootstrap method for inference on the break location and the corresponding changes in slope. For the smooth trend model we construct simultaneous confidence bands around the nonparametrically estimated trend. Our autoregressive wild bootstrap approach, combined with a seasonal filter, is able to handle all issues mentioned above.
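The following is a minimal sketch of an autoregressive wild bootstrap around a fitted trend, based on the general description above and not the authors' code: the residuals are multiplied by a smooth AR(1) multiplier series so that serial dependence and heteroskedasticity survive resampling. The `trend_hat` input stands in for whichever trend estimator (broken linear or nonparametric) is used, and the AR parameter choice is illustrative.

```python
# Minimal sketch of an autoregressive wild bootstrap for a trended time series.
import numpy as np

def ar_wild_bootstrap(y, trend_hat, gamma=0.9, n_boot=999, seed=0):
    """Return bootstrap replicates of y around its fitted trend."""
    rng = np.random.default_rng(seed)
    resid = y - trend_hat                       # NaNs (missing days) stay NaN
    n = len(y)
    reps = np.empty((n_boot, n))
    for b in range(n_boot):
        # AR(1) multiplier series with unit variance.
        xi = np.empty(n)
        xi[0] = rng.normal(0.0, 1.0)
        innov = rng.normal(0.0, np.sqrt(1.0 - gamma ** 2), n)
        for t in range(1, n):
            xi[t] = gamma * xi[t - 1] + innov[t]
        # Wild multiplication preserves the residuals' heteroskedasticity.
        reps[b] = trend_hat + xi * resid
    return reps
```

Confidence bands for the trend follow by re-estimating the trend on each bootstrap replicate and taking pointwise or simultaneous quantiles.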
The historical and geographical spread from older to more modern languages has long been studied by examining textual changes and in terms of changes in phonetic transcriptions. However, it is more difficult to analyze language change from an acoustic point of view, although this is usually the dominant mode of transmission. We propose a novel analysis approach for acoustic phonetic data, where the aim will be to statistically model the acoustic properties of spoken words. We explore phonetic variation and change using a time-frequency representation, namely the log-spectrograms of speech recordings. We identify time and frequency covariance functions as a feature of the language; in contrast, mean spectrograms depend mostly on the particular word that has been uttered. We build models for the mean and covariances (taking into account the restrictions placed on the statistical analysis of such objects) and use these to define a phonetic transformation that models how an individual speaker would sound in a different language, allowing the exploration of phonetic differences between languages. Finally, we map back these transformations to the domain of sound recordings, allowing us to listen to the output of the statistical analysis. The proposed approach is demonstrated using recordings of the words corresponding to the numbers from one to ten as pronounced by speakers from five different Romance languages.
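As a rough sketch of the feature extraction described above (not the authors' pipeline), the code below computes a log-spectrogram with SciPy and estimates separate frequency and time covariance matrices, the objects the abstract treats as language-level features; a real analysis would also time-align recordings across speakers and regularize the covariance estimates.

```python
# Rough sketch: log-spectrogram of a recording and its time/frequency covariances.
import numpy as np
from scipy.signal import spectrogram

def log_spectrogram(x, fs, nperseg=256, noverlap=128, eps=1e-10):
    """Log power spectrogram of a 1-D audio signal x sampled at rate fs."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return f, t, 10.0 * np.log10(Sxx + eps)

def spectrogram_covariances(S):
    """S has shape (n_freq, n_time). The frequency covariance averages over
    time frames; the time covariance averages over frequency bins."""
    freq_cov = np.cov(S)       # (n_freq, n_freq)
    time_cov = np.cov(S.T)     # (n_time, n_time)
    return freq_cov, time_cov
```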