
Monte-Carlo methods for NLTE spectral synthesis of supernovae

Posted by: Mattias Ergon
Publication date: 2018
Research field: Physics
Paper language: English





We present JEKYLL, a new code for modelling supernova (SN) spectra and lightcurves based on Monte-Carlo (MC) techniques for the radiative transfer. The code assumes spherical symmetry, homologous expansion and steady state for the matter, but is otherwise capable of solving the time-dependent radiative transfer problem in non-local-thermodynamic-equilibrium (NLTE). The method was introduced in a series of papers by Lucy, but its full time-dependent NLTE capabilities have never been tested. Here, we have extended the method to include non-thermal excitation and ionization as well as charge-transfer and two-photon processes. Based on earlier work, the non-thermal rates are calculated by solving the Spencer-Fano equation. Using a method previously developed for the SUMO code, macroscopic mixing of the material is taken into account in a statistical sense. In addition, a statistical Markov-chain model is used to sample the emission frequency, and we introduce a method to control the sampling of the radiation field. Alongside a description of JEKYLL, we provide comparisons with the ARTIS, SUMO and CMFGEN codes, which show good agreement in the calculated spectra as well as in the state of the gas. In particular, the comparison with CMFGEN, which is similar in terms of physics but uses a different technique, shows that the Lucy method does indeed converge in the time-dependent NLTE case. Finally, as an example of the time-dependent NLTE capabilities of JEKYLL, we present a model of a Type IIb SN, taken from a set of models presented and discussed in detail in an accompanying paper. Based on this model, we investigate the effects of NLTE, in particular those arising from non-thermal excitation and ionization, and find strong effects even on the bolometric lightcurve. This highlights the need for full NLTE calculations when simulating the spectra and lightcurves of SNe.
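To make the packet-based approach concrete, below is a minimal, self-contained sketch of Lucy-style indivisible-energy-packet Monte-Carlo transport through a grey, spherically symmetric shell. All quantities (radii, opacity, packet count) are illustrative assumptions, and pure isotropic scattering stands in for JEKYLL's full NLTE emission machinery; this is not the code's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (not JEKYLL's actual) parameters for a single grey shell.
N_PACKETS = 10_000       # indivisible energy packets (Lucy's r-packets)
R_IN, R_OUT = 1.0, 10.0  # inner/outer shell radius, arbitrary units
KAPPA_RHO = 0.3          # assumed constant absorption coefficient (1/length)

def propagate_packet():
    """Propagate one packet; return True if it escapes the outer boundary."""
    r = R_IN
    mu = np.sqrt(rng.random())        # emission from the inner surface, p(mu) ~ mu
    while True:
        tau = -np.log(rng.random())   # optical depth to the next interaction
        s_int = tau / KAPPA_RHO       # physical distance to the interaction
        # distance to the outer boundary along the current direction
        s_out = -r * mu + np.sqrt(R_OUT**2 - r**2 * (1.0 - mu**2))
        if s_int >= s_out:
            return True               # escapes: would contribute to the spectrum
        # move to the interaction point (law of cosines for the new radius)
        r = np.sqrt(r**2 + s_int**2 + 2.0 * r * s_int * mu)
        if r < R_IN:
            return False              # interaction inside the core: absorbed
        mu = 2.0 * rng.random() - 1.0 # isotropic re-emission (grey scattering
                                      # replaces the full NLTE machinery here)

escaped = sum(propagate_packet() for _ in range(N_PACKETS))
print(f"escape fraction: {escaped / N_PACKETS:.3f}")
```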




Read also

81 - L. Martinez 2020
The progenitor and explosion properties of Type II supernovae (SNe II) are fundamental to understanding the evolution of massive stars. Special interest has been given to the range of initial masses of their progenitors but, despite the efforts made, it remains uncertain. Direct imaging of progenitors in pre-explosion images points to an upper initial-mass cutoff of $\sim$18 $M_{\odot}$. However, this is in tension with previous studies in which progenitor masses inferred by light-curve modelling tend to favour high-mass solutions. Moreover, it has been argued that light-curve modelling alone cannot provide a unique solution for the progenitor and explosion properties of SNe II. We develop a robust method to constrain the physical parameters of SNe II by simultaneously fitting their bolometric light curve and the evolution of the photospheric velocity to hydrodynamical models using statistical inference techniques. Pre-supernova red-supergiant models were created using the stellar evolution code MESA, varying the initial progenitor mass. The explosion of these progenitors was then processed through hydrodynamical simulations, where the explosion energy, the synthesised nickel mass, and the latter's spatial distribution within the ejecta were varied. We compare to observations via Markov chain Monte Carlo methods. We apply this method to a well-studied set of SNe with an observed progenitor in pre-explosion images and compare with results in the literature. Progenitor mass constraints are found to be consistent between our results and those derived from pre-SN imaging and the analysis of late-time spectral modelling. We have thus developed a robust method to infer the progenitor and explosion properties of SNe II that is consistent with other methods in the literature, which suggests that hydrodynamical modelling is able to accurately constrain the physical properties of SNe II.
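As a toy illustration of the inference step, the sketch below runs random-walk Metropolis over a single parameter (the nickel mass) against a synthetic exponential Ni-56 tail. The light-curve model, parameter values, and noise level are invented for the example; the paper's actual setup fits MESA-based hydrodynamical models to both the bolometric light curve and the photospheric-velocity evolution.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bolometric light-curve model (NOT the paper's hydrodynamical grid):
# a Ni-56 powered tail, L(t) = M_Ni * Q * exp(-t / tau_Ni).
TAU_NI = 8.8   # days, Ni-56 decay timescale
Q = 1.0        # arbitrary luminosity per unit nickel mass

def model_lc(t, m_ni):
    return m_ni * Q * np.exp(-t / TAU_NI)

# Synthetic "observations" with known truth m_ni = 0.07 and 5% noise.
t_obs = np.linspace(5.0, 60.0, 12)
sigma = 0.05 * model_lc(t_obs, 0.07)
L_obs = model_lc(t_obs, 0.07) + rng.normal(0.0, sigma)

def log_post(m_ni):
    if m_ni <= 0.0:
        return -np.inf                  # flat prior on m_ni > 0
    resid = (L_obs - model_lc(t_obs, m_ni)) / sigma
    return -0.5 * np.sum(resid**2)      # Gaussian log-likelihood

# Random-walk Metropolis over the single parameter m_ni.
chain, m = [], 0.05
lp = log_post(m)
for _ in range(20_000):
    prop = m + rng.normal(0.0, 0.005)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        m, lp = prop, lp_prop
    chain.append(m)

burned = np.array(chain[5_000:])        # discard burn-in
print(f"M_Ni = {burned.mean():.4f} +/- {burned.std():.4f}")
```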
We propose the clock Monte Carlo technique for sampling each successive chain step in constant time. It is built on a recently proposed factorized transition filter, and its core features are its $\mathrm{O}(1)$ computational complexity and its generality. We elaborate how it leads to the clock factorized Metropolis (clock FMet) method, and discuss its application in other update schemes. By grouping interaction terms into boxes of tunable sizes, we further formulate a variant of the clock FMet algorithm, with the limiting case of a single box reducing to the standard Metropolis method. A theoretical analysis shows that an overall acceleration of $\mathrm{O}(N^{\kappa})$ ($0 \leq \kappa \leq 1$) can be achieved compared to the Metropolis method, where $N$ is the system size and the value of $\kappa$ depends on the nature of the energy extensivity. As a systematic test, we simulate long-range $\mathrm{O}(n)$ spin models in a wide parameter regime: for $n = 1, 2, 3$, with disordered algebraically decaying or oscillatory Ruderman-Kittel-Kasuya-Yosida-type interactions, with and without external fields, and in spatial dimensions from $d = 1, 2, 3$ to mean-field. The $\mathrm{O}(1)$ computational complexity is demonstrated, and the expected acceleration is confirmed. The flexibility of the method and its independence from the interaction range guarantee that the clock technique will find decisive applications in systems with many interaction terms.
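The sketch below shows the factorized Metropolis filter that the clock method builds on, applied to a hypothetical 1D long-range Ising chain: a single-spin flip succeeds only if every pairwise factor accepts independently. The full clock trick, which samples the first rejecting factor directly in O(1) expected time using bounds on the factor probabilities, is omitted; the model and parameters are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Factorized Metropolis filter for a 1D long-range Ising chain, J_ij ~ 1/d^2.
# Each pair interaction accepts independently; the move succeeds only if ALL
# factors accept. (The clock trick replaces this O(N) loop by sampling the
# first rejecting factor directly; here we keep the plain loop for clarity.)
N, BETA = 64, 0.5
spins = rng.choice([-1, 1], size=N)

def factor_accepts(i, j):
    """One Metropolis factor for flipping spin i against neighbour j."""
    d = min(abs(i - j), N - abs(i - j))        # periodic distance
    dE = 2.0 * spins[i] * spins[j] / d**2      # energy change of this pair
    return rng.random() < min(1.0, np.exp(-BETA * dE))

def factorized_metropolis_step():
    i = rng.integers(N)
    for j in range(N):
        if j != i and not factor_accepts(i, j):
            return False                       # first rejection vetoes the move
    spins[i] *= -1
    return True

accepted = sum(factorized_metropolis_step() for _ in range(2_000))
print(f"acceptance rate: {accepted / 2_000:.3f}")
```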
115 - L. Velazquez, S. Curilef 2010
In this work, we discuss the implications of a recently obtained equilibrium fluctuation-dissipation relation for the extension of the available Monte Carlo methods based on the Gibbs canonical ensemble to account for the existence of an anomalous regime with negative heat capacities, $C<0$. The resulting framework appears as a suitable generalization of the methodology associated with the so-called dynamical ensemble, which we apply to the extension of two well-known Monte Carlo methods: Metropolis importance sampling and the Swendsen-Wang cluster algorithm. These Monte Carlo algorithms are employed to study the anomalous thermodynamic behaviour of the Potts model with many spin states $q$ defined on a $d$-dimensional hypercubic lattice with periodic boundary conditions. They successfully reduce the exponential divergence of the decorrelation time $\tau$ with increasing system size $N$ to a weak power-law divergence, $\tau \propto N^{\alpha}$ with $\alpha \approx 0.2$, for the particular case of the 2D 10-state Potts model.
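For orientation, here is a minimal sketch of the standard Metropolis baseline for the q-state Potts model on a 2D periodic lattice, the limiting case the paper generalizes. The dynamical-ensemble reweighting that handles the $C<0$ regime is not implemented here, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Standard Metropolis baseline for the 2D q-state Potts model. The paper's
# generalization modifies the acceptance rule via the dynamical ensemble;
# that extension is omitted in this sketch.
L, Q_STATES, BETA = 16, 10, 1.4
s = rng.integers(Q_STATES, size=(L, L))

def local_energy(x, y, state):
    """Negative count of like neighbours (ferromagnetic Potts, J = 1)."""
    nbrs = [s[(x+1) % L, y], s[(x-1) % L, y],
            s[x, (y+1) % L], s[x, (y-1) % L]]
    return -sum(state == n for n in nbrs)

for _ in range(50_000):                  # single-spin Metropolis updates
    x, y = rng.integers(L, size=2)
    new = rng.integers(Q_STATES)
    dE = local_energy(x, y, new) - local_energy(x, y, s[x, y])
    if dE <= 0 or rng.random() < np.exp(-BETA * dE):
        s[x, y] = new

# each bond is counted twice in the site sum, hence the factor 1/2
E = sum(local_energy(x, y, s[x, y]) for x in range(L) for y in range(L))
print("energy per site:", E / (2 * L**2))
```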
215 - Ajay Jasra, Kody Law, 2017
This article reviews the application of advanced Monte Carlo techniques in the context of Multilevel Monte Carlo (MLMC). MLMC is a strategy employed to compute expectations which can be biased in some sense, for instance by using the discretization of an associated probability law. The MLMC approach works with a hierarchy of biased approximations which become progressively more accurate and more expensive. Using a telescoping representation of the most accurate approximation, the method is able to reduce the computational cost for a given level of error compared with i.i.d. sampling from this latter approximation. All of these ideas originated for cases where exact sampling from the couples in the hierarchy is possible. This article considers the case where such exact sampling is not currently possible. We consider Markov chain Monte Carlo and sequential Monte Carlo methods which have been introduced in the literature, and we describe different strategies which facilitate the application of MLMC within these methods.
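The telescoping idea can be illustrated with a short sketch: a toy MLMC estimator of $E[X_T]$ for geometric Brownian motion, where the level-$l$ correction couples an Euler path of step $2^{-l}$ to a coarse path built from the same Brownian increments. The SDE, parameters, and per-level sample counts are assumptions chosen only to demonstrate the estimator's structure.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy MLMC estimator of E[X_T] for dX = mu*X dt + sig*X dW (GBM),
# Euler-discretized with step 2^-l. Fine and coarse paths share the same
# Brownian increments: the coupling the telescoping sum relies on.
MU, SIG, X0, T = 0.05, 0.2, 1.0, 1.0

def euler(level, n_samples):
    n = 2**level
    dt = T / n
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_samples, n))
    X = np.full(n_samples, X0)
    for k in range(n):
        X = X + MU * X * dt + SIG * X * dW[:, k]
    return X, dW

def mlmc_level(level, n_samples):
    """Sample P_l - P_{l-1} with coupled paths (P_{-1} := 0)."""
    Xf, dW = euler(level, n_samples)
    if level == 0:
        return Xf
    dWc = dW[:, 0::2] + dW[:, 1::2]       # coarse increments: pairwise sums
    dtc = T / 2**(level - 1)
    Xc = np.full(n_samples, X0)
    for k in range(dWc.shape[1]):
        Xc = Xc + MU * Xc * dtc + SIG * Xc * dWc[:, k]
    return Xf - Xc

# Telescoping sum: E[P_L] = sum_{l=0}^{L} E[P_l - P_{l-1}], with many cheap
# samples at low levels and few expensive samples at high levels.
L_MAX = 5
N_PER_LEVEL = [40_000, 20_000, 10_000, 5_000, 2_500, 1_250]
estimate = sum(mlmc_level(l, N_PER_LEVEL[l]).mean() for l in range(L_MAX + 1))
print(f"MLMC estimate: {estimate:.4f} (exact: {X0 * np.exp(MU * T):.4f})")
```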
Statistical signal processing applications usually require the estimation of some parameters of interest given a set of observed data. These estimates are typically obtained either by solving a multi-variate optimization problem, as in the maximum likelihood (ML) or maximum a posteriori (MAP) estimators, or by performing a multi-dimensional integration, as in the minimum mean squared error (MMSE) estimators. Unfortunately, analytical expressions for these estimators cannot be found in most real-world applications, and the Monte Carlo (MC) methodology is one feasible approach. MC methods proceed by drawing random samples, either from the desired distribution or from a simpler one, and using them to compute consistent estimators. The most important families of MC algorithms are Markov chain MC (MCMC) and importance sampling (IS). On the one hand, MCMC methods draw samples from a proposal density and then build an ergodic Markov chain whose stationary distribution is the desired distribution by accepting or rejecting those candidate samples as the new state of the chain. On the other hand, IS techniques draw samples from a simple proposal density and then assign them suitable weights that measure their quality in some appropriate way. In this paper, we perform a thorough review of MC methods for the estimation of static parameters in signal processing applications. A historical note on the development of MC schemes is also provided, followed by the basic MC method and a brief description of the rejection sampling (RS) algorithm, as well as three sections describing many of the most relevant MCMC and IS algorithms and their combined use.
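As a minimal example of the IS family discussed in the review, the sketch below uses self-normalized importance sampling with a Gaussian proposal to estimate a moment of an unnormalized one-dimensional target; the target, proposal width, and sample size are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Self-normalized importance sampling: estimate E_pi[x^2] for the
# unnormalized target pi(x) ~ exp(-x^4 / 4) using a Gaussian proposal q.
def log_target(x):
    return -x**4 / 4.0                 # unnormalized log-target

N, S = 100_000, 1.5
x = rng.normal(0.0, S, size=N)         # draw from the proposal q = N(0, S^2)
log_q = -0.5 * (x / S)**2 - np.log(S * np.sqrt(2.0 * np.pi))
log_w = log_target(x) - log_q          # unnormalized log-weights
w = np.exp(log_w - log_w.max())        # stabilize before normalizing
w /= w.sum()

post_mean_x2 = np.sum(w * x**2)        # weighted estimate of E_pi[x^2]
ess = 1.0 / np.sum(w**2)               # effective sample size diagnostic
print(f"E[x^2] ~= {post_mean_x2:.4f}, ESS = {ess:.0f} of {N}")
```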