Malliavin weight sampling (MWS) is a stochastic calculus technique for computing the derivatives of averaged system properties with respect to parameters in stochastic simulations, without perturbing the system's dynamics. It applies to systems in or out of equilibrium, in steady-state or time-dependent situations, and has applications in the calculation of response coefficients, parameter sensitivities and Jacobian matrices for gradient-based parameter optimisation algorithms. The implementation of MWS has previously been described in the specific contexts of kinetic Monte Carlo and Brownian dynamics simulation algorithms. Here, we present a general theoretical framework for deriving the appropriate MWS update rule for any stochastic simulation algorithm. We also provide pedagogical guidance on its practical implementation.
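As a concrete illustration of the kind of update rule involved, here is a minimal R sketch for a 1D overdamped Langevin equation integrated with the Euler-Maruyama scheme; this is not the paper's general derivation, and all parameter names and values are our own illustrative assumptions. At each step the Malliavin weight accumulates the derivative of the log-probability of the random move with respect to the parameter of interest, and the sensitivity is read off as the average of the observable multiplied by the weight.

# Sketch: Malliavin weight sampling for dx = -k*x dt + sqrt(2*D) dW,
# estimating d<x^2>/dk without perturbing the dynamics.
# Illustrative assumptions: k = D = 1, x(0) = 0.
set.seed(1)
k <- 1; D <- 1; dt <- 1e-3
nsteps <- 5000          # total time T = 5 relaxation times
ntraj  <- 5000          # independent trajectories, propagated in parallel
x <- numeric(ntraj)     # states
q <- numeric(ntraj)     # Malliavin weights, q(0) = 0
for (i in seq_len(nsteps)) {
  dadk <- -x            # derivative of the drift a(x) = -k*x w.r.t. k
  r <- rnorm(ntraj)     # unit-variance Gaussian increments
  x <- x - k * x * dt + sqrt(2 * D * dt) * r
  # Weight update = d(log transition probability)/dk, which for
  # Gaussian noise reduces to (da/dk) * r * sqrt(dt / (2*D)).
  q <- q + dadk * r * sqrt(dt / (2 * D))
}
# <A q> estimates d<A>/dk; the exact steady-state value is -D/k^2 = -1.
cat("MWS estimate of d<x^2>/dk:", mean(x^2 * q), "\n")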
In this review, we present a simple guide for researchers to generate pseudo-random samples with censored data. We focus on the most common types of censoring: type I, type II, and random censoring. We also discuss the steps needed to sample pseudo-random values from long-term survival models that include an additional cure fraction. For illustrative purposes, these techniques are applied to the Weibull distribution. The algorithms and R code are presented, enabling the reproducibility of our study.
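To give a flavour of what such sampling schemes look like, the three censoring mechanisms and the cure-fraction extension can be simulated for the Weibull distribution as in the R sketch below; all parameter values are our own illustrative assumptions, not taken from the review.

# Sketch: pseudo-random Weibull survival times under common censoring schemes.
# Illustrative assumptions: shape, scale, tmax, censoring rate, cure fraction.
set.seed(42)
n <- 1000
shape <- 1.5; scale <- 2.0
t_true <- rweibull(n, shape = shape, scale = scale)

# Type I censoring: follow-up ends at a fixed time tmax.
tmax <- 3.0
time_I   <- pmin(t_true, tmax)
status_I <- as.numeric(t_true <= tmax)   # 1 = event observed, 0 = censored

# Type II censoring: observation stops at the r-th smallest event time.
r <- 700
cutoff    <- sort(t_true)[r]
time_II   <- pmin(t_true, cutoff)
status_II <- as.numeric(t_true <= cutoff)

# Random censoring: independent censoring times, here exponential.
c_rand   <- rexp(n, rate = 0.3)
time_R   <- pmin(t_true, c_rand)
status_R <- as.numeric(t_true <= c_rand)

# Cure fraction p: a cured subject never experiences the event,
# so it is always censored at the end of follow-up.
p <- 0.2
cured    <- rbinom(n, 1, p) == 1
t_cure   <- ifelse(cured, Inf, rweibull(n, shape, scale))
time_C   <- pmin(t_cure, tmax)
status_C <- as.numeric(t_cure <= tmax)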
The SMEFTsim package is designed to enable automated computations in the Standard Model Effective Field Theory (SMEFT), where the SM Lagrangian is extended with a complete basis of dimension-six operators. It contains a set of models written in FeynRules and pre-exported to the UFO format, for use within Monte Carlo event generators. The models differ in the flavor assumptions and in the input parameters chosen for the electroweak sector. The present document provides a self-contained, pedagogical reference that collects all the theoretical and technical aspects relevant to the use of SMEFTsim, and it documents the release of version 3.0. Compared to the previous release, the description of Higgs production via gluon fusion in the SM has been significantly improved, two flavor assumptions for studies in the top-quark sector have been added, and a new feature has been implemented that allows the treatment of linearized SMEFT corrections to the propagators of unstable particles.
Multi-image alignment, bringing a group of images into common register, is a ubiquitous problem and the first step of many applications in a wide variety of domains. As a result, a great amount of effort is being invested in developing efficient multi-image alignment algorithms. Little has been done, however, to answer fundamental practical questions such as: What is the comparative performance of existing methods? Is there still room for improvement? Under which conditions should one technique be preferred over another? Does adding more images or prior image information improve the registration results? In this work, we present a thorough analysis and evaluation of the main multi-image alignment methods which, combined with theoretical limits on multi-image alignment performance, allows us to organize them under a common framework and provide practical answers to these essential questions.
We have recently proposed a new method of flow analysis based on a cumulant expansion of multiparticle azimuthal correlations. Here, we describe the practical implementation of the method. Its major improvement over traditional methods is that the cumulant expansion eliminates, order by order, correlations that are not due to flow, which are often large but usually neglected.
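To make this concrete at the lowest order, the second-order cumulant can be built from the flow vector Q_2, with the multiplicity M subtracted to remove self-correlations, the simplest example of a correlation not due to flow. The R sketch below uses a toy event sample of our own devising (true v2 = 0.05, M = 300); it is not the paper's full generating-function machinery, which extends this construction to higher orders.

# Sketch: lowest-order cumulant estimate of elliptic flow, v2{2} = sqrt(c2{2}),
# from azimuthal angles via the Q-vector. Toy assumptions: v2 = 0.05, M = 300.
set.seed(7)
nev <- 2000; M <- 300; v2 <- 0.05
c2 <- numeric(nev)
for (e in seq_len(nev)) {
  psi <- runif(1, 0, 2 * pi)            # random event-plane angle
  # Sample angles from dN/dphi ~ 1 + 2 v2 cos(2(phi - psi)) by rejection.
  phi <- numeric(0)
  while (length(phi) < M) {
    cand <- runif(2 * M, 0, 2 * pi)
    keep <- runif(2 * M) < (1 + 2 * v2 * cos(2 * (cand - psi))) / (1 + 2 * v2)
    phi  <- c(phi, cand[keep])
  }
  phi <- phi[1:M]
  Q2 <- sum(exp(2i * phi))              # second-harmonic Q-vector
  # <2> = (|Q2|^2 - M) / (M (M - 1)): the -M removes self-correlations.
  c2[e] <- (Mod(Q2)^2 - M) / (M * (M - 1))
}
cat("v2{2} =", sqrt(mean(c2)), " (true v2 = 0.05)\n")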
In this guide, we show how to perform constraint-based causal discovery using three popular software packages: pcalg (with the add-ons tpc and micd), bnlearn, and TETRAD. We focus on how these packages can be used with observational data and in the presence of mixed data (i.e., data where some variables are continuous while others are categorical), a known time ordering between variables, and missing data. Throughout, we point out the relative strengths and limitations of each package and give practical recommendations. We hope this guide helps anyone interested in performing constraint-based causal discovery on their data.
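As a minimal starting point, the PC algorithm from pcalg can be run on continuous observational data as in the R sketch below; the simulated data set and the significance level alpha = 0.05 are our own illustrative choices, and the packages' documentation covers the full workflows discussed in the guide.

# Sketch: PC algorithm on toy continuous observational data with pcalg.
library(pcalg)
set.seed(123)
n <- 500
x <- rnorm(n)
y <- 0.8 * x + rnorm(n)
z <- 0.5 * x - 0.7 * y + rnorm(n)
dat <- data.frame(x, y, z)
# Sufficient statistics for the Gaussian conditional-independence test.
suffStat <- list(C = cor(dat), n = nrow(dat))
pc.fit <- pc(suffStat, indepTest = gaussCItest,
             alpha = 0.05, labels = colnames(dat))
plot(pc.fit)  # plotting the estimated CPDAG requires the Rgraphviz package
# A comparable constraint-based search in bnlearn: bnlearn::pc.stable(dat)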