We have recently proposed a new method of flow analysis based on a cumulant expansion of multiparticle azimuthal correlations. Here, we describe the practical implementation of the method. The major improvement over traditional methods is that the cumulant expansion eliminates, order by order, correlations that are not due to flow, which are often large but usually neglected.
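As a concrete illustration of the cumulant idea (a sketch of the standard Q-vector formulas, not the paper's own implementation), the two- and four-particle azimuthal correlations can be computed per event with self-correlations removed, and combined into the cumulants $c_2\{2\}$ and $c_2\{4\}$. The toy event generator and all parameter choices below are our own assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def sample_event(mult, v2):
    """Azimuthal angles from dN/dphi ∝ 1 + 2 v2 cos(2 phi), by rejection sampling."""
    phis = np.empty(0)
    while phis.size < mult:
        cand = rng.uniform(0.0, 2.0 * np.pi, size=2 * mult)
        keep = rng.uniform(0.0, 1.0 + 2.0 * v2, size=cand.size) \
               < 1.0 + 2.0 * v2 * np.cos(2.0 * cand)
        phis = np.concatenate([phis, cand[keep]])
    return phis[:mult]

def event_correlations(phis, n=2):
    """Single-event <2> and <4> from Q-vectors, with self-correlations removed."""
    M = len(phis)
    Qn = np.sum(np.exp(1j * n * phis))
    Q2n = np.sum(np.exp(2j * n * phis))
    two = (np.abs(Qn) ** 2 - M) / (M * (M - 1))
    four = (np.abs(Qn) ** 4 + np.abs(Q2n) ** 2
            - 2.0 * (Q2n * np.conj(Qn) ** 2).real
            - 2.0 * (2.0 * (M - 2) * np.abs(Qn) ** 2 - M * (M - 3))
            ) / (M * (M - 1) * (M - 2) * (M - 3))
    return two, four

v2_in, mult, n_events = 0.15, 1000, 5000
twos, fours = zip(*(event_correlations(sample_event(mult, v2_in))
                    for _ in range(n_events)))
c22 = np.mean(twos)                  # c2{2} = <<2>>
c24 = np.mean(fours) - 2.0 * c22 ** 2    # c2{4} = <<4>> - 2<<2>>^2
v2_2 = np.sqrt(c22)                  # v2{2} = sqrt(c2{2})
v2_4 = (-c24) ** 0.25                # v2{4} = (-c2{4})^(1/4)
print(f"v2{{2}} = {v2_2:.3f}, v2{{4}} = {v2_4:.3f}")
```

With pure flow and no nonflow, both estimates should recover the input $v_2$; adding few-particle clusters to the toy generator would shift $v_2\{2\}$ but leave $v_2\{4\}$ nearly unchanged, which is the order-by-order suppression the abstract refers to.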
The SMEFTsim package is designed to enable automated computations in the Standard Model Effective Field Theory (SMEFT), in which the SM Lagrangian is extended with a complete basis of dimension-six operators. It contains a set of models written in FeynRules and pre-exported to the UFO format, for use within Monte Carlo event generators. The models differ in their flavor assumptions and in the input parameters chosen for the electroweak sector. The present document provides a self-contained, pedagogical reference that collects all the theoretical and technical aspects relevant to the use of SMEFTsim, and it documents the release of version 3.0. Compared to the previous release, the description of Higgs production via gluon fusion in the SM has been significantly improved, two flavor assumptions for studies in the top-quark sector have been added, and a new feature has been implemented that allows the treatment of linearized SMEFT corrections to the propagators of unstable particles.
The azimuthal correlations between local flow planes at different (pseudo)rapidities ($\eta$) may reveal important details of the initial nuclear matter density distributions in heavy-ion collisions. Extensive experimental measurements of a factorization ratio ($r_2$) and its derivative ($F_2$) have shown evidence of longitudinal flow-plane decorrelation. However, nonflow effects also contribute to this observable and prevent a quantitative understanding of the phenomenon. In this paper, to distinguish decorrelation from nonflow effects, we propose a new cumulant observable, $T_2$, which largely suppresses nonflow. The sensitivity of the technique to different initial-state scenarios and to nonflow effects is tested with a simple Monte Carlo model, and finally the method is applied to events simulated by a multiphase transport model (AMPT) for Au+Au collisions at $\sqrt{s_{\rm NN}} = 200$ GeV.
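The definition of $T_2$ is given in the paper itself; the baseline observable it improves on, the factorization ratio $r_2(\eta^a,\eta^b) = \langle Q_2(-\eta^a)Q_2^*(\eta^b)\rangle / \langle Q_2(\eta^a)Q_2^*(\eta^b)\rangle$, can however be illustrated with a minimal toy model. The sketch below assumes a pure "twist" decorrelation, in which the flow-plane angle rotates linearly with $\eta$; the bin placement, multiplicities, and twist strength are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(7)

def bin_qvector(mult, v2, psi):
    """Q2-vector for one eta bin: angles from dN/dphi ∝ 1 + 2 v2 cos(2(phi - psi))."""
    phis = np.empty(0)
    while phis.size < mult:
        cand = rng.uniform(0.0, 2.0 * np.pi, size=2 * mult)
        keep = rng.uniform(0.0, 1.0 + 2.0 * v2, size=cand.size) \
               < 1.0 + 2.0 * v2 * np.cos(2.0 * (cand - psi))
        phis = np.concatenate([phis, cand[keep]])
    return np.sum(np.exp(2j * phis[:mult]))

# Toy decorrelation: flow-plane angle Psi_2(eta) = Psi_0 + twist * eta.
v2, mult, twist = 0.1, 100, 0.1
eta_a, eta_b = 1.0, 3.0
num, den = 0.0, 0.0
for _ in range(5000):
    psi0 = rng.uniform(0.0, 2.0 * np.pi)               # random global event plane
    q_m = bin_qvector(mult, v2, psi0 - twist * eta_a)  # bin at -eta_a
    q_p = bin_qvector(mult, v2, psi0 + twist * eta_a)  # bin at +eta_a
    q_b = bin_qvector(mult, v2, psi0 + twist * eta_b)  # reference bin at eta_b
    num += (q_m * np.conj(q_b)).real
    den += (q_p * np.conj(q_b)).real

r2 = num / den
print(f"r2 = {r2:.3f}")   # r2 < 1 signals longitudinal decorrelation
```

For this twist model the ratio should come out near $\cos(2t(\eta^a+\eta^b))/\cos(2t(\eta^b-\eta^a)) \approx 0.76$; injecting short-range nonflow pairs into the toy would bias $r_2$ further, which is the contamination the proposed $T_2$ observable is designed to suppress.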
Malliavin weight sampling (MWS) is a stochastic calculus technique for computing the derivatives of averaged system properties with respect to parameters in stochastic simulations, without perturbing the system's dynamics. It applies to systems in or out of equilibrium, in steady-state or time-dependent situations, and has applications in the calculation of response coefficients, parameter sensitivities, and Jacobian matrices for gradient-based parameter optimisation algorithms. The implementation of MWS has previously been described in the specific contexts of kinetic Monte Carlo and Brownian dynamics simulation algorithms. Here, we present a general theoretical framework for deriving the appropriate MWS update rule for any stochastic simulation algorithm. We also provide pedagogical information on its practical implementation.
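A minimal sketch of the idea for Brownian dynamics, under our own choice of test system (an Ornstein–Uhlenbeck process, where the answer is known analytically): each trajectory carries a weight $q$ that accumulates the derivative of the log-probability of each noisy step with respect to the parameter, and $\mathrm{d}\langle A\rangle/\mathrm{d}f$ is then estimated as $\langle A\,q\rangle$ over unperturbed trajectories:

```python
import numpy as np

rng = np.random.default_rng(0)

# Overdamped Langevin: dx = (-k*x + f) dt + sqrt(2 D) dW, Euler-Maruyama scheme.
# Each step has Gaussian propagator with mean (-k x + f) dt and variance 2 D dt,
# so d(ln P_step)/df = (dx - mean)/(2 D) = xi * sqrt(dt / (2 D)).
k, f, D = 1.0, 1.0, 1.0
dt, n_steps, n_traj = 0.01, 200, 20000     # total time T = 2

x = np.zeros(n_traj)    # all trajectories start at x = 0
q = np.zeros(n_traj)    # Malliavin weights start at 0

for _ in range(n_steps):
    xi = rng.standard_normal(n_traj)
    x += (-k * x + f) * dt + np.sqrt(2.0 * D * dt) * xi
    q += xi * np.sqrt(dt / (2.0 * D))      # weight update: score of this step

mws = np.mean(x * q)    # MWS estimate of d<x(T)>/df, no perturbed re-run needed
exact = (1.0 - np.exp(-k * n_steps * dt)) / k   # analytic OU result for comparison
print(f"MWS: {mws:.3f}  exact: {exact:.3f}")
```

The same pattern, with the step-propagator score swapped in, carries over to other stochastic schemes, which is the generalization the paper develops.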
Multi-image alignment, bringing a group of images into common register, is a ubiquitous problem and the first step of many applications in a wide variety of domains. As a result, a great amount of effort is being invested in developing efficient multi-image alignment algorithms. Little has been done, however, to answer fundamental practical questions such as: What is the comparative performance of existing methods? Is there still room for improvement? Under which conditions should one technique be preferred over another? Does adding more images or prior image information improve the registration results? In this work, we present a thorough analysis and evaluation of the main multi-image alignment methods which, combined with theoretical limits on multi-image alignment performance, allows us to organize them under a common framework and provide practical answers to these essential questions.
In this review, we present a simple guide for researchers to obtain pseudo-random samples with censored data. We focus our attention on the most common types of censoring: type I, type II, and random censoring. We discuss the steps necessary to sample pseudo-random values from long-term survival models in which an additional cure fraction is included. For illustrative purposes, these techniques are applied to the Weibull distribution. The algorithms and codes in R are presented, enabling the reproducibility of our study.
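The paper's code is in R; as a rough Python/NumPy transcription of the same four sampling schemes (function names and parameter choices here are ours, not the paper's), each routine returns the observed times together with the event indicator $\delta$ (1 = event observed, 0 = censored):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_type1(n, shape, scale, c):
    """Type I: observation stops at a fixed time c; times beyond c are censored."""
    t = scale * rng.weibull(shape, size=n)
    delta = (t <= c).astype(int)
    return np.minimum(t, c), delta

def sample_type2(n, shape, scale, r):
    """Type II: observation stops at the r-th failure; the remaining n-r are censored."""
    t = np.sort(scale * rng.weibull(shape, size=n))
    c = t[r - 1]                              # time of the r-th ordered failure
    delta = (np.arange(n) < r).astype(int)
    return np.minimum(t, c), delta

def sample_random(n, shape, scale, cens_rate):
    """Random censoring: independent exponential censoring times C_i."""
    t = scale * rng.weibull(shape, size=n)
    c = rng.exponential(1.0 / cens_rate, size=n)
    delta = (t <= c).astype(int)
    return np.minimum(t, c), delta

def sample_cure(n, shape, scale, p_cure, c):
    """Mixture cure model: a fraction p_cure never experiences the event,
    combined here with type I censoring at time c."""
    t = scale * rng.weibull(shape, size=n)
    cured = rng.uniform(size=n) < p_cure
    t[cured] = np.inf                         # cured subjects never fail
    delta = (t <= c).astype(int)
    return np.minimum(t, c), delta
```

For example, `sample_type2(100, 1.5, 2.0, 60)` always yields exactly 60 observed events, while in `sample_cure` the censored fraction is bounded below by the cure fraction `p_cure`.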