
Probabilistic Jet Algorithms

Submitted by: E. W. Nigel Glover
Publication date: 1997
Language: English
Author: W.T. Giele





Conventional jet algorithms are based on a deterministic view of the underlying hard scattering process. Each outgoing parton from the hard scattering is associated with a hard, well separated jet. This approach is very successful because it allows quantitative predictions using lowest order perturbation theory. However, beyond leading order in the coupling constant, when quantum fluctuations are included, deterministic jet algorithms will become problematic precisely because they attempt to describe an inherently stochastic quantum process using deterministic, classical language. This demands a shift in the way we view jet algorithms. We make a first attempt at constructing more probabilistic jet algorithms that reflect the properties of the underlying hard scattering and explore the basic properties and problems of such an approach.
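The abstract stays at the conceptual level, but the basic shift it argues for, replacing a hard particle-to-jet assignment with weights that can be shared among several jets, is easy to illustrate. The sketch below is a minimal toy soft assignment in the (eta, phi) plane; the Gaussian kernel, the sigma parameter, and the function name are illustrative assumptions, not the construction explored by Giele and Glover.

```python
import numpy as np

def soft_jet_weights(particles, axes, sigma=0.4):
    """Toy probabilistic assignment of particles to jet axes.

    particles: (n, 2) array of (eta, phi) for final-state particles.
    axes:      (m, 2) array of (eta, phi) for candidate jet axes.
    Returns an (n, m) matrix whose row i gives the 'probability'
    that particle i belongs to each jet, in place of the hard,
    deterministic assignment of a conventional algorithm.
    """
    deta = particles[:, 0, None] - axes[None, :, 0]
    dphi = particles[:, 1, None] - axes[None, :, 1]
    dphi = (dphi + np.pi) % (2 * np.pi) - np.pi  # wrap phi into [-pi, pi]
    dr2 = deta**2 + dphi**2

    # A smooth kernel replaces the sharp cone/cluster boundary.
    w = np.exp(-dr2 / (2 * sigma**2))
    return w / w.sum(axis=1, keepdims=True)
```

An observable would then be built by weighting each particle's contribution to every jet by these probabilities, rather than committing to a single assignment.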



Read also

We correct an important misprint in the journal version of our earlier work on New Jet Cluster Algorithms: Next-to-leading Order QCD..., published in Nucl. Phys. B 370 (1992) 310, which may have led to an incorrect parametrisation of the leading order QCD coefficients for the JADE-type jet cluster algorithms.
We determine the jet vertex for Mueller-Navelet jets and forward jets in the small-cone approximation for two particular choices of jet algorithms: the kt algorithm and the cone algorithm. These choices are motivated by the extensive use of such algorithms in the phenomenology of jets. The differences with the original calculations of the small-cone jet vertex by Ivanov and Papa, which is found to be equivalent to an algorithm formerly proposed by Furman, are shown at both the analytic and numerical level, and turn out to be sizeable. A detailed numerical study of the error introduced by the small-cone approximation is also presented for various observables of phenomenological interest. For a jet radius of R=0.5, the small-cone approximation introduces an error of about 5% at the level of the cross section, which reduces to less than 2% for ratios of distributions such as those involved in the measurement of the azimuthal decorrelation of dijets.
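For orientation, the kt algorithm referred to here is defined by a standard pairwise distance measure. The sketch below reproduces that textbook measure only (it is not the small-cone jet vertex computed in the paper), with R playing the same role as the jet radius in the quoted error estimates.

```python
import numpy as np

def kt_distances(pt, eta, phi, R=0.5):
    """Distance measures of the (inclusive) kt jet algorithm.

    d_ij = min(pt_i^2, pt_j^2) * DeltaR_ij^2 / R^2   (pair distance)
    d_iB = pt_i^2                                    (beam distance)

    A full clustering loop would repeatedly merge the pair with the
    smallest d_ij, or promote pseudojet i to a final jet when d_iB
    is the smallest entry; only the distances are sketched here.
    """
    deta = eta[:, None] - eta[None, :]
    dphi = (phi[:, None] - phi[None, :] + np.pi) % (2 * np.pi) - np.pi
    dr2 = deta**2 + dphi**2
    dij = np.minimum(pt[:, None], pt[None, :])**2 * dr2 / R**2
    diB = pt**2
    return dij, diB
```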
This paper considers the problem of cardinality estimation in data stream applications. We present a statistical analysis of probabilistic counting algorithms, focusing on two techniques that use pseudo-random variates to form low-dimensional data sketches. We apply conventional statistical methods to compare probabilistic algorithms based on storing either selected order statistics or random projections. We derive estimators of the cardinality in both cases, and show that the maximal-term estimator is recursively computable and has exponentially decreasing error bounds. Furthermore, we show that the estimators have comparable asymptotic efficiency, and explain this result by demonstrating an unexpected connection between the two approaches.
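As a concrete instance of the order-statistics technique analysed here, the sketch below implements the classic k-minimum-values estimator: hash each item to a pseudo-uniform point in [0, 1), keep the k smallest hash values, and estimate the cardinality from the k-th smallest. The hash choice and the estimator (k-1)/M_k are standard textbook choices, not necessarily the exact maximal-term estimator studied in the paper.

```python
import hashlib
import heapq

def kmv_estimate(stream, k=256):
    """Cardinality estimate from the k smallest hash values (KMV).

    Each item is hashed to a pseudo-uniform point in [0, 1) and the
    k smallest values are retained; if M_k is the k-th smallest,
    (k - 1) / M_k is the classic unbiased order-statistics estimator.
    """
    heap = []     # max-heap (via negation) holding the k smallest hashes
    seen = set()  # guard against re-inserting an identical hash value
    for item in stream:
        h = hashlib.sha1(str(item).encode()).digest()
        u = int.from_bytes(h[:8], "big") / 2**64  # pseudo-uniform in [0, 1)
        if u in seen:
            continue
        if len(heap) < k:
            heapq.heappush(heap, -u)
            seen.add(u)
        elif u < -heap[0]:
            # Evict the current k-th smallest and keep the new value.
            seen.discard(-heapq.heappushpop(heap, -u))
            seen.add(u)
    if len(heap) < k:
        return float(len(heap))   # fewer than k distinct hashes seen
    return (k - 1) / (-heap[0])   # -heap[0] is the k-th smallest hash
```

The relative error of such sketches decays roughly like 1/sqrt(k), so doubling the sketch size buys about a sqrt(2) improvement in accuracy.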
We connect the study of pseudodeterministic algorithms to two major open problems about the structural complexity of $\mathsf{BPTIME}$: proving hierarchy theorems and showing the existence of complete problems. Our main contributions can be summarised as follows. 1. We build on techniques developed to prove hierarchy theorems for probabilistic time with advice (Fortnow and Santhanam, FOCS 2004) to construct the first unconditional pseudorandom generator of polynomial stretch computable in pseudodeterministic polynomial time (with one bit of advice) that is secure infinitely often against polynomial-time computations. As an application of this construction, we obtain new results about the complexity of generating and representing prime numbers. 2. Oliveira and Santhanam (STOC 2017) established unconditionally that there is a pseudodeterministic algorithm for the Circuit Acceptance Probability Problem ($\mathsf{CAPP}$) that runs in sub-exponential time and is correct with high probability over any samplable distribution on circuits on infinitely many input lengths. We show that improving this running time or obtaining a result that holds for every large input length would imply new time hierarchy theorems for probabilistic time. In addition, we prove that a worst-case polynomial-time pseudodeterministic algorithm for $\mathsf{CAPP}$ would imply that $\mathsf{BPP}$ has complete problems. 3. We establish an equivalence between pseudodeterministic construction of strings of large $\mathsf{rKt}$ complexity (Oliveira, ICALP 2019) and the existence of strong hierarchy theorems for probabilistic time. More generally, these results suggest new approaches for designing pseudodeterministic algorithms for search problems and for unveiling the structure of probabilistic time.
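For readers new to the term: a pseudodeterministic algorithm may use randomness internally but must return one canonical answer with high probability, so independent runs agree. The toy sketch below shows the standard amplification idea behind that guarantee; it is purely illustrative and does not reproduce any construction from the paper.

```python
def pseudodet_search(candidates, prob_test, trials=101):
    """Toy pseudodeterministic search over an ordered candidate list.

    prob_test(x) is a randomized predicate that errs with probability
    at most 1/3 on every input. Majority-voting over `trials`
    independent runs makes the per-candidate error exponentially
    small, so with high probability every invocation returns the
    *same* canonical answer: the first candidate that truly passes.
    """
    for x in candidates:
        votes = sum(prob_test(x) for _ in range(trials))
        if 2 * votes > trials:  # amplified accept
            return x
    return None  # no candidate accepted
```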
There has been a recent resurgence of interest in explainable artificial intelligence (XAI) that aims to reduce the opaqueness of AI-based decision-making systems, allowing humans to scrutinize and trust them. Prior work in this context has focused on the attribution of responsibility for an algorithm's decisions to its inputs, wherein responsibility is typically approached as a purely associational concept. In this paper, we propose a principled causality-based approach for explaining black-box decision-making systems that addresses limitations of existing methods in XAI. At the core of our framework lies probabilistic contrastive counterfactuals, a concept that can be traced back to philosophical, cognitive, and social foundations of theories on how humans generate and select explanations. We show how such counterfactuals can quantify the direct and indirect influences of a variable on decisions made by an algorithm, and provide actionable recourse for individuals negatively affected by the algorithm's decision. Unlike prior work, our system, LEWIS: (1) can compute provably effective explanations and recourse at local, global, and contextual levels; (2) is designed to work with users with varying levels of background knowledge of the underlying causal model; and (3) makes no assumptions about the internals of an algorithmic system except for the availability of its input-output data. We empirically evaluate LEWIS on three real-world datasets and show that it generates human-understandable explanations that improve upon state-of-the-art approaches in XAI, including the popular LIME and SHAP. Experiments on synthetic data further demonstrate the correctness of LEWIS's explanations and the scalability of its recourse algorithm.
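As background for "probabilistic contrastive counterfactuals", the sketch below computes two textbook counterfactual quantities, Pearl's probabilities of necessity and sufficiency, directly from input-output data. The closed forms hold only under the standard exogeneity and monotonicity assumptions (Tian and Pearl, 2000); this is generic background, not LEWIS's actual estimation procedure.

```python
import numpy as np

def necessity_sufficiency(treated, outcome):
    """Probabilities of necessity (PN) and sufficiency (PS).

    treated, outcome: boolean arrays over individuals (e.g., a binary
    input feature and the algorithm's binary decision). Under
    exogeneity and monotonicity, the counterfactual quantities reduce
    to the observable contrasts below.
    """
    p_y_x = outcome[treated].mean()       # P(Y=1 | X=1)
    p_y_nx = outcome[~treated].mean()     # P(Y=1 | X=0)
    pn = (p_y_x - p_y_nx) / p_y_x         # would Y flip if X were removed?
    ps = (p_y_x - p_y_nx) / (1 - p_y_nx)  # would X alone bring Y about?
    return pn, ps
```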