Accuracy of the geometric-mean method for determining spatial resolutions of tracking detectors in the presence of multiple Coulomb scattering


Abstract

The geometric-mean method is often used to estimate the spatial resolution of a position-sensitive detector probed by tracks. It calculates the resolution solely from measured track data, without a detailed tracking simulation and without considering multiple Coulomb scattering effects. Two separate linear track fits are performed on the same data, one excluding and the other including the hit from the probed detector. The geometric mean of the widths of the corresponding exclusive and inclusive residual distributions is then taken as a measure of the intrinsic spatial resolution of the probed detector: $\sigma=\sqrt{\sigma_\mathrm{ex}\cdot\sigma_\mathrm{in}}$. The validity of this method is examined for a range of resolutions with a stand-alone Geant4 Monte Carlo simulation that specifically takes multiple Coulomb scattering in the tracking-detector materials into account. Using simulated as well as actual tracking data from a representative beam-test scenario, we find that the geometric-mean method gives systematically inaccurate spatial resolution results. Good resolutions are estimated as poor and vice versa. The more the resolutions of the reference detectors and the probed detector differ, the larger the systematic bias. An attempt to correct this inaccuracy by statistically subtracting multiple-scattering effects from the geometric-mean results yields resolutions that are typically too optimistic by 10-50%. This supports an earlier critique of the method based on simulation studies that did not take multiple scattering into account.
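
As an illustration of the construction examined here, the following minimal Python/NumPy sketch (not part of the paper) applies the geometric-mean method to a hypothetical five-plane beam telescope. Hits are smeared with Gaussian intrinsic resolutions only, multiple Coulomb scattering is deliberately omitted, and the straight-line fits are weighted by the true per-plane resolutions; under these idealised assumptions the identity $\sigma=\sqrt{\sigma_\mathrm{ex}\cdot\sigma_\mathrm{in}}$ recovers the true resolution of the probed plane. All plane positions, resolutions, and beam parameters below are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical telescope geometry: z positions of the detector planes (mm)
# and their intrinsic resolutions (mm); plane index 2 is the probed detector.
z = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
sigma_true = np.array([0.010, 0.010, 0.030, 0.010, 0.010])
probe = 2

# Generate straight tracks and smear the impact points with the intrinsic
# resolutions only; multiple Coulomb scattering is deliberately not simulated.
n_tracks = 50_000
slope = rng.normal(0.0, 1e-3, n_tracks)
intercept = rng.normal(0.0, 0.1, n_tracks)
x_true = intercept[:, None] + slope[:, None] * z[None, :]
x_meas = x_true + rng.normal(0.0, sigma_true, (n_tracks, z.size))

def probe_residuals(include_probe: bool) -> np.ndarray:
    """Residuals at the probed plane from straight-line least-squares fits,
    weighted by the true resolutions, including or excluding the probe hit."""
    mask = np.ones(z.size, dtype=bool)
    mask[probe] = include_probe
    # Fit x = a + b*z for all tracks at once; coeffs has shape (2, n_tracks).
    coeffs = np.polyfit(z[mask], x_meas[:, mask].T, deg=1,
                        w=1.0 / sigma_true[mask])
    x_fit = coeffs[1] + coeffs[0] * z[probe]
    return x_meas[:, probe] - x_fit

sigma_ex = probe_residuals(include_probe=False).std()
sigma_in = probe_residuals(include_probe=True).std()
print(f"sigma_ex  = {1e3 * sigma_ex:6.1f} um")
print(f"sigma_in  = {1e3 * sigma_in:6.1f} um")
print(f"geom mean = {1e3 * np.sqrt(sigma_ex * sigma_in):6.1f} um "
      f"(true resolution of probed plane: {1e3 * sigma_true[probe]:.1f} um)")

In the beam-test scenario studied in the paper, the material of the tracking detectors scatters the particles between planes, which breaks this idealised identity and leads to the systematic bias reported above.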
