
New vertex reconstruction algorithms for CMS

Publication date: 2003
Field: Physics
Language: English





The reconstruction of interaction vertices can be decomposed into a pattern recognition problem ("vertex finding") and a statistical problem ("vertex fitting"). We briefly review classical methods. We introduce novel approaches and motivate them in the framework of high-luminosity experiments such as those at the LHC. We then show comparisons with the classical methods in relevant physics channels.
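To make the "vertex fitting" step concrete, the sketch below is a toy least-squares fit that finds the point closest to a set of straight-line tracks. The function name, the straight-line track model, and the per-track resolution parameter are our own assumptions; the actual CMS fitters discussed in the paper (e.g. Kalman and adaptive estimators) are more sophisticated, but they address the same chi-square minimisation problem.

```python
import numpy as np

def fit_vertex(points, directions, sigmas):
    """Toy least-squares vertex fit from straight-line tracks.

    Each track i is modelled as the line p_i + t * d_i (d_i a unit vector);
    the vertex v minimises sum_i ||(I - d_i d_i^T)(v - p_i)||^2 / sigma_i^2.
    This is only an illustration, not the CMS vertex-fitting code.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d, s in zip(points, directions, sigmas):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto the plane transverse to the track
        W = P / s**2                     # weight by the track's assumed transverse resolution
        A += W
        b += W @ p
    return np.linalg.solve(A, b)

# Two toy tracks that both pass through the origin
points     = [np.array([1.0, 0.0, 0.1]), np.array([0.0, 2.0, 0.2])]
directions = [np.array([1.0, 0.0, 0.1]), np.array([0.0, 1.0, 0.1])]
print(fit_vertex(points, directions, sigmas=[0.01, 0.01]))   # ~ (0, 0, 0)
```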



Related research

K. Wozniak et al., 2006
The PHOBOS experiment at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory is studying interactions of heavy nuclei at the largest energies available in the laboratory. The high multiplicity of particles created in heavy-ion collisions makes precise vertex reconstruction possible using information from a spectrometer and a specialized vertex detector with relatively small acceptances. For lower-multiplicity events, a large-acceptance, single-layer multiplicity detector is used and special algorithms are developed to reconstruct the vertex, resulting in high efficiency at the expense of poorer resolution. The algorithms used in the PHOBOS experiment and their performance are presented.
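The efficiency-versus-resolution trade-off can be illustrated with a generic histogramming vertex finder: collect per-track (or per-tracklet) estimates of the collision point along the beam axis, histogram them, and take the most populated bin as the vertex. The sketch below is only a toy under our own assumptions (bin width, range, refinement step) and is not the PHOBOS implementation.

```python
import numpy as np

def histogram_vertex_z(z_estimates, z_range=(-20.0, 20.0), bin_width=0.5):
    """Toy histogramming vertex finder along the beam axis (z, in cm).

    Coarser bins raise the chance of finding a peak in low-multiplicity events
    (higher efficiency) at the cost of a poorer position resolution.
    """
    edges = np.arange(z_range[0], z_range[1] + bin_width, bin_width)
    counts, edges = np.histogram(z_estimates, bins=edges)
    i = int(np.argmax(counts))                       # seed: most populated bin
    lo, hi = max(i - 1, 0), min(i + 2, len(counts))  # refine with neighbouring bins
    centres = 0.5 * (edges[:-1] + edges[1:])
    return np.average(centres[lo:hi], weights=np.maximum(counts[lo:hi], 1e-9))

# Example: eight estimates from a vertex near z = +3 cm, plus two outliers
z = [2.7, 3.1, 2.9, 3.3, 2.8, 3.0, 3.2, 2.95, -7.0, 11.0]
print(histogram_vertex_z(z))
```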
Eric Gendron, 2006
Context. The knowledge of the point-spread function compensated by adaptive optics is of prime importance in several image restoration techniques such as deconvolution and astrometric/photometric algorithms. Wavefront-related data from the adaptive optics real-time computer can be used to accurately estimate the point-spread function in adaptive optics observations. The only point-spread function reconstruction algorithm implemented on an astronomical adaptive optics system makes use of particular functions, named $U_{ij}$. These $U_{ij}$ functions are derived from the mirror modes, and their number grows as the square of the number of mirror modes. Aims. We present here two new algorithms for point-spread function reconstruction that aim at avoiding the use of these $U_{ij}$ functions, both to avoid the storage of a large amount of data and to shorten the computation time of the PSF reconstruction. Methods. Both algorithms take advantage of the eigendecomposition of the residual parallel phase covariance matrix. In the first algorithm, the use of a basis in which this matrix is diagonal reduces the number of $U_{ij}$ functions to the number of mirror modes. In the second algorithm, the eigendecomposition is used to compute phase screens that follow the same statistics as the residual parallel phase covariance matrix, removing the need for the $U_{ij}$ functions altogether. Results. Our algorithms dramatically reduce the number of $U_{ij}$ functions to be computed for the point-spread function reconstruction. Adaptive optics simulations show that both algorithms reconstruct the point-spread function accurately.
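The core of the second idea, drawing phase screens whose second-order statistics match a given residual-phase covariance matrix, can be sketched with a plain eigendecomposition. The snippet below is an illustration under our own naming and normalisation assumptions, not the paper's implementation.

```python
import numpy as np

def phase_screens_from_covariance(C_res, n_screens, rng=None):
    """Draw modal coefficient vectors whose covariance matches C_res.

    C_res is assumed to be the covariance matrix of the residual parallel phase
    expressed on the mirror modes.  With C_res = V diag(L) V^T, sampling
    x = V sqrt(L) g with g ~ N(0, I) gives <x x^T> = C_res.
    """
    rng = np.random.default_rng() if rng is None else rng
    eigvals, V = np.linalg.eigh(C_res)           # symmetric eigendecomposition
    eigvals = np.clip(eigvals, 0.0, None)        # guard against tiny negative eigenvalues
    A = V * np.sqrt(eigvals)                     # A @ A.T == C_res
    g = rng.standard_normal((C_res.shape[0], n_screens))
    return A @ g                                 # one column per synthetic screen

# Toy 4-mode covariance; the sample covariance of many draws reproduces it
C = np.array([[2.0, 0.3, 0.0, 0.0],
              [0.3, 1.0, 0.1, 0.0],
              [0.0, 0.1, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.2]])
X = phase_screens_from_covariance(C, 10000)
print(np.round(np.cov(X), 2))
```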
Since the 1970s, much of traditional interferometric imaging has been built around variations of the CLEAN algorithm, in terminology, methodology, and algorithm development. Recent developments in applying new algorithms from convex optimization to interferometry have allowed old concepts to be viewed from a new perspective, ranging from image restoration to the development of computationally distributed algorithms. We present how this has ultimately led the authors to new perspectives in wide-field imaging, allowing for the first full individual non-coplanar corrections applied during imaging over extremely wide fields of view for the Murchison Widefield Array (MWA) telescope. Furthermore, this same mathematical framework has provided a novel understanding of wide-band polarimetry at low frequencies, where instrumental channel depolarization can be corrected through the new $\delta\lambda^2$-projection algorithm. This demonstrates that new algorithm development outside of traditional radio astronomy is valuable for the new theoretical and practical perspectives gained. These perspectives are timely with the next generation of radio telescopes coming online.
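As a flavour of the convex-optimization machinery alluded to here, the sketch below runs a plain ISTA (proximal-gradient) deconvolution with an $\ell_1$ sparsity penalty. It is a generic textbook example under our own assumptions, not the authors' wide-field or polarimetric algorithms.

```python
import numpy as np

def ista_deconvolve(dirty, psf, lam=0.01, step=None, n_iter=200):
    """Sparse image restoration via ISTA (a generic convex-optimisation example).

    Solves min_x 0.5 * ||dirty - psf (*) x||^2 + lam * ||x||_1, where (*) is a
    periodic FFT-based convolution.  Illustrative only.
    """
    P = np.fft.rfft2(psf, s=dirty.shape)
    if step is None:
        step = 1.0 / np.max(np.abs(P))**2         # 1 / Lipschitz constant of the gradient
    x = np.zeros_like(dirty)
    for _ in range(n_iter):
        resid = np.fft.irfft2(P * np.fft.rfft2(x), s=dirty.shape) - dirty
        grad = np.fft.irfft2(np.conj(P) * np.fft.rfft2(resid), s=dirty.shape)
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)   # soft threshold (L1 prox)
    return x
```

A common refinement is the accelerated FISTA variant, which adds only a momentum step to the same iteration.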
Starting in the middle of November 2002, the CMS experiment undertook an evaluation of the European DataGrid Project (EDG) middleware using its event simulation programs. A joint CMS-EDG task force performed a stress test by submitting a large number of jobs to many distributed sites. The EDG testbed was complemented with additional CMS-dedicated resources. A total of ~ 10000 jobs consisting of two different computational types were submitted from four different locations in Europe over a period of about one month. Nine sites were active, providing integrated resources of more than 500 CPUs and about 5 TB of disk space (with the additional use of two Mass Storage Systems). Descriptions of the adopted procedures, the problems encountered and the corresponding solutions are reported. Results and evaluations of the test, both from the CMS and the EDG perspectives, are described.
In this work, we consider alternative discretizations for PDEs which use expansions involving integral operators to approximate spatial derivatives. These constructions use explicit information within the integral terms, but treat boundary data implicitly, which contributes to the overall speed of the method. This approach is provably unconditionally stable for linear problems and stability has been demonstrated experimentally for nonlinear problems. Additionally, it is matrix-free in the sense that it is not necessary to invert linear systems and iteration is not required for nonlinear terms. Moreover, the scheme employs a fast summation algorithm that yields a method with a computational complexity of $\mathcal{O}(N)$, where $N$ is the number of mesh points along a direction. While much work has been done to explore the theory behind these methods, their practicality in large-scale computing environments is a largely unexplored topic. In this work, we explore the performance of these methods by developing a domain decomposition algorithm suitable for distributed memory systems along with shared memory algorithms. As a first pass, we derive an artificial CFL condition that enforces a nearest-neighbor communication pattern and briefly discuss possible generalizations. We also analyze several approaches for implementing the parallel algorithms by optimizing predominant loop structures and maximizing data reuse. Using a hybrid design that employs MPI and Kokkos for the distributed and shared memory components of the algorithms, respectively, we show that our methods are efficient and can sustain an update rate $> 1\times10^{8}$ DOF/node/s. We provide results that demonstrate the scalability and versatility of our algorithms using several different PDE test problems, including a nonlinear example, which employs an adaptive time-stepping rule.
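The $\mathcal{O}(N)$ fast summation can be illustrated in one dimension for an exponential kernel: because $e^{-\alpha|x_j - s|}$ factorizes across grid cells, the left- and right-moving partial sums obey a two-term recursion, so the whole convolution costs $\mathcal{O}(N)$ instead of $\mathcal{O}(N^2)$. The sketch below uses trapezoidal local weights and free-space boundaries, which are our assumptions, not the scheme's actual quadrature or boundary treatment.

```python
import numpy as np

def fast_exponential_sum(u, dx, alpha):
    """O(N) recursive evaluation of I[j] ~ integral of exp(-alpha*|x_j - s|) u(s) ds.

    Contributions from the left satisfy IL[j] = r * IL[j-1] + (local term),
    with r = exp(-alpha*dx); similarly for IR from the right.  Local integrals
    use the trapezoid rule, so the result matches a direct composite-trapezoid sum.
    """
    N = len(u)
    r = np.exp(-alpha * dx)
    w = 0.5 * dx                               # trapezoidal half-weight per cell endpoint
    IL = np.zeros(N)
    IR = np.zeros(N)
    for j in range(1, N):
        IL[j] = r * IL[j - 1] + w * (r * u[j - 1] + u[j])
    for j in range(N - 2, -1, -1):
        IR[j] = r * IR[j + 1] + w * (r * u[j + 1] + u[j])
    return IL + IR

# Check against the direct O(N^2) evaluation on a small grid
x = np.linspace(0.0, 1.0, 200)
u = np.sin(2 * np.pi * x)
dx, alpha = x[1] - x[0], 25.0
K = np.exp(-alpha * np.abs(x[:, None] - x[None, :]))
wts = np.full(x.size, dx); wts[0] = wts[-1] = 0.5 * dx     # trapezoid weights
print(np.max(np.abs(fast_exponential_sum(u, dx, alpha) - K @ (wts * u))))   # rounding-error level
```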