The ALICE High Level Trigger has to process data online in order to select interesting (sub)events or to compress data efficiently by modeling techniques. Focusing on the main data source, the Time Projection Chamber (TPC), we present two pattern recognition methods under investigation: a sequential approach (cluster finder and track follower) and an iterative approach (track candidate finder and cluster deconvoluter). We show that the former is suited for pp and low-multiplicity PbPb collisions, whereas the latter might be applicable for high-multiplicity PbPb collisions of dN/dy > 3000. Based on the developed tracking schemes, we show that using modeling techniques a compression factor of around 10 might be achievable.
The ALICE High Level Trigger has to process data online in order to select interesting (sub)events or to compress data efficiently by modeling techniques. Focusing on the main data source, the Time Projection Chamber (TPC), we present two pattern recognition methods under investigation: a sequential approach (cluster finder and track follower) and an iterative approach (track candidate finder and cluster deconvoluter). We show that the former is suited for pp and low-multiplicity PbPb collisions, whereas the latter might be applicable for high-multiplicity PbPb collisions, if it turns out that more than 8000 charged particles would have to be reconstructed inside the TPC. Based on the developed tracking schemes, we show that using modeling techniques a compression factor of around 10 might be achievable.
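To illustrate the sequential approach, the following is a minimal C++ sketch of a track-following step: starting from a seed on the outer pad rows, the nearest unused cluster on each successive row within a search window is attached to the track candidate. The straight-line extrapolation, the window size and all names are illustrative assumptions, not the ALICE HLT implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical reconstructed space point on a TPC pad row.
struct Cluster {
    int row;      // pad-row index (0 = innermost)
    double y, z;  // local coordinates on the pad row [cm]
    bool used;    // already assigned to a track candidate
};

// Follow a track candidate from the outer pad rows inwards, attaching on
// every row the nearest unused cluster within a fixed search window around
// the extrapolated position. A linear extrapolation is used here for
// simplicity; a realistic follower would propagate a helix.
std::vector<Cluster*> followTrack(std::vector<Cluster>& clusters,
                                  Cluster& seedOuter, Cluster& seedInner,
                                  double window) {
    std::vector<Cluster*> track{&seedOuter, &seedInner};
    double dy = seedInner.y - seedOuter.y;   // local slope per row
    double dz = seedInner.z - seedOuter.z;

    for (int row = seedInner.row - 1; row >= 0; --row) {
        const Cluster& last = *track.back();
        double yPred = last.y + dy;          // extrapolate to the next row
        double zPred = last.z + dz;

        Cluster* best = nullptr;
        double bestDist = window;
        for (auto& c : clusters) {
            if (c.used || c.row != row) continue;
            double d = std::hypot(c.y - yPred, c.z - zPred);
            if (d < bestDist) { bestDist = d; best = &c; }
        }
        if (!best) break;                    // candidate lost or leaves acceptance
        best->used = true;
        dy = best->y - last.y;               // update the local direction
        dz = best->z - last.z;
        track.push_back(best);
    }
    return track;
}

int main() {
    // Toy event: one straight track sampled on 10 pad rows.
    std::vector<Cluster> clusters;
    for (int r = 0; r < 10; ++r)
        clusters.push_back({r, 1.0 * r, 0.5 * r, false});

    auto track = followTrack(clusters, clusters[9], clusters[8], 0.5);
    std::printf("attached %zu clusters\n", track.size());
    return 0;
}
```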
The High Level Trigger (HLT) of the ALICE experiment requires massively parallel computing. One of the main tasks of the HLT system is two-dimensional cluster finding on raw data of the Time Projection Chamber (TPC), which is the main data source of ALICE. To reduce the number of computing nodes needed in the HLT farm, FPGAs, which are an intrinsic part of the system, will be utilized for this task. VHDL code implementing the Fast Cluster Finder algorithm has been written, a testbed for functional verification of the code has been developed, and the code has been synthesized.
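The VHDL itself is outside the scope of an abstract, but one plausible form of the two-dimensional cluster-finding logic can be sketched in C++: above-threshold samples on each pad are grouped into time sequences, sequences on adjacent pads that overlap in time are merged, and the charge-weighted centre of gravity in pad and time is computed for every finished cluster. The container layout, threshold and merging rule below are assumptions for illustration, not the actual firmware interface.

```cpp
#include <cstdio>
#include <vector>

struct Open {                     // cluster still growing across pads
    int tMin, tMax;               // time range of its most recent sequence
    double q = 0, qt = 0, qp = 0; // sums of charge, charge*time, charge*pad
    bool extended = false;        // matched by a sequence on the current pad
};

struct Cluster { double pad, time, charge; };

std::vector<Cluster> findClusters(const std::vector<std::vector<int>>& adc,
                                  int threshold) {
    std::vector<Cluster> out;
    std::vector<Open> open;       // clusters ending on the previous pad

    auto close = [&](const Open& o) {
        out.push_back({o.qp / o.q, o.qt / o.q, o.q});  // centre of gravity
    };

    for (size_t pad = 0; pad < adc.size(); ++pad) {
        for (auto& o : open) o.extended = false;
        std::vector<Open> next;

        // Scan the pad for contiguous runs of samples above threshold.
        size_t t = 0;
        while (t < adc[pad].size()) {
            if (adc[pad][t] <= threshold) { ++t; continue; }
            Open seq;
            seq.tMin = static_cast<int>(t);
            for (; t < adc[pad].size() && adc[pad][t] > threshold; ++t) {
                double q = adc[pad][t];
                seq.q += q; seq.qt += q * t; seq.qp += q * pad;
            }
            seq.tMax = static_cast<int>(t) - 1;

            // Merge with an overlapping cluster from the previous pad, if any.
            for (auto& o : open) {
                if (!o.extended && seq.tMin <= o.tMax && seq.tMax >= o.tMin) {
                    seq.q += o.q; seq.qt += o.qt; seq.qp += o.qp;
                    o.extended = true;
                    break;
                }
            }
            next.push_back(seq);
        }

        // Clusters that were not extended on this pad are finished.
        for (auto& o : open)
            if (!o.extended) close(o);
        open = std::move(next);
    }
    for (auto& o : open) close(o);            // flush after the last pad
    return out;
}

int main() {
    // Toy charge distribution: one cluster spread over two pads.
    std::vector<std::vector<int>> adc = {
        {0, 0, 0, 0, 0, 0},
        {0, 0, 5, 9, 4, 0},
        {0, 0, 3, 6, 2, 0},
        {0, 0, 0, 0, 0, 0},
    };
    for (const auto& c : findClusters(adc, 1))
        std::printf("pad %.2f  time %.2f  charge %.0f\n", c.pad, c.time, c.charge);
    return 0;
}
```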
The ALICE High-Level Trigger processes data online, to either select interesting (sub-)events or to compress data efficiently by modeling techniques. Focusing on the main data source, the Time Projection Chamber, the architecture of the system and the current state of the tracking and compression methods are outlined.
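One way such modeling-based compression can work is to store, for every cluster associated with a reconstructed track, only its small deviation from the track model, quantized to a fixed step. The C++ sketch below illustrates the principle; the step size and the 8-bit residual code are arbitrary choices for this example and not the parameters used in the HLT studies quoted above.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Toy illustration of track-model-based cluster compression: instead of the
// full cluster coordinate, only the residual with respect to the fitted
// track is stored, quantized to a fixed step (values are assumptions).
constexpr double kStep = 0.05;   // quantization step [cm]
using Packed = int8_t;           // 8-bit residual code

Packed pack(double cluster, double trackModel) {
    double r = std::round((cluster - trackModel) / kStep);
    r = std::max(-127.0, std::min(127.0, r));   // clamp to the 8-bit range
    return static_cast<Packed>(r);
}

double unpack(Packed code, double trackModel) {
    return trackModel + code * kStep;
}

int main() {
    // One cluster coordinate and the corresponding track prediction [cm].
    double cluster = 83.137, model = 83.110;

    Packed code = pack(cluster, model);
    double restored = unpack(code, model);

    std::printf("original %.3f cm, restored %.3f cm, stored in %zu byte(s)\n",
                cluster, restored, sizeof(code));
    return 0;
}
```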
The High Level Trigger (HLT) system of the ALICE experiment is an online event filter and trigger system designed for input bandwidths of up to 25 GB/s at event rates of up to 1 kHz. The system is designed as a scalable PC cluster comprising several hundred nodes. The transport of data in the system is handled by an object-oriented data flow framework operating on the basis of the publisher-subscriber principle; it is fully pipelined, with minimal processing overhead and communication latency in the cluster. In this paper, we report the latest measurements in which this framework has been operated on five different sites over a global north-south link extending more than 10,000 km, processing a "real-time" data flow.
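A minimal in-process C++ sketch of the publisher-subscriber principle is given below: a publisher announces a data block to all attached subscribers, and processing stages can be chained into a pipeline by republishing their output. The class names and interfaces are illustrative only; the actual framework avoids payload copies between components, whereas this sketch simply passes a reference within one process.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Minimal sketch of a publisher-subscriber data flow (illustrative names).
struct DataBlock {
    std::string type;                      // e.g. "TPC raw", "cluster list"
    std::vector<unsigned char> payload;
};

class Subscriber {
public:
    virtual ~Subscriber() = default;
    virtual void newEvent(const DataBlock& block) = 0;
};

class Publisher {
public:
    void subscribe(Subscriber* s) { subscribers_.push_back(s); }
    void publish(const DataBlock& block) {
        for (auto* s : subscribers_) s->newEvent(block);   // no payload copy
    }
private:
    std::vector<Subscriber*> subscribers_;
};

// A processing stage: subscribes to its input and republishes its output.
class ClusterFinderStage : public Subscriber {
public:
    Publisher output;
    void newEvent(const DataBlock& block) override {
        DataBlock clusters{"cluster list", {}};
        clusters.payload.resize(block.payload.size() / 4);  // dummy "processing"
        output.publish(clusters);
    }
};

class PrinterStage : public Subscriber {
public:
    void newEvent(const DataBlock& block) override {
        std::printf("received '%s' (%zu bytes)\n",
                    block.type.c_str(), block.payload.size());
    }
};

int main() {
    Publisher readout;                     // stands in for a front-end publisher
    ClusterFinderStage finder;
    PrinterStage printer;

    readout.subscribe(&finder);
    finder.output.subscribe(&printer);

    readout.publish({"TPC raw", std::vector<unsigned char>(1024)});
    return 0;
}
```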
The ALICE experiment at the LHC is equipped with an electromagnetic calorimeter (EMCal) designed to enhance its capabilities for jet measurement. In addition, the EMCal enables triggering on high energy jets. Based on the previous development made for the Photon Spectrometer (PHOS) level-0 trigger, a specific electronic upgrade was designed in order to allow fast triggering on high energy jets (level-1). This development was made possible by using the latest generation of FPGAs, which can deal with the instantaneous incoming data rate of 26 Gbit/s and process it in less than 4 μs.
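In essence, a level-1 jet decision of this kind sums the energy deposited in overlapping windows of calorimeter towers and compares each sum with a programmable threshold; the C++ sketch below shows the principle on a small tower grid. The window size, grid dimensions and threshold are assumptions for illustration, not the actual EMCal trigger configuration.

```cpp
#include <cstdio>
#include <vector>

// Illustrative sliding-window sum over a calorimeter tower grid: the
// energies of all towers inside an n x n window are added, and the sum is
// compared with a threshold to issue a jet trigger (parameters assumed).
bool jetTrigger(const std::vector<std::vector<double>>& tower,
                int window, double threshold) {
    int rows = static_cast<int>(tower.size());
    int cols = rows ? static_cast<int>(tower[0].size()) : 0;

    for (int r = 0; r + window <= rows; ++r) {
        for (int c = 0; c + window <= cols; ++c) {
            double sum = 0.0;
            for (int i = 0; i < window; ++i)
                for (int j = 0; j < window; ++j)
                    sum += tower[r + i][c + j];
            if (sum > threshold) return true;   // fire on the first patch above threshold
        }
    }
    return false;
}

int main() {
    // Toy 8x8 tower grid with one energetic 2x2 region [GeV].
    std::vector<std::vector<double>> tower(8, std::vector<double>(8, 0.1));
    tower[3][4] = 6.0; tower[3][5] = 5.0; tower[4][4] = 4.0; tower[4][5] = 3.0;

    std::printf("jet trigger fired: %s\n",
                jetTrigger(tower, 4, 10.0) ? "yes" : "no");
    return 0;
}
```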