
High-speed data transfer with FPGAs and QSFP+ modules

Added by Francesca Lo Cicero
Publication date: 2011
Field: Physics
Language: English





We present test results and characterization of a data transmission system based on a latest-generation FPGA and a commercial QSFP+ (Quad Small Form-factor Pluggable Plus) module. The QSFP+ standard defines a hot-pluggable transceiver, available in copper or optical cable assemblies, with an aggregate bandwidth of up to 40 Gbps. We implemented a complete testbench based on a commercial development card hosting an Altera Stratix IV FPGA with 24 serial transceivers running at 8.5 Gbps each, together with a custom mezzanine hosting three QSFP+ modules. We present test results and signal-integrity measurements up to an aggregate bandwidth of 12 Gbps.
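As a rough consistency check (the four-lane structure and the nominal per-lane rate of about 10 Gbps are properties of the QSFP+ standard, not figures taken from this abstract): $4 \times 10\,\mathrm{Gbps} = 40\,\mathrm{Gbps}$ per module, matching the quoted ceiling, while the 24 on-chip transceivers at 8.5 Gbps would in principle allow up to $24 \times 8.5\,\mathrm{Gbps} = 204\,\mathrm{Gbps}$ of raw serial bandwidth, of which the reported measurements exercise an aggregate of 12 Gbps.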


Related research

R. Giordano, S. Perrella, 2018
High-speed serial links implemented in SRAM-based FPGAs have been extensively used in the trigger and data acquisition systems of High Energy Physics experiments. Usually, their application has been restricted to off-detector electronics, mostly due to the sensitivity of SRAM-based FPGAs to radiation faults (single event upsets). However, tolerance to radiation environments can be achieved by adopting dedicated mitigation techniques such as information redundancy, hardware redundancy and configuration scrubbing. In this work, we discuss the design of a bi-directional serial link running at 6.25 Gbps based on a Xilinx Kintex-7 FPGA. The link is protected against single event upsets by means of all the above-mentioned methods. A self-synchronizing scrambler is used for DC balance and data randomization, while the subsequent Reed-Solomon encoder/decoder detects and corrects bursts of errors in the transmitted data. The error correction capability of the line code is further increased by interleaving. In addition, in order to make full use of the available bandwidth and to cope with different rates of radiation-induced faults, the link can modulate the protection level of the Reed-Solomon code. The reliability of the link is further improved by means of modular redundancy on the frame alignment block, and, on the same FPGA, a scrubber repairs corrupted configuration frames in real time. We present test results obtained with the fault injection method and show the performance of the link in terms of mean time between failures (MTBF) and fault tolerance to upsets.
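The self-synchronizing scrambler mentioned above can be sketched in a few lines: each transmitted bit is the data bit XORed with taps of previously transmitted bits, and the receiver rebuilds the same shift-register state from the received stream, so it resynchronizes automatically after a fixed number of correctly received bits. The polynomial used below ($x^{58} + x^{39} + 1$, the one employed by 64b/66b links) is an illustrative assumption; the abstract does not state which polynomial this link uses.

    # Minimal sketch of a self-synchronizing (multiplicative) scrambler.
    # The polynomial x^58 + x^39 + 1 is assumed for illustration only.
    MASK = (1 << 58) - 1

    def scramble(bits, state=0):
        """Each output bit is fed back into the shift-register state."""
        out = []
        for b in bits:
            s = b ^ ((state >> 38) & 1) ^ ((state >> 57) & 1)  # taps x^39, x^58
            out.append(s)
            state = ((state << 1) | s) & MASK
        return out, state

    def descramble(bits, state=0):
        """State is rebuilt from the received (scrambled) bits, so the receiver
        locks onto the transmitter after 58 correctly received bits."""
        out = []
        for s in bits:
            out.append(s ^ ((state >> 38) & 1) ^ ((state >> 57) & 1))
            state = ((state << 1) | s) & MASK
        return out, state

    import random
    data = [random.randint(0, 1) for _ in range(200)]
    tx, _ = scramble(data)
    rx, _ = descramble(tx, state=MASK)   # receiver starts out of sync
    assert rx[58:] == data[58:]          # ...and is locked after 58 bits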
A PC-based high-speed silicon microstrip beam telescope consisting of several independent modules is presented. Every module contains an AC-coupled double-sided silicon microstrip sensor and a complete set of analog and digital signal processing electronics. A digital bus connects the modules with the DAQ PC, and a trigger logic unit coordinates the operation of all modules of the telescope. The system architecture allows easy integration of any kind of device under test into the data acquisition chain. Signal digitization, pedestal correction, hit detection and zero suppression are done in hardware inside the modules, so that the amount of data per event is reduced by a factor of 80 compared to conventional readout systems. In combination with a two-level data acquisition scheme, this allows event rates of up to 7.6 kHz, a factor of 40 faster than conventional VME-based beam telescopes, while maintaining comparable analog performance with signal-to-noise ratios of up to 70:1. The telescope has been tested in the SPS test beam at CERN and has been adopted as the reference instrument for test-beam studies for the ATLAS pixel detector development.
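The on-module data reduction described in this abstract (pedestal correction, hit detection, zero suppression) amounts to keeping only the strips whose pedestal-subtracted signal exceeds a noise-based threshold. A minimal Python sketch, in which the 5-sigma cut, the 768-strip count and the injected signal are illustrative assumptions rather than parameters of this telescope:

    import numpy as np

    def zero_suppress(raw, pedestal, noise, n_sigma=5.0):
        """Return (strip indices, signals) of strips above n_sigma * noise."""
        signal = raw.astype(float) - pedestal      # pedestal correction
        hits = signal > n_sigma * noise            # hit detection
        return np.nonzero(hits)[0], signal[hits]   # zero suppression

    rng = np.random.default_rng(0)
    pedestal = rng.normal(500.0, 20.0, 768)        # per-strip pedestals
    noise = np.full(768, 2.0)                      # per-strip noise (ADC counts)
    raw = pedestal + rng.normal(0.0, 2.0, 768)     # otherwise empty event
    raw[123] += 140.0                              # one simulated particle hit
    indices, signals = zero_suppress(raw, pedestal, noise)
    print(indices, signals)                        # expect only strip 123 to survive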
Resistive switching devices, important for emerging memory and neuromorphic applications, face significant challenges related to the control of delicate filamentary states in the oxide material. As a device switches, its rapid conductivity change is involved in a positive feedback process that would lead to runaway destruction of the cell without current, voltage, or energy limitation. Typically, cells are patterned directly on MOS transistors to limit the current, but this approach is very restrictive, as the necessary integration constrains both the materials available and the fabrication cycle time. In this article we propose an external circuit to cycle resistive memory cells, capturing the full transfer curves while driving the cells in such a way as to suppress runaway transitions. Using this circuit, we demonstrate the acquisition of $10^5$ I-V loops per second without the use of on-wafer current-limiting transistors. This setup brings voltage-sweep measurements to a timescale relevant for applications and enables many new experimental possibilities for device evaluation in a statistical context.
Ultracold neutrons (UCN) with kinetic energies up to 300 neV can be stored in material or magnetic confinements for hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature, e.g. by searching for charge-parity violation via a neutron electric dipole moment, and for providing important parameters for Big Bang nucleosynthesis, e.g. through neutron-lifetime measurements. Further increasing the intensity of UCN sources is crucial for next-generation experiments. Advanced Monte Carlo (MC) simulation codes are important in the optimization of the neutron optics of UCN sources and of experiments, but also in the estimation of systematic effects and in the benchmarking of analysis codes. Here we give a short overview of recent MC simulation activities in this field.
Artificial neural networks are already widely used for physics analysis, but there are only a few applications within low-level hardware triggers, and typically only with small networks. Modern high-end FPGAs offer tera-scale arithmetic performance, and thereby provide a significant number of operations per data set even for MHz-range data rates. We present a bottom-up approach to implementing typical neural network layers, in which we took into account both the special constraints that come from high-performance trigger systems, such as the ATLAS hardware trigger at the LHC, and the need for an efficient implementation. By specifically designing each layer type to match our requirements, we developed a framework that reaches 90 to 100% processing efficiency for large layers, requires only a few extra resources for data flow and control, and offers latencies in the range of tens to hundreds of nanoseconds for entire (deep) networks. Additionally, a toolkit was built around these optimized layer implementations, which facilitates the creation of the FPGA implementation of a trained NN model.
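A toolkit that turns a trained model into an FPGA implementation typically has to quantize weights and activations to fixed point so that each multiply-accumulate maps onto DSP blocks and logic. The sketch below illustrates that step generically; the 16-bit format, the layer sizes and the integer-only dense layer are assumptions for illustration, not details of the framework described in the abstract.

    import numpy as np

    FRAC_BITS = 8      # fractional bits of the fixed-point format (assumed)
    WIDTH = 16         # total bit width per value (assumed)

    def to_fixed(x, frac_bits=FRAC_BITS, width=WIDTH):
        """Round floats to two's-complement fixed-point integers, with saturation."""
        q = np.round(x * (1 << frac_bits)).astype(np.int64)
        lo, hi = -(1 << (width - 1)), (1 << (width - 1)) - 1
        return np.clip(q, lo, hi)

    def dense_fixed(x_q, w_q, b_q, frac_bits=FRAC_BITS):
        """Integer-only dense layer with ReLU; the product carries 2*frac_bits
        fractional bits, so it is shifted back down before the bias is added."""
        acc = (x_q @ w_q) >> frac_bits
        return np.maximum(acc + b_q, 0)

    rng = np.random.default_rng(1)
    w = rng.normal(0.0, 0.5, (16, 8))
    b = rng.normal(0.0, 0.5, 8)
    x = rng.normal(0.0, 1.0, 16)
    ref = np.maximum(x @ w + b, 0)                            # float reference
    out = dense_fixed(to_fixed(x), to_fixed(w), to_fixed(b)) / (1 << FRAC_BITS)
    print(np.max(np.abs(out - ref)))                          # small quantization error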