1996, Journal of VLSI Signal Processing
The future Large Hadron Collider (LHC), to be built at CERN by the turn of the millennium, provides an ample source of challenging real-time computational problems. We report here some results from a collaboration between the CERN EAST (RD-11) group and the DEC-PRL PAM team. We present implementations of the four foremost LHC algorithms on DECPeRLe-1 [1]. Our machine is presently the only one that meets the CERN requirement of a 100 kHz event rate, except for another dedicated FPGA-based machine built for just one of the algorithms. All other implementations based on single- and multiprocessor general-purpose computing systems fall short of computing power, of I/O resources, or of both.
Nuclear Instruments & Methods in Physics Research Section A-accelerators Spectrometers Detectors and Associated Equipment, 1995
The future Large Hadron Collider (LHC), to be built at CERN, presents among other technological challenges a formidable problem of real-time data analysis. At a primary event rate of 40 MHz, a multi-stage trigger system has to analyze data to decide which fraction of events should be preserved on permanent storage for further analysis. We report on implementations of local algorithms for feature extraction as part of triggering, using the detectors of the proposed ATLAS experiment as a model. The algorithms were implemented for a decision frequency of 100 kHz, on different data-driven programmable devices based on structures of field-programmable gate arrays and memories. The implementations were demonstrated at full speed with emulated input, and were also integrated into a prototype detector running in a test beam at CERN in June 1994.
2017
In this thesis, I studied the feasibility of running computer data analysis programs from the Worldwide LHC Computing Grid, in particular large-scale simulations of the ATLAS experiment at the CERN LHC, on current general purpose High Performance Computing (HPC) systems. An approach for integrating HPC systems into the Grid is proposed, which has been implemented and tested on the "Todi" HPC machine at the Swiss National Supercomputing Centre (CSCS). Over the course of the test, more than 500,000 CPU-hours of processing time were provided to ATLAS, which is roughly equivalent to the combined computing power of the two ATLAS clusters at the University of Bern. This showed that current HPC systems can be used to efficiently run large-scale simulations of the ATLAS detector and of the detected physics processes. As a first conclusion of my work, one can argue that, in perspective, running large-scale tasks on a few large machines might be more cost-effective than running on relativ...
Computing in Science & Engineering, 2000
A high-performance computation platform based on field-programmable gate arrays targets nuclear- and particle-physics experiment applications. The system can be scaled to supercomputer-equivalent size for detector data processing by inserting compute nodes into Advanced Telecommunications Computing Architecture (ATCA) crates. Among the case-study results: one ATCA crate can provide a computation capability equivalent to hundreds of commodity PCs for HADES online particle-track reconstruction and Cherenkov ring recognition.
2016
Consolidation and upgrades of accelerator equipment during the first long LHC shutdown period enabled particle collisions at energies almost twice those of the first operational phase. Consequently, the software infrastructure providing vital information for machine operation and its optimisation needs to be updated to keep up with the challenges imposed by the increasing amount of collected data and the complexity of analysis. Current tools, designed more than a decade ago, have proven their reliability by significantly outperforming the initially provisioned workloads, but are unable to scale efficiently to satisfy the growing needs of operators and hardware experts. In this paper we present our progress towards the development of a new workload-driven solution for LHC transient data analysis, based on identified user requirements. An initial setup and study of modern data storage and processing engines appropriate for accelerator data analysis was conducted. First ...
Computer Physics Communications
At the Large Hadron Collider at CERN in Geneva, Switzerland, atomic nuclei are collided at ultrarelativistic energies. Many final-state particles are produced in each collision and their properties are measured by the ALICE detector. The detector signals induced by the produced particles are digitized, leading to data rates in excess of 48 GB/s. The ALICE High Level Trigger (HLT) system pioneered the use of FPGA- and GPU-based algorithms to reconstruct charged-particle trajectories and reduce the data size in real time. The results of the reconstruction of collision events, available online, are used for high-level data-quality and detector-performance monitoring and for real-time, time-dependent detector calibration. The online data compression techniques developed and used in the ALICE HLT have more than quadrupled the amount of data that can be stored for offline event processing.
2015
A pilot project for the use of GPUs (graphics processing units) in online triggering applications for high-energy physics (HEP) experiments is presented. GPUs offer a highly parallel architecture in which most of the chip resources are devoted to computation. Moreover, they deliver large computing power within a limited amount of space and electrical power. The application of online parallel computing on GPUs is shown for the synchronous low-level trigger of the NA62 experiment at CERN. Direct GPU communication using an FPGA-based board has been exploited to reduce the data-transmission latency, and results from a first field test at CERN are highlighted. This work is part of a wider project named GAP (GPU Application Project), intended to study the use of GPUs in real-time applications in both HEP and medical imaging.
In this work we present technical details and recent developments for a computing cluster working in a Grid environment, configured for high-energy physics experiments at the National Institute of Physics and Nuclear Engineering. The main ideas and concepts behind Grid technology are described. Two Virtual Organizations (VOs), LHCb and ILC, that use Grid resources for Monte Carlo production, data analysis and data storage are presented, together with the recently initiated development of their specific tools.
IEEE Transactions on Parallel and Distributed Systems
International high-energy particle physics research centers, like CERN and Fermilab, require extensive studies and simulations to plan for the upcoming upgrades of the world's largest particle accelerators and for the design of future machines, given the technological challenges and tight budgetary constraints. The Beam Longitudinal Dynamics (BLonD) simulator suite incorporates the most detailed and complex physics phenomena in the field of longitudinal beam dynamics, required for providing extremely accurate predictions. Modern challenges in beam dynamics call for longer, larger and more numerous simulation studies to draw meaningful conclusions that will drive the baseline choices for the daily operation of current machines and the design choices of future projects. These studies are extremely time-consuming, and would be impractical to perform without a High-Performance Computing oriented simulator framework. In this article, we first design and evaluate a highly optimized distributed version of BLonD. We combine approximate computing techniques and leverage a dynamic load-balancing scheme to relax synchronization and improve scalability. In addition, we employ GPUs to accelerate the distributed implementation. We evaluate the optimized distributed beam longitudinal dynamics simulator on a supercomputing system and demonstrate speedups of more than two orders of magnitude when run on 32 GPU platforms, with respect to the previous state of the art. By driving a wide range of new studies, the proposed high-performance beam longitudinal dynamics simulator forms an invaluable tool for accelerator physicists.
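The dynamic load-balancing idea mentioned in the abstract can be sketched at a high level: each worker reports how long its last turn took, and the coordinator redistributes particles in proportion to measured speed, so faster workers receive larger chunks. The function name and the proportional policy below are illustrative assumptions, not BLonD's actual implementation.

```python
def rebalance(chunk_sizes, times):
    """Redistribute work proportionally to each worker's measured speed.

    chunk_sizes -- particles each worker handled in the last turn
    times       -- wall-clock seconds each worker spent on that turn
    """
    total = sum(chunk_sizes)
    speeds = [n / t for n, t in zip(chunk_sizes, times)]  # particles/sec
    s = sum(speeds)
    new = [int(total * v / s) for v in speeds]
    new[0] += total - sum(new)   # absorb rounding so the count stays exact
    return new

# toy example: worker 1 finished the same chunk twice as fast as worker 0
print(rebalance([500, 500], [2.0, 1.0]))   # → [334, 666]
```

The rounding adjustment on the first worker keeps the total particle count invariant, which matters when the simulation must remain bit-level reproducible in aggregate.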
arXiv (Cornell University), 2023
Quantum computers offer an intriguing path for a paradigmatic change of computing in the natural sciences and beyond, with the potential for achieving a so-called quantum advantage, namely a significant (in some cases exponential) speed-up of numerical simulations. The rapid development of hardware devices with various realizations of qubits enables the execution of small-scale but representative applications on quantum computers. In particular, the high-energy physics community plays a pivotal role in accessing the power of quantum computing, since the field is a driving source for challenging computational problems. This concerns, on the theoretical side, the exploration of models which are very hard or even impossible to address with classical techniques and, on the experimental side, the enormous data challenge of newly emerging experiments, such as the upgrade of the Large Hadron Collider. In this roadmap paper, led by CERN, DESY and IBM, we provide the status of high-energy physics quantum computations and give examples for theoretical and experimental target benchmark applications, which can be addressed in the near future. Having the IBM 100×100 challenge in mind, where possible, we also provide resource estimates for the examples given using error-mitigated quantum computing.
IEEE Transactions on Parallel and Distributed Systems, 2011
IEEE Transactions on Nuclear Science, 2008
Performance, reliability and scalability in data access are key issues in the context of the computing Grid and of High Energy Physics data processing and analysis applications, in particular considering the large data size and I/O load that a Large Hadron Collider data centre has to support. In this paper we present the technical details and the results of a large-scale validation and performance measurement employing different data-access platforms, namely CASTOR, dCache, GPFS and Scalla/Xrootd. The tests were performed at the CNAF Tier-1, the central computing facility of the Italian National Institute for Nuclear Research (INFN). Our storage back-end was based on Fibre Channel disk-servers organized in a Storage Area Network, with the disk-servers connected to the computing farm via Gigabit LAN. We used 24 disk-servers, 260 TB of raw disk space and 280 worker nodes as computing clients, able to run up to about 1100 concurrent jobs. The aim of the test was to perform sequential and random read/write accesses to the data, as well as more realistic access patterns, in order to evaluate the efficiency, availability, robustness and performance of the various data-access solutions.
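The sequential-versus-random access comparison at the heart of such a storage test can be illustrated with a toy local microbenchmark: read the same fixed-size blocks of a file in order and in shuffled order, and compare throughput. This is a minimal sketch against a local temporary file, not the CNAF test harness; block size and file size are invented for illustration.

```python
import os, random, tempfile, time

BLOCK = 1024 * 1024        # 1 MiB per read (illustrative choice)
N_BLOCKS = 16              # hypothetical small test file (16 MiB)

def make_test_file():
    """Write a throwaway file of N_BLOCKS random 1 MiB blocks."""
    f = tempfile.NamedTemporaryFile(delete=False)
    f.write(os.urandom(BLOCK) * N_BLOCKS)
    f.close()
    return f.name

def throughput_mb_s(path, offsets):
    """Read BLOCK bytes at each offset; return observed MB/s."""
    t0 = time.perf_counter()
    with open(path, "rb") as f:
        for off in offsets:
            f.seek(off)
            f.read(BLOCK)
    return len(offsets) / (time.perf_counter() - t0)

path = make_test_file()
seq = [i * BLOCK for i in range(N_BLOCKS)]
rnd = seq[:]
random.shuffle(rnd)
print(f"sequential: {throughput_mb_s(path, seq):8.1f} MB/s")
print(f"random:     {throughput_mb_s(path, rnd):8.1f} MB/s")
os.remove(path)
```

On spinning disks the gap between the two numbers is dominated by seek latency; a real test like the one above the sketch additionally varies concurrency and client count.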
Advances in Parallel Computing, 2004
We present the apeNEXT project, which is currently developing a massively parallel computer with multi-TFlops performance. Like previous APE machines, the new supercomputer is completely custom designed and is specifically optimized for simulating the theory of strong interactions, quantum chromodynamics (QCD). We assess the performance for key application kernels and make a comparison with other machines used for this kind of simulation. Finally, we give an outlook on future developments.
2009
The continual improvement of semiconductor technology has provided rapid advancements in device frequency and density. Designers of electronics systems for high-energy physics (HEP) have benefited from these advancements, transitioning many designs from fixed-function ASICs to more flexible FPGA-based platforms. Today's FPGA devices provide a significantly higher amount of resources than those available during the initial Large Hadron Collider design phase. To take advantage of the capabilities of future FPGAs in the next generation of HEP experiments, designers must not only anticipate further improvements in FPGA hardware, but must also adopt design tools and methodologies that can scale along with that hardware. In this paper, we outline the major trends in FPGA hardware, describe the design challenges these trends will present to developers of HEP electronics, and discuss a range of techniques that can be adopted to overcome these challenges.
2004
The ATLAS Collaboration at CERN is preparing for the data taking and analysis at the LHC that will start in 2007. Therefore, a series of Data Challenges was started in 2002, whose goals are to validate the Computing Model, the complete software suite and the data model, and to ensure the correctness of the technical choices to be made for the final offline computing environment. A major feature of the first Data Challenge (DC1) was the preparation and deployment of the software required for the production of large event samples as a worldwide distributed activity.
Proceedings Particle Accelerator Conference, 1995
The Los Alamos Accelerator Code Group (LAACG) is a national resource for members of the accelerator community who use and/or develop software for the design and analysis of particle accelerators, beam transport systems, light sources, storage rings, and components of these systems. Below we describe the LAACG's activities in high-performance computing, the maintenance and enhancement of POISSON/SUPERFISH and related codes, and the dissemination of information on the Internet.
International Journal of Modern Physics A, 2005
In this review, the computing challenges facing the current and next generation of high energy physics experiments are discussed. High energy physics computing represents an interesting infrastructure challenge as the use of large-scale commodity computing clusters has increased. The causes and ramifications of these infrastructure challenges are outlined. Increasing requirements, limited physical infrastructure at computing facilities, and limited budgets have driven many experiments to deploy distributed computing solutions to meet the growing computing needs for analysis, reconstruction, and simulation. The current generation of experiments has developed and integrated a number of solutions to facilitate distributed computing. The current work of the running experiments gives an insight into the challenges that will be faced by the next generation of experiments and the infrastructure that will be needed.
IEEE Transactions on Nuclear Science, 2001
Field-Programmable Gate Array, 2017
In this chapter, we describe the design of a field-programmable gate array (FPGA) board capable of acquiring the information coming from a fast digitization of the signals generated in drift chambers. The digitized signals are analyzed using an ad hoc real-time algorithm implemented in the FPGA in order to reduce the data throughput coming from the particle detector.
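The abstract does not specify the reduction algorithm, but a common first step for digitized detector signals is threshold-based zero suppression: keep only the samples that rise above the quiet baseline, together with their positions. The sketch below shows this generic technique on an invented toy waveform; it is not the chapter's actual FPGA algorithm.

```python
def zero_suppress(samples, threshold):
    """Keep only (sample index, ADC value) pairs above threshold,
    discarding the baseline — a generic data-reduction step."""
    return [(i, v) for i, v in enumerate(samples) if v > threshold]

# a toy digitized pulse riding on a quiet baseline
waveform = [0, 1, 0, 12, 30, 14, 2, 0, 0, 1]
print(zero_suppress(waveform, 5))   # → [(3, 12), (4, 30), (5, 14)]
```

In hardware the same comparison runs once per clock cycle per channel, which is why the reduction can keep up with the digitizer's full rate.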
2015
The aim of the GAP project is the deployment of Graphics Processing Units in real-time applications, ranging from online event selection (trigger) in High-Energy Physics to medical imaging reconstruction. The final goal of the project is to demonstrate that GPUs can have a positive impact in sectors that differ in rate, bandwidth, and computational intensity. The most crucial aspects currently under study are the analysis of the total latency of the system, algorithm optimisations, and the integration with the data acquisition systems. In this paper we focus on the application of GPUs in asynchronous trigger systems, employed for the high-level trigger of LHC experiments. The benefit obtained from the GPU deployment is particularly relevant for the foreseen LHC luminosity upgrade, where highly selective algorithms will be crucial to maintain sustainable trigger rates with very high pileup. As a study case, we consider the ATLAS experimental environment and propose a GPU implementation for a typical muon selection in a high-level trigger system. We perform the tests described in this article on a server set up for this purpose, including an NVIDIA graphics accelerator; hence the GPU code has been developed in CUDA.
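At its core, a muon selection of the kind described reduces to applying independent kinematic cuts to many candidates at once, which is exactly the data-parallel shape a GPU kernel exploits (one thread per candidate). The plain-Python sketch below uses invented candidate values and an invented threshold; the actual ATLAS algorithm and cut values are not given in the abstract.

```python
# hypothetical muon candidates: (pseudorapidity, transverse momentum in GeV)
candidates = [(0.3, 3.2), (1.1, 25.1), (2.0, 7.8), (0.7, 41.0), (1.8, 12.5)]
PT_CUT = 20.0   # illustrative threshold, not an actual ATLAS trigger value

def select_muons(cands, pt_cut):
    """Each candidate is tested independently of the others — the
    embarrassingly parallel pattern a GPU maps one thread to."""
    return [c for c in cands if c[1] > pt_cut]

print(select_muons(candidates, PT_CUT))   # → [(1.1, 25.1), (0.7, 41.0)]
```

Because no candidate's decision depends on another's, the selection scales with the number of GPU threads rather than with serial CPU clock speed, which is what makes it attractive under high pileup.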