Open Access
EPJ Web Conf. Volume 245 (2020)
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number 07035, 9 pages
Section 7 - Facilities, Clouds and Containers
Published online 16 November 2020
1. S. Campana, Computing challenges of the future, Update of the European Strategy for Particle Physics, Granada (2019).
2. I. Bird, LHC computing (WLCG): Past, present, and future, Proc. Int. School of Physics “Enrico Fermi”, Varenna (2014).
3. I. Bird, WLCG status report, CERN-RRB-2019-123, WLCG RRB, October 2019.
4. P. K. Sinervo, Computing resources scrutiny group report, CERN-RRB-2019-080, WLCG RRB, October 2019.
5. J. Dongarra, J. L. Martin, J. Worlton, Computer benchmarking: paths and pitfalls, IEEE Spectrum 24, 38-43 (1987).
6. HEP Software Foundation, A Roadmap for HEP Software and Computing R&D for the 2020s, Comput. Softw. Big Sci. 3, 7 (2019).
7. HEPiX Benchmarking WG web site.
8. P. Charpentier, Benchmarking worker nodes using LHCb productions and comparing with HEP-SPEC06, Proc. CHEP2016, San Francisco, J. Phys. Conf. Ser. 898, 082011 (2017).
9. E. McIntosh, Benchmarking computers for HEP, 15th CERN School of Computing, L’Aquila (1992), CERN-CN-92-13.
10. K. M. Dixit, Overview of the SPEC benchmarks, in J. Gray (Ed.), The Benchmark Handbook for Database and Transaction Systems (2nd Edition), Morgan Kaufmann (1993).
11. J. Dongarra, P. Luszczek, A. Petitet, The LINPACK benchmark: past, present and future, Concurrency Computat. Pract. Exper. 15, 803-820 (2003).
12. H. J. Curnow, B. A. Wichmann, A synthetic benchmark, The Computer Journal 19, 43-49 (1976).
13. R. P. Weicker, Dhrystone: a synthetic systems programming benchmark, Comm. ACM 27, 1013-1030 (1984).
14. Standard Performance Evaluation Corporation (SPEC) web site.
15. M. Michelotto et al., A comparison of HEP code with SPEC benchmarks on multicore worker nodes, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 052009 (2010).
16. ALICE Collaboration, ALICE Computing TDR (2005).
17. ATLAS Collaboration, ATLAS Computing TDR (2005).
18. CMS Collaboration, CMS Computing TDR (2005).
19. LHCb Collaboration, LHCb Computing TDR (2005).
20. J. L. Henning, SPEC CPU2006 benchmark descriptions, ACM SIGARCH Comp. Arch. News 34, 1-17 (2006).
21. M. Wong, C++ benchmarks in SPEC CPU2006, ACM SIGARCH Comp. Arch. News 35, 77-83 (2007).
22. G. Benelli et al., The CMSSW benchmarking suite: using HEP code to measure CPU performance, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 052016 (2010).
23. S. Eranian, Perfmon2: a flexible performance monitoring interface for Linux, Proc. OLS2006, Ottawa.
24. A. Hirstius, CPU-level performance monitoring with Perfmon, HEPiX Spring 2008, CERN.
25. A. Nowak, An update on perfmon and the struggle to get into the Linux kernel, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 042048 (2010).
26. D. Giordano et al., Next generation of HEP CPU benchmarks, Proc. CHEP2018, Sofia, EPJ Web of Conf. 214, 08011 (2019).
27. D. Giordano, E. Santorinaiou, Next generation of HEP CPU benchmarks, Proc. ACAT2019, Saas-Fee.
28. S. Muralidharan, D. Smith, Trident: an automated system tool for collecting and analyzing performance counters, Proc. CHEP2018, Sofia, EPJ Web of Conf. 214, 08024 (2019).
29. A. Yasin, A Top-Down method for performance analysis and counters architecture, Proc. 2014 IEEE ISPASS, Monterey.
30. J. Elmsheuser et al., ATLAS Grid workflow performance optimization, Proc. CHEP2018, Sofia, EPJ Web of Conf. 214, 03021 (2019).
31. HEP-Benchmarks project.
32. Docker, What is a container?
33. CernVM-FS Shrinkwrap.
34. P. S. M. Teuber, Efficient unpacking of required software from CernVM-FS, CERN Openlab Report (2019).
35. J. Blomer et al., Distributing LHC application software and conditions databases using the CernVM file system, Proc. CHEP2010, Taipei, J. Phys. Conf. Ser. 331, 042003 (2011).
36. A. De Salvo, F. Brasolin, Benchmarking the ATLAS software through the Kit Validation engine, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 042037 (2010).
37. HEP-Benchmarks project: hep-workloads container registry.
38. G. M. Kurtzer, V. Sochat, M. W. Bauer, Singularity: scientific containers for mobility of compute, PLoS ONE 12, e0177459 (2017).
39. E. Sexton-Kennedy et al., Implementation of a multi-threaded framework for large-scale scientific applications, Proc. ACAT2014, Prague, J. Phys. Conf. Ser. 608, 012034 (2015).
40. P. Calafiura et al., Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework, Proc. CHEP2015, Okinawa, J. Phys. Conf. Ser. 664, 072050 (2015).
41. A. Valassi, Overview of the GPU efforts for WLCG production workloads, Pre-GDB on benchmarking, CERN (2019).
42. R. De Maria et al., SixTrack Version 5, Proc. IPAC2019, Melbourne, J. Phys. Conf. Ser. 1350, 012129 (2019).
43. A. Bocci, Heterogeneous online reconstruction at CMS, to appear in Proc. CHEP2019, Adelaide.
