Open Access
EPJ Web Conf. 245, 07035 (2020)
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Section 7 - Facilities, Clouds and Containers
Number of pages: 9
DOI: https://doi.org/10.1051/epjconf/202024507035
Published online: 16 November 2020
1. S. Campana, Computing challenges of the future, Update of European Strategy for Particle Physics, Granada (2019). https://indico.cern.ch/event/808335/contributions/3365192
2. I. Bird, LHC computing (WLCG): Past, present, and future, Proc. Int. School of Physics “Enrico Fermi”, Varenna (2014). https://doi.org/10.3254/978-1-61499-643-9-1
3. I. Bird, WLCG status report, CERN-RRB-2019-123, WLCG RRB, October 2019. https://indico.cern.ch/event/843657/contributions/3542198
4. P. K. Sinervo, Computing resources scrutiny group report, CERN-RRB-2019-080, WLCG RRB, October 2019. https://indico.cern.ch/event/843657/contributions/3542201
5. J. Dongarra, J. L. Martin, J. Worlton, Computer benchmarking: paths and pitfalls, IEEE Spectrum 24, 38-43 (1987). https://doi.org/10.1109/MSPEC.1987.6448963
6. HEP Software Foundation, A Roadmap for HEP Software and Computing R&D for the 2020s, Comput. Softw. Big Sci. 3, 7 (2019). https://doi.org/10.1007/s41781-018-0018-8
7. HEPiX Benchmarking WG web site. https://w3.hepix.org/benchmarking.html
8. P. Charpentier, Benchmarking worker nodes using LHCb productions and comparing with HEP-SPEC06, Proc. CHEP2016, San Francisco, J. Phys. Conf. Ser. 898, 082011 (2017). https://doi.org/10.1088/1742-6596/898/8/082011
9. E. McIntosh, Benchmarking computers for HEP, 15th CERN School of Computing, L’Aquila (1992), CERN-CN-92-13. https://doi.org/10.5170/CERN-1993-003.186
10. K. M. Dixit, Overview of the SPEC Benchmarks, in Jim Gray (Ed.), The Benchmark Handbook for Database and Transaction Systems (2nd Edition), Morgan Kaufmann, 1993. https://jimgray.azurewebsites.net/benchmarkhandbook/toc.htm
11. J. Dongarra, P. Luszczek, A. Petitet, The LINPACK Benchmark: past, present and future, Conc. Comp. Pract. Exper. 15, 803-820 (2003). https://doi.org/10.1002/cpe.728
12. H. J. Curnow, B. A. Wichmann, A synthetic benchmark, The Computer Journal 19, 43-49 (1976). https://doi.org/10.1093/comjnl/19.1.43
13. R. P. Weicker, Dhrystone: a synthetic systems programming benchmark, Comm. ACM 27, 1013-1030 (1984). https://doi.org/10.1145/358274.358283
14. Standard Performance Evaluation Corporation (SPEC) web site. https://spec.org
15. M. Michelotto et al., A comparison of HEP code with SPEC benchmarks on multicore worker nodes, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 052009 (2010). https://doi.org/10.1088/1742-6596/219/5/052009
16. ALICE Coll., ALICE Computing TDR (2005). https://cds.cern.ch/record/832753
17. ATLAS Coll., ATLAS Computing TDR (2005). https://cds.cern.ch/record/837738
18. CMS Coll., CMS Computing TDR (2005). https://cds.cern.ch/record/838359
19. LHCb Coll., LHCb Computing TDR (2005). https://cds.cern.ch/record/835156
20. J. L. Henning, SPEC CPU2006 benchmark descriptions, ACM SIGARCH Comp. Arch. News 34, 1-17 (2006). https://doi.org/10.1145/1186736.1186737
21. M. Wong, C++ benchmarks in SPEC CPU2006, ACM SIGARCH Comp. Arch. News 35, 77-83 (2007). https://doi.org/10.1145/1241601.1241617
22. G. Benelli et al., The CMSSW benchmarking suite: using HEP code to measure CPU performance, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 052016 (2010). https://doi.org/10.1088/1742-6596/219/5/052016
23. S. Eranian, Perfmon2: a flexible performance monitoring interface for Linux, Proc. OLS2006, Ottawa. https://www.kernel.org/doc/ols/2006/ols2006v1-pages-269-288.pdf
24. A. Hirstius, CPU-level performance monitoring with Perfmon, HEPiX Spring 2008, CERN. https://indico.cern.ch/event/27391/contributions/613843
25. A. Nowak, An update on perfmon and the struggle to get into the Linux kernel, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 042048 (2010). https://doi.org/10.1088/1742-6596/219/4/042048
26. D. Giordano et al., Next Generation of HEP CPU Benchmarks, Proc. CHEP2018, Sofia, EPJ Web of Conf. 214, 08011 (2019). https://doi.org/10.1051/epjconf/201921408011
27. D. Giordano, E. Santorinaiou, Next Generation of HEP CPU Benchmarks, Proc. ACAT2019, Saas-Fee. https://indico.cern.ch/event/708041/contributions/3276257
28. S. Muralidharan, D. Smith, Trident: An Automated System Tool for Collecting and Analyzing Performance Counters, Proc. CHEP2018, Sofia, EPJ Web of Conf. 214, 08024 (2019). https://doi.org/10.1051/epjconf/201921408024
29. A. Yasin, A Top-Down method for performance analysis and counters architecture, Proc. 2014 IEEE ISPASS, Monterey. https://doi.org/10.1109/ISPASS.2014.6844459
30. J. Elmsheuser et al., ATLAS Grid Workflow Performance Optimization, Proc. CHEP2018, Sofia, EPJ Web of Conf. 214, 03021 (2019). https://doi.org/10.1051/epjconf/201921403021
31. HEP-Benchmarks project. https://gitlab.cern.ch/hep-benchmarks
32. Docker, What is a container? https://www.docker.com/resources/what-container
33. CernVM-FS Shrinkwrap. https://cvmfs.readthedocs.io/en/stable/cpt-shrinkwrap.html
34. P. S. M. Teuber, Efficient unpacking of required software from CERNVM-FS, CERN Openlab Report (2019). https://doi.org/10.5281/zenodo.2574461
35. J. Blomer et al., Distributing LHC application software and conditions databases using the CernVM file system, Proc. CHEP2010, Taipei, J. Phys. Conf. Ser. 331, 042003 (2011). https://doi.org/10.1088/1742-6596/331/4/042003
36. A. De Salvo, F. Brasolin, Benchmarking the ATLAS software through the Kit Validation engine, Proc. CHEP2009, Prague, J. Phys. Conf. Ser. 219, 042037 (2010). https://doi.org/10.1088/1742-6596/219/4/042037
37. HEP-Benchmarks project: hep-workloads container registry. https://gitlab.cern.ch/hep-benchmarks/hep-workloads/container_registry
38. G. M. Kurtzer, V. Sochat, M. W. Bauer, Singularity: Scientific containers for mobility of compute, PLoS ONE 12, e0177459 (2017). https://doi.org/10.1371/journal.pone.0177459
39. E. Sexton-Kennedy et al., Implementation of a Multi-threaded Framework for Large-scale Scientific Applications, Proc. ACAT2014, Prague, J. Phys. Conf. Ser. 608, 012034 (2015). https://doi.org/10.1088/1742-6596/608/1/012034
40. P. Calafiura et al., Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework, Proc. CHEP2015, Okinawa, J. Phys. Conf. Ser. 664, 072050 (2015). https://doi.org/10.1088/1742-6596/664/7/072050
41. A. Valassi, Overview of the GPU efforts for WLCG production workloads, Pre-GDB on benchmarking, CERN (2019). https://indico.cern.ch/event/739897/contributions/3559134
42. R. De Maria et al., SixTrack Version 5, Proc. IPAC2019, Melbourne, J. Phys. Conf. Ser. 1350, 012129 (2019). https://doi.org/10.1088/1742-6596/1350/1/012129
43. A. Bocci, Heterogeneous online reconstruction at CMS, to appear in Proc. CHEP2019, Adelaide. https://indico.cern.ch/event/773049/contributions/3474336
