Open Access
EPJ Web Conf.
Volume 251, 2021
25th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2021)
Article Number 03034
Number of page(s) 10
Section Offline Computing
DOI https://doi.org/10.1051/epjconf/202125103034
Published online 23 August 2021
  1. B. Worpitz, Investigating performance portability of a highly scalable particle-in-cell simulation code on various multi-core architectures (2015)
  2. E. Zenker, B. Worpitz, R. Widera, A. Huebl, G. Juckeland, A. Knüpfer, W.E. Nagel, M. Bussmann, Alpaka - An Abstraction Library for Parallel Kernel Acceleration (IEEE Computer Society, 2016), 1602.08477
  3. A. Matthes, R. Widera, E. Zenker, B. Worpitz, A. Huebl, M. Bussmann, Tuning and optimization for a variety of many-core architectures without changing a single line of implementation code using the Alpaka library (2017), 1706.10086
  4. H.C. Edwards, C.R. Trott, D. Sunderland, Journal of Parallel and Distributed Computing 74, 3202 (2014)
  5. D.A. Beckingsale, J. Burmark, R. Hornung, H. Jones, W. Killian, A.J. Kunen, O. Pearce, P. Robinson, B.S. Ryujin, T.R.W. Scogland, RAJA: Portable Performance for Large-Scale Scientific Applications (2019), IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC), p. 71
  6. RAJA Performance Portability Layer, https://github.com/LLNL/RAJA (2021), accessed: 2021-02-07
  7. The Khronos SYCL Working Group, SYCL 2020 Specification (revision 2) (2021)
  8. triSYCL, https://github.com/triSYCL/triSYCL (2021), accessed: 2021-02-07
  9. A. Alpay, V. Heuveline, SYCL beyond OpenCL: The Architecture, Current State and Future Direction of HipSYCL, in Proceedings of the International Workshop on OpenCL (Association for Computing Machinery, New York, NY, USA, 2020), IWOCL '20, https://doi.org/10.1145/3388333.3388658
  10. ComputeCpp, https://developer.codeplay.com/products/computecpp/ce/home (2021), accessed: 2021-02-07
  11. Intel oneAPI DPC++/C++ compiler, https://software.intel.com/content/www/us/en/develop/tools/oneapi/components/dpc-compiler.html (2021), accessed: 2021-02-07
  12. OpenMP Architecture Review Board, OpenMP Application Programming Interface, version 5.1 (2020)
  13. OpenACC-Standard.org, The OpenACC Application Programming Interface, version 3.1 (2020)
  14. ISO/IEC 14882:2020, Programming languages - C++ (2020)
  15. A. Bocci, V. Innocente, M. Kortelainen, F. Pantaleo, M. Rovere, Front. Big Data 3, 601728 (2020), 2008.13461
  16. CMS Collaboration, JINST 3, S08004 (2008)
  17. L. Evans, P. Bryant, JINST 3, S08001 (2008)
  18. C.D. Jones, M. Paterno, J. Kowalkowski, L. Sexton-Kennedy, W. Tanenbaum, The New CMS Event Data Model and Framework, in Proceedings of International Conference on Computing in High Energy and Nuclear Physics (CHEP06) (2006)
  19. C.D. Jones, E. Sexton-Kennedy, J. Phys.: Conf. Series 513, 022034 (2014)
  20. C.D. Jones, L. Contreras, P. Gartung, D. Hufnagel, L. Sexton-Kennedy, J. Phys.: Conf. Series 664, 072026 (2015)
  21. C.D. Jones, J. Phys.: Conf. Series 898, 042008 (2017)
  22. oneAPI Threading Building Blocks, https://github.com/oneapi-src/oneTBB (2021), accessed: 2021-02-07
  23. CMS Offline Software and Computing, Evolution of the CMS computing model towards phase-2 (2021), CMS-NOTE-2021-001, https://cds.cern.ch/record/2751565
  24. A. Bocci, D. Dagenhart, V. Innocente, C. Jones, M. Kortelainen, F. Pantaleo, M. Rovere, EPJ Web Conf. 245, 05009 (2020)
  25. Standalone Patatrack pixel tracking, https://github.com/cms-patatrack/pixeltrack-standalone/ (2021), accessed: 2021-02-07
  26. CMS Collaboration, TTToHadronic_TuneCP5_13TeV-powheg-pythia8 in FEVTDEBUGHLT format for 2018 collision data. CERN Open Data Portal, DOI: 10.7483/OPENDATA.CMS.GOB0.0LEW (2019)
  27. J.P. Wellisch, C. Williams, S. Ashby, SCRAM: Software configuration and management for the LHC Computing Grid project, in Proceedings of International Conference on Computing in High Energy and Nuclear Physics (CHEP03) (2003), p. TUJP001, cs/0306014
  28. CUB, https://nvlabs.github.io/cub/ (2021), accessed: 2021-02-07
