Open Access
EPJ Web Conf.
Volume 245, 2020
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number 09012
Number of page(s) 8
Section 9 - Exascale Science
DOI https://doi.org/10.1051/epjconf/202024509012
Published online 16 November 2020
  1. HEP Software Foundation, “A Roadmap for HEP Software and Computing R&D for the 2020s”, HSF-CWP-2017-01, arXiv:1712.06982 [physics.comp-ph] (2017).
  2. Partnership for Advanced Computing in Europe, http://www.prace-ri.eu.
  3. Exascale Computing Project, https://exascaleproject.org.
  4. The Worldwide LHC Computing Grid, http://wlcg.web.cern.ch.
  5. I. Bird, “WLCG preparations for Run 3 and beyond”, 7th Scientific Computing Forum (2019), https://indico.cern.ch/event/851050/contributions/3578170/.
  6. CMS Offline, Software and Computing, “HPC resources integration at CMS”, CMS-NOTE-2020-002; CERN-CMS-NOTE-2020-002.
  7. CMS Offline, Software and Computing, “A closer collaboration between HEP Experiments and HPC centers”, CMS-NOTE-2020-003; CERN-CMS-NOTE-2020-003.
  8. M. Girone, “Common challenges for HPC integration into LHC computing”, WLCG-MB-2019-01, http://wlcg-docs.web.cern.ch/wlcg-docs/technical_documents/HPC-WLCG-V2-2.pdf (2019).
  9. The CernVM File System, https://cernvm.cern.ch/portal/filesystem.
  10. O. Gutsche et al., “Bringing heterogeneity to the CMS software framework”, to be published in these proceedings.
  11. A. Bocci et al., “Heterogeneous reconstruction: combining an ARM processor with a GPU”, to be published in these proceedings.
  12. Z. Chen et al., “GPU-based Offline Clustering Algorithm for the CMS High Granularity Calorimeter”, to be published in these proceedings.
  13. A. Bocci et al., “The CMS Patatrack Project”, United States: N. p. (2019), doi:10.2172/1570206, FERMILAB-SLIDES-19-010-CD.
  14. H. Carter Edwards et al., “Kokkos: Enabling manycore performance portability through polymorphic memory access patterns”, Journal of Parallel and Distributed Computing, Volume 74, Issue 12 (2014).
  15. E. Zenker et al., “Alpaka – An Abstraction Library for Parallel Kernel Acceleration”, 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Chicago, IL (2016).
  16. A. Bocci et al., “Heterogeneous online reconstruction at CMS”, to be published in these proceedings.
  17. RUCIO project, https://rucio.cern.ch.
  18. J. Balcas et al., “Using the glideinWMS System as a Common Resource Provisioning Layer in CMS”, J. Phys.: Conf. Ser. 664, 062031 (2015).
  19. HTCondor public web site, https://research.cs.wisc.edu/htcondor/index.html.
  20. The Glidein-based Workflow Management System, https://glideinwms.fnal.gov/doc.prd/index.html.
  21. A. McNab et al., “Running Jobs in the Vacuum”, J. Phys.: Conf. Ser. 513, 032065 (2014).
  22. D. Spiga et al., “Exploiting private and commercial clouds to generate on-demand CMS computing facilities with DODAS”, EPJ Web of Conferences 214, 07027 (2019).
  23. S. Timm et al., “Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility”, J. Phys.: Conf. Ser. 898, 052041 (2017).
  24. J. Flix et al., “Exploiting network restricted compute resources with HTCondor: a CMS experiment experience”, to be published in these proceedings.
  25. A. Pérez-Calero Yzquierdo et al., “Evolution of the CMS Global Submission Infrastructure for the HL-LHC Era”, to be published in these proceedings.
  26. National Energy Research Scientific Computing Center (NERSC), https://www.nersc.gov/about/.
  27. A. Tiradani et al., “Fermilab HEPCloud Facility Decision Engine Design”, FERMILAB-TM-2654-CD, CS-doc-6000 (2017).
  28. CINECA consortium, https://www.cineca.it/en/hpc.
  29. T. Boccali et al., “Extension of the INFN Tier-1 on a HPC system”, to be published in these proceedings.
  30. MareNostrum 4 system architecture, https://www.bsc.es/marenostrum/marenostrum/technical-information.
