Open Access

EPJ Web Conf., Volume 245 (2020)
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)

Article Number: 09012
Number of pages: 8
Section: 9 - Exascale Science
DOI: https://doi.org/10.1051/epjconf/202024509012
Published online: 16 November 2020
- HEP Software Foundation, “A Roadmap for HEP Software and Computing R&D for the 2020s”, HSF-CWP-2017-01, arXiv:1712.06982 [physics.comp-ph] (2017).
- Partnership for Advanced Computing in Europe, http://www.prace-ri.eu.
- Exascale Computing Project, https://exascaleproject.org.
- The Worldwide LHC Computing Grid, http://wlcg.web.cern.ch.
- I. Bird, “WLCG preparations for Run 3 and beyond”, 7th Scientific Computing Forum (2019), https://indico.cern.ch/event/851050/contributions/3578170/.
- CMS Offline Software and Computing, “HPC resources integration at CMS”, CMS-NOTE-2020-002; CERN-CMS-NOTE-2020-002.
- CMS Offline Software and Computing, “A closer collaboration between HEP Experiments and HPC centers”, CMS-NOTE-2020-003; CERN-CMS-NOTE-2020-003.
- M. Girone, “Common challenges for HPC integration into LHC computing”, WLCG-MB-2019-01, http://wlcg-docs.web.cern.ch/wlcg-docs/technical_documents/HPC-WLCG-V2-2.pdf (2019).
- The CernVM File System, https://cernvm.cern.ch/portal/filesystem.
- O. Gutsche et al., “Bringing heterogeneity to the CMS software framework”, to be published in these proceedings.
- A. Bocci et al., “Heterogeneous reconstruction: combining an ARM processor with a GPU”, to be published in these proceedings.
- Z. Chen et al., “GPU-based Offline Clustering Algorithm for the CMS High Granularity Calorimeter”, to be published in these proceedings.
- A. Bocci et al., “The CMS Patatrack Project”, FERMILAB-SLIDES-19-010-CD (2019), doi:10.2172/1570206.
- H. Carter Edwards et al., “Kokkos: Enabling manycore performance portability through polymorphic memory access patterns”, Journal of Parallel and Distributed Computing, Volume 74, Issue 12 (2014).
- E. Zenker et al., “Alpaka – An Abstraction Library for Parallel Kernel Acceleration”, 2016 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Chicago, IL (2016).
- A. Bocci et al., “Heterogeneous online reconstruction at CMS”, to be published in these proceedings.
- RUCIO project, https://rucio.cern.ch.
- J. Balcas et al., “Using the glideinWMS System as a Common Resource Provisioning Layer in CMS”, J. Phys.: Conf. Ser. 664 062031 (2015).
- HTCondor public web site, https://research.cs.wisc.edu/htcondor/index.html.
- The Glidein-based Workflow Management System, https://glideinwms.fnal.gov/doc.prd/index.html.
- A. McNab et al., “Running Jobs in the Vacuum”, J. Phys.: Conf. Ser. 513 032065 (2014).
- D. Spiga et al., “Exploiting private and commercial clouds to generate on-demand CMS computing facilities with DODAS”, EPJ Web of Conferences 214, 07027 (2019).
- S. Timm et al., “Virtual machine provisioning, code management, and data movement design for the Fermilab HEPCloud Facility”, J. Phys.: Conf. Ser. 898 052041 (2017).
- J. Flix et al., “Exploiting network restricted compute resources with HTCondor: a CMS experiment experience”, to be published in these proceedings.
- A. Pérez-Calero Yzquierdo et al., “Evolution of the CMS Global Submission Infrastructure for the HL-LHC Era”, to be published in these proceedings.
- National Energy Research Scientific Computing Center (NERSC), https://www.nersc.gov/about/.
- A. Tiradani et al., “Fermilab HEPCloud Facility Decision Engine Design”, FERMILAB-TM-2654-CD, CS-doc-6000 (2017).
- CINECA consortium, https://www.cineca.it/en/hpc.
- T. Boccali et al., “Extension of the INFN Tier-1 on a HPC system”, to be published in these proceedings.
- MareNostrum 4 system architecture, https://www.bsc.es/marenostrum/marenostrum/technical-information.