Open Access
EPJ Web Conf., Volume 245 (2020)
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number: 09009
Number of pages: 13
Section: 9 - Exascale Science
DOI: https://doi.org/10.1051/epjconf/202024509009
Published online: 16 November 2020
  1. WLCG, https://wlcg.web.cern.ch/
  2. T. Boccali, "Computing models in high energy physics", Reviews in Physics 4 (2019) 100034, https://doi.org/10.1016/j.revip.2019.100034
  3. CINECA Marconi, http://www.hpc.cineca.it/hardware/marconi
  4. top500, https://www.top500.org/
  5. Omni-Path, https://newsroom.intel.com/news-releases/intel-architects-high-performance-computing-system-designs-to-bring-power-of-supercomputing-mainstream/#gs.wjj1hy
  6. PRACE, http://www.prace-ri.eu/
  7. LHC Collaborations, "Summary of the cross-experiment HPC workshop", https://indico.cern.ch/event/811997/attachments/1862943/3062278/HPCLHCC.docx.pdf
  8. The CMS Collaboration, "HPC resources integration at CMS", https://cds.cern.ch/record/2707936/files/NOTE2020_002.pdf
  9. Singularity, https://singularity.lbl.gov/
  10. SLURM, https://slurm.schedmd.com/documentation.html
  11. HTCondor, https://research.cs.wisc.edu/htcondor/
  12. ALICE Collaboration, "Technical Design Report for the Upgrade of the Online-Offline Computing System", CERN-LHCC-2015-006, ALICE-TDR-019, https://cds.cern.ch/record/2011297/
  13. eXtreme DataCloud, http://www.extreme-datacloud.eu/
  14. S. Agostinelli et al., "GEANT4 - a simulation toolkit", Nucl. Instr. Meth. A 506 (2003) 250-303
  15. T. Maeno et al., "Overview of ATLAS PanDA Workload Management", J. Phys. Conf. Ser. 331
  16. A. Anisenkov et al., "AGIS: Integration of new technologies used in ATLAS Distributed Computing", J. Phys. Conf. Ser. 898
  17. GlideinWMS, https://glideinwms.fnal.gov/doc.prd/index.html
  18. J. Balcas et al., "Stability and scalability of the CMS Global Pool: Pushing HTCondor and glideinWMS to new limits", J. Phys. Conf. Ser. 898
  19. Intel TBB, https://github.com/intel/tbb
  20. Amdahl’s law, https://www.sciencedirect.com/topics/computer-science/amdahls-law
  21. F. Stagni et al., DiracGrid/dirac: v6r20p15 (2018), DOI: 10.5281/zenodo.1451647
  22. F. Stagni, "Integrating LHCb workflows on HPC resources: status and strategies", CHEP 2019, Adelaide, https://indico.cern.ch/event/773049/contributions/3474807
  23. D. Müller, "Gaussino: a Gaudi-based core simulation framework", CHEP 2019, Adelaide, https://indico.cern.ch/event/773049/contributions/3474740
  24. S. Dal Pra et al., "Elastic CNAF DataCenter extension via opportunistic resources", DOI: 10.22323/1.270.0031
  25. L. Dell’Agnello et al., "INFN Tier-1: a distributed site", EPJ Web Conf. 214 (2019) 08002, DOI: 10.1051/epjconf/201921408002
  26. D. Ciangottini et al., "Distributed and On-demand Cache for CMS Experiment at LHC", DOI: 10.1109/eScience.2018.00082
  27. ESCAPE EU Project, https://projectescape.eu/
