Open Access
EPJ Web Conf.
Volume 214, 2019
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)
Article Number 08004
Number of page(s) 8
Section T8 - Networks & facilities
Published online 17 September 2019
