Open Access
EPJ Web Conf.
Volume 245, 2020
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number 07038
Number of page(s) 8
Section 7 - Facilities, Clouds and Containers
DOI https://doi.org/10.1051/epjconf/202024507038
Published online 16 November 2020
  1. I. Bird, Computing for the Large Hadron Collider, Annual Review of Nuclear and Particle Science 61, 99–118 (2011), doi:10.1146/annurev-nucl-102010-130059
  2. J. Albrecht, A.A. Alves, G. Amadio et al., A Roadmap for HEP Software and Computing R&D for the 2020s, Comput. Softw. Big Sci. 3, 7 (2019), https://doi.org/10.1007/s41781-018-0018-8
  3. G.M. Kurtzer, V. Sochat, M.W. Bauer, Singularity: Scientific containers for mobility of compute, PLoS ONE 12(5), e0177459 (2017), https://doi.org/10.1371/journal.pone.0177459
  4. J. Blomer et al., Distributing LHC application software and conditions databases using the CernVM file system, J. Phys.: Conf. Ser. 331, 042003 (2011), https://doi.org/10.1088/1742-6596/331/4/042003
  5. T. Tannenbaum, D. Wright, K. Miller, M. Livny, Condor – A Distributed Job Scheduler, in Beowulf Cluster Computing with Linux (MIT Press, 2001), pp. 307–350
  6. R. Raman, Matchmaking Frameworks for Distributed Resource Management, PhD thesis, University of Wisconsin (October 2000)
  7. The NorduGrid project homepage, http://www.nordugrid.org [accessed 2020-03-09]
  8. A. Dorigo, P. Elmer, F. Furano, A. Hanushevsky, XROOTD/TXNetFile: A Highly Scalable Architecture for Data Access in the ROOT Environment, Proceedings of TELE-INFO'05, 46:1–46:6 (2005)
  9. A. Shoshani, A. Sim, J. Gu, Storage resource managers: Middleware components for grid storage, NASA Conference Publication (2002)
  10. C. Heidecker et al., Dynamic Resource Extension for Data Intensive Computing with Specialized Software Environments on HPC Systems, Proceedings of the 5th bwHPC Symposium (2019), http://dx.doi.org/10.15496/publikation-29051
  11. R. Barthel, S. Raffeiner, ForHLR: a New Tier-2 High-Performance Computing System for Research, Proceedings of the 3rd bwHPC Symposium (Universitätsbibliothek Heidelberg, 2017), pp. 73–75, doi:10.11588/heibooks.308.418
  12. R. Caspart et al., Setup and commissioning of a high-throughput analysis cluster, in these proceedings
  13. OpenStack Open Source Cloud Computing Software, https://www.openstack.org [accessed 2020-03-09]
  14. M. Giffels, M. Schnepf et al., TARDIS – Transparent Adaptive Resource Dynamic Integration System, https://doi.org/10.5281/zenodo.2240605
  15. Apache CloudStack – Open Source Cloud Computing, https://cloudstack.apache.org/ [accessed 2020-03-10]
  16. Moab HPC Suite, https://adaptivecomputing.com/cherry-services/moab-hpc/ [accessed 2020-03-10]
  17. A.B. Yoo et al., SLURM: Simple Linux Utility for Resource Management, in Job Scheduling Strategies for Parallel Processing (Springer Berlin Heidelberg, 2002), pp. 44–60, https://doi.org/10.1007/10968987_3
  18. R. Hipp et al., SQLite Database Engine, https://www.sqlite.org [accessed 2020-03-10]
  19. Telegraf documentation, https://docs.influxdata.com/telegraf [accessed 2020-03-10]
  20. M. Fischer et al., COBalD – the Opportunistic Balancing Daemon, http://doi.org/10.5281/zenodo.1887872
  21. M. Fischer et al., Lightweight dynamic integration of opportunistic resources, in these proceedings
  22. Europe's Leading Public-Private Partnership for Cloud, https://www.helixnebula.eu [accessed 2020-03-12]