Open Access
EPJ Web Conf.
Volume 214, 2019
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)
Article Number 07015
Number of page(s) 8
Section T7 - Clouds, virtualisation & containers
Published online 17 September 2019
  1. David P. Anderson, BOINC: A System for Public-Resource Computing and Storage, Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing, 4-10 (2004)
  2. Daniel S. Myers, Adam L. Bazinet and Michael P. Cummings, Expanding the reach of Grid computing: combining Globus and BOINC-based systems, Grid Computing for Bioinformatics and Computational Biology, 71-84 (2007)
  3. David P. Anderson et al., SETI@home: an experiment in public-resource computing, Communications of the ACM, 45(11), 56-61 (2002)
  4. B. P. Abbott et al., Einstein@Home search for periodic gravitational waves in early S5 LIGO data, Physical Review D, 80(4), 042003 (2009)
  5. Werner Herr, D. I. Kaltchev, F. Schmidt and E. McIntosh, Large Scale Beam-beam Simulations for the CERN LHC using distributed computing, LHC-PROJECT-Report-927 (2006)
  6. Predrag Buncic et al., CernVM - a virtual software appliance for LHC applications, Journal of Physics: Conference Series, 219(4) (2010)
  7. Carlos Aguado Sanchez et al., CVMFS - a file system for the CernVM virtual appliance, Proceedings of XII Advanced Computing and Analysis Techniques in Physics Research (2008)
  8. C. Adam-Bourdarios, D. Cameron, A. Filipčič, E. Lançon and Wenjing Wu for the ATLAS Collaboration, ATLAS@Home: Harnessing Volunteer Computing for HEP, Journal of Physics: Conference Series, 664, 022009 (2015)
  9. C. Adam-Bourdarios, R. Bianchi, D. Cameron, A. Filipčič, G. Isacchini, E. Lançon, Wenjing Wu and the ATLAS Collaboration, Volunteer Computing Experience with ATLAS@Home, Journal of Physics: Conference Series, 898, 052009 (2017)
  10. Simone Campana, ATLAS Distributed Computing in LHC Run2, Journal of Physics: Conference Series, 664, 032004 (2015)
  11. A. Filipčič for the ATLAS Collaboration, ATLAS Distributed Computing Experience and Performance During the LHC Run-2, Journal of Physics: Conference Series, 895, 052015 (2017)
  12. T. Maeno, PanDA: distributed production and distributed analysis system for ATLAS, Journal of Physics: Conference Series, 119, 062036 (2008)
  13. Kaushik De, A. Klimentov, T. Maeno, P. Nilsson, D. Oleynik, S. Panitkin, Artem Petrosyan, J. Schovancova, A. Vaniachine and T. Wenaus, The future of PanDA in ATLAS distributed computing, Journal of Physics: Conference Series, 664, 062035 (2015)
  14. A. Rimoldi, A. Dell'Acqua, M. Gallas, A. Nairz, J. Boudreau, V. Tsulaia and D. Costanzo, The simulation for the ATLAS experiment: present status and outlook, 2004 IEEE Nuclear Science Symposium Conference Record, 3, 1886-1890 (2004)
  15. S. Yamamoto and M. Shapiro on behalf of the ATLAS Collaboration, The simulation principle and performance of the ATLAS fast calorimeter simulation FastCaloSim, ATL-COM-PHYS-2010-838 (2010)
  16. Paolo Calafiura, Charles Leggett, Rolf Seuster, Vakhtang Tsulaia and Peter Van Gemmeren, Running ATLAS workloads within massively parallel distributed applications using Athena Multi-Process framework (AthenaMP), Journal of Physics: Conference Series, 664, 072050 (2015)
  17. G. M. Kurtzer, V. Sochat and M. W. Bauer, Singularity: Scientific containers for mobility of compute, PLoS ONE, 12(5), e0177459 (2017)
  18. Puppet, [accessed 2018-11-12]