Open Access
EPJ Web Conf. Volume 245, 2020
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number: 05025
Number of pages: 7
Section: 5 - Software Development
DOI: https://doi.org/10.1051/epjconf/202024505025
Published online: 16 November 2020
  1. The Luigi Authors, “Luigi Documentation”, 2018, https://luigi.readthedocs.io.
  2. The CMS Collaboration, “CMS computing: Technical Design Report”, 2005, cds:838359.
  3. The ATLAS Collaboration, “ATLAS Computing: Technical Design Report”, 2005, cds:837738.
  4. Free Software Foundation, “GNU Make”, 2016, https://www.gnu.org/software/make.
  5. T. Tannenbaum, D. Wright, K. Miller and M. Livny, “Condor: A Distributed Job Scheduler”, in “Beowulf Cluster Computing with Linux”, MIT Press, Cambridge, MA, USA, 2002, ISBN: 0262692740.
  6. D. Thain, T. Tannenbaum and M. Livny, “Distributed Computing in Practice: The Condor Experience”, Concurrency and Computation: Practice and Experience 17 (2005) 323.
  7. I. Lumb and C. Smith, “Scheduling Attributes and Platform LSF”, in “Grid Resource Management”, Kluwer Academic Publishers, Norwell, MA, USA, 2004, ISBN: 1402075758.
  8. M. Ellert et al., “Advanced Resource Connector middleware for lightweight computational Grids”, Future Gener. Comput. Syst. 23 (2007) 219.
  9. P. Andreetto et al., “The gLite workload management system”, J. Phys.: Conf. Ser. 119 (2008) 062007.
  10. The CMS Collaboration, “LHC computing Grid: Technical Design Report”, 2005, cds:840543.
  11. A. A. Ayllon et al., “Making the most of cloud storage - a toolkit for exploitation by WLCG experiments”, J. Phys.: Conf. Ser. 898 (2017) 062027, cds:2297053.
  12. C. Boettiger, “An introduction to Docker for reproducible research, with examples from the R environment”, ACM SIGOPS Operating Systems Review, Special Issue on Repeatability and Sharing of Experimental Artifacts 49 (2015) 71, arXiv:1410.0846.
  13. G. M. Kurtzer, V. Sochat and M. W. Bauer, “Singularity: Scientific containers for mobility of compute”, PLoS One 12 (2017) e0177459.
  14. The DPHEP Collaboration, “Status Report of the DPHEP Study Group: Towards a Global Effort for Sustainable Data Preservation in High Energy Physics”, 2012, arXiv:1205.4667.
  15. J. Cowton et al., “Open Data and Data Analysis Preservation Services for LHC Experiments”, J. Phys.: Conf. Ser. 664 (2015) 032030, cds:2134548.
  16. The CMS Collaboration, “Search for tt̄H production in the H → bb̄ decay channel with leptonic tt̄ decays in proton-proton collisions at √s = 13 TeV”, JHEP 03 (2019) 026, arXiv:1804.03682.
  17. M. Rieger, “Luigi Analysis Workflows Project”, 2018, https://github.com/riga/law.
