Open Access
EPJ Web Conf.
Volume 214, 2019
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)
Article Number 04047
Number of page(s) 5
Section T4 - Data handling
DOI https://doi.org/10.1051/epjconf/201921404047
Published online 17 September 2019
