Open Access
EPJ Web Conf.
Volume 214, 2019
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)
Article Number 04033
Number of page(s) 7
Section T4 - Data handling
Published online 17 September 2019
  1. Peters A. and Janyst L., Exabyte scale storage at CERN, J. Phys.: Conf. Ser. 331 (2011)
  2. Rundeck: Platform for Self-Service Operations [accessed 2018-11-28]
  3. Peters A.J., Sindrilaru E.A. and Bitzes G., Scaling the EOS Namespace, in: Kunkel J., Yokota R., Taufer M., Shalf J. (eds.), High Performance Computing, ISC High Performance 2017, Lecture Notes in Computer Science, vol. 10524, Springer, Cham (2017)
  4. Kiryanov A. et al., Harvesting Cycles on Service Nodes, HEPiX Spring 2017 conference (2017) [accessed 2018-11-28]
  5. Scalable Weakly-consistent Infection-style Process Group Membership Protocol [accessed 2018-11-28]
  6. Lo Presti G. et al., CASTOR: A Distributed Storage Resource Facility for High Performance Data Processing at CERN, in: Proceedings of the 24th IEEE Conference on Mass Storage Systems and Technologies (MSST), September 24-27, 2007, IEEE Computer Society (2007)
  7. Cano E. et al., J. Phys.: Conf. Ser. 664, 042007 (2015)
  8. Calafiura P. et al., J. Phys.: Conf. Ser. 664, 062065 (2015)
  9. Traefik: The Cloud Native Edge Router [accessed 2018-11-28]
