Open Access
EPJ Web Conf.
Volume 245, 2020
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number: 07055
Number of pages: 8
Section 7 - Facilities, Clouds and Containers
DOI: https://doi.org/10.1051/epjconf/202024507055
Published online: 16 November 2020