EPJ Web Conf. Volume 214, 2019
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)
Article Number: 03057
Number of pages: 9
Section: T3 - Distributed computing
DOI: https://doi.org/10.1051/epjconf/201921403057
Published online: 17 September 2019
Open Access