Issue: EPJ Web Conf., Volume 214, 2019
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)
Article Number: 03024
Number of pages: 7
Section: T3 - Distributed computing
DOI: https://doi.org/10.1051/epjconf/201921403024
Published online: 17 September 2019
Running IceCube GPU simulations on Titan
Wisconsin IceCube Particle Astrophysics Center, University of Wisconsin-Madison, 222 W Washington Ave, Suite 500, Madison, WI 53703, USA
* Corresponding author: vbrik@icecube.wisc.edu
Here we report IceCube’s first experiences of running GPU simulations on the Titan supercomputer. This undertaking was non-trivial because Titan is designed for High Performance Computing (HPC) workloads, whereas IceCube’s workloads fall under the High Throughput Computing (HTC) category. In particular: (i) Titan’s design, policies, and tools are geared heavily toward large MPI applications, while IceCube’s workloads consist of large numbers of relatively small independent jobs, (ii) Titan compute nodes run Cray Linux, which is not directly compatible with IceCube software, and (iii) Titan compute nodes cannot access outside networks, making it impossible to access IceCube’s CVMFS repositories and workload management systems. This report examines our experience of packaging our application in Singularity containers and using HTCondor as the second-level scheduler on the Titan supercomputer.
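To illustrate the approach summarized above, the sketch below shows how a worker started by a second-level scheduler might run one independent IceCube simulation job inside a Singularity container on a GPU node. It is a minimal, hypothetical example: the container image name, the in-container entry point, the use of CUDA_VISIBLE_DEVICES for GPU pinning, and the job arguments are all assumptions for illustration, not the configuration actually used on Titan.

```python
#!/usr/bin/env python3
"""Hedged sketch: run one independent IceCube GPU simulation job inside a
Singularity container, as a payload launched by a second-level scheduler.
Image name, entry point, and environment handling are illustrative only."""

import os
import subprocess
import sys

# Hypothetical container image bundling IceCube software built for a
# standard Linux userland (to work around Cray Linux incompatibility).
IMAGE = "icecube-simulation.sif"        # assumed image name
PAYLOAD = "/opt/icecube/run_simulation.sh"  # assumed in-container entry point


def run_job(job_id: str, gpu_index: int) -> int:
    """Run a single simulation job pinned to one GPU on the node."""
    # Restrict the job to the GPU assigned by the second-level scheduler.
    env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu_index))
    cmd = [
        "singularity", "exec", "--nv",  # --nv exposes the node's GPU driver
        IMAGE,
        PAYLOAD, job_id,
    ]
    return subprocess.call(cmd, env=env)


if __name__ == "__main__":
    sys.exit(run_job(sys.argv[1], int(sys.argv[2])))
```

In such a setup, the batch job submitted to Titan's native scheduler would start the second-level HTCondor pool, and HTCondor would then dispatch many small wrapper invocations like this one across the allocated GPU nodes.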
© The Authors, published by EDP Sciences, 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.