EPJ Web Conf., Volume 214 (2019)
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)

Article Number: 08025
Number of pages: 8
Section: T8 - Networks & facilities
DOI: https://doi.org/10.1051/epjconf/201921408025
Published online: 17 September 2019
Sharing server nodes for storage and compute
1 CERN, 1211 Geneva, CH
2 University of Texas at Arlington, 701 South Nedderman Drive, Arlington, TX 76019, USA
3 Petersburg Nuclear Physics Institute, 1 mkr. Orlova Roshcha, Gatchina, 188300 Leningradskaya Oblast, RU
4 Norwegian University of Science and Technology (NTNU), Høgskoleringen 1, 7491 Trondheim, NO (seconded to CERN)
* e-mail: david.smith@cern.ch
Based on the observation of low average CPU utilisation across several hundred file storage servers in the EOS storage system at CERN, the Batch on EOS Extra Resources (BEER) project developed an approach to utilise these resources for batch processing as well. Initial proof-of-concept tests showed little interference between the batch and storage services on a node. Subsequently, a model for production was developed and implemented, and it has been deployed on part of the CERN EOS production service. The implementation and test results are presented. The potential for additional resources at the CERN Tier-0 centre is of the order of ten thousand hardware threads in the near term, and the approach is also a step towards a hyper-converged infrastructure.
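The abstract describes co-locating batch jobs with EOS storage daemons while keeping interference low. As a minimal sketch of one way such isolation can be arranged on Linux, the example below caps a batch process group using cgroup v2 limits so the storage service retains priority; the group name, paths, and limit values are illustrative assumptions, not the BEER production configuration.

```python
#!/usr/bin/env python3
"""Hypothetical sketch: confine batch work on a storage node with
cgroup v2 limits. Requires Linux with a cgroup v2 hierarchy mounted
at /sys/fs/cgroup and root privileges. All names are assumptions."""

import os
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")      # cgroup v2 mount point
BATCH_GROUP = CGROUP_ROOT / "beer-batch"  # hypothetical group name


def setup_batch_cgroup(memory_max_bytes: int, cpu_weight: int) -> None:
    """Create the batch cgroup and cap its memory and CPU share."""
    BATCH_GROUP.mkdir(exist_ok=True)
    # Hard memory cap so batch jobs cannot squeeze out the memory
    # the storage daemons rely on for caching.
    (BATCH_GROUP / "memory.max").write_text(str(memory_max_bytes))
    # Low CPU weight (the cgroup v2 default is 100): batch work is
    # deprioritised and mainly consumes otherwise idle cycles.
    (BATCH_GROUP / "cpu.weight").write_text(str(cpu_weight))


def confine(pid: int) -> None:
    """Move an already-running batch process into the batch cgroup."""
    (BATCH_GROUP / "cgroup.procs").write_text(str(pid))


if __name__ == "__main__":
    # Example: cap batch work at 16 GiB and a 10-vs-100 CPU weight.
    setup_batch_cgroup(memory_max_bytes=16 * 1024**3, cpu_weight=10)
    confine(os.getpid())  # in practice: the batch daemon's PID
```

Using a proportional CPU weight rather than a hard CPU quota matches the observation that motivates the paper: the storage servers are idle on average, so batch work can soak up spare cycles yet yield immediately when storage load rises.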
© The Authors, published by EDP Sciences, 2019
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.