EPJ Web Conf.
Volume 251 (2021)
25th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2021)
Distributed Computing, Data Management and Facilities
Published online: 23 August 2021
Seamless integration of commercial Clouds with ATLAS Distributed Computing
1 University of Texas, Arlington, TX, USA
2 California State University, Fresno, CA, USA
3 Brookhaven National Laboratory, Upton, NY, USA
4 CERN, Geneva, Switzerland
5 Bergische Universität Wuppertal, Germany
* e-mail: email@example.com
The ATLAS Experiment at CERN successfully uses a worldwide distributed Grid computing infrastructure to support its physics programme at the Large Hadron Collider (LHC). The Grid workflow system PanDA routinely manages up to 700,000 concurrently running production and analysis jobs to process simulation and detector data. In total, more than 500 PB of data are distributed over more than 150 sites in the WLCG and handled by the ATLAS data management system Rucio. To prepare for the ever-growing data rates of future LHC runs, new developments are underway to embrace industry-accepted protocols and technologies and to utilize opportunistic resources in a standard way. This paper reviews how the Google and Amazon Cloud computing services have been seamlessly integrated as a Grid site within PanDA and Rucio. Performance and brief cost evaluations are discussed. Such setups could offer advanced Cloud tool-sets and provide added value for the analysis facilities under discussion for LHC Run 4.
© The Authors, published by EDP Sciences, 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.