EPJ Web Conf., Volume 251, 2021
25th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2021)
Article Number: 02045
Number of page(s): 11
Section: Distributed Computing, Data Management and Facilities
DOI: https://doi.org/10.1051/epjconf/202125102045
Published online: 23 August 2021
First experiences with a portable analysis infrastructure for LHC at INFN
1 INFN Sezione di Perugia, Via Alessandro Pascoli 23c, 06123 Perugia, Italy
2 INFN Sezione di Pisa, L.go B. Pontecorvo 3, 56127 Pisa, Italy
3 INFN-CNAF, Viale Carlo Berti Pichat 6/2, 40127 Bologna, Italy
The challenges posed by the HL-LHC era are not limited to the sheer amount of data to be processed: optimizing the analysts' experience will also bring important benefits to the LHC communities in terms of total resource needs, user satisfaction, and reduced time to publication. At the Italian National Institute for Nuclear Physics (INFN), a portable software stack for analysis has been proposed, based on cloud-native tools and capable of providing users with a fully integrated analysis environment for the CMS experiment. The main characterizing traits of the solution are its user-driven design and its portability to any cloud resource provider. All this is made possible by an evolution towards a "python-based" framework that enables the use of a set of open-source technologies widely adopted in both cloud-native and data-science environments. In addition, a "single sign-on"-like experience is available thanks to the standards-based integration of INDIGO-IAM with all the tools. Compute resources are integrated through a customized JupyterHub deployment, able to spawn identity-aware user instances ready to access data with no further setup actions. Integration with GPU resources is also available, designed to support increasingly widespread ML-based workflows. Seamless connections between the user UI and batch/big-data processing frameworks (Spark, HTCondor) are possible. Finally, experiment data access latency is reduced thanks to the integrated deployment of a scalable set of caches, developed in the context of the ESCAPE project and as such compatible with future scenarios in which a data lake will be available to the research community.
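To make the "identity-aware user instances" concrete, the sketch below shows how a JupyterHub deployment can forward an INDIGO-IAM token into each spawned single-user environment via the authenticator's `pre_spawn_start` hook. This is an illustrative configuration only: the hostnames, the `IAM_ACCESS_TOKEN` variable name, and the claim layout are assumptions, not the actual INFN deployment.

```python
# jupyterhub_config.py -- illustrative sketch, not the INFN production config.
# Endpoints and the injected variable name are placeholders.
from oauthenticator.generic import GenericOAuthenticator

class IAMAuthenticator(GenericOAuthenticator):
    """Authenticate users against an INDIGO-IAM instance via OIDC."""

    async def pre_spawn_start(self, user, spawner):
        # Forward the user's IAM access token into the single-user
        # container, so data access (e.g. through the cache layer)
        # requires no further setup by the user.
        auth_state = await user.get_auth_state()
        if auth_state:
            spawner.environment["IAM_ACCESS_TOKEN"] = auth_state["access_token"]

c = get_config()  # provided by JupyterHub when it loads this file
c.JupyterHub.authenticator_class = IAMAuthenticator
c.GenericOAuthenticator.oauth_callback_url = "https://hub.example.infn.it/hub/oauth_callback"
c.GenericOAuthenticator.authorize_url = "https://iam.example.infn.it/authorize"
c.GenericOAuthenticator.token_url = "https://iam.example.infn.it/token"
# Persist tokens in auth_state so pre_spawn_start can inject them.
c.GenericOAuthenticator.enable_auth_state = True
```

With `enable_auth_state` on, JupyterHub encrypts and stores the tokens returned at login, which is what allows the spawner hook to hand them to the user's session transparently.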
The outcome of evaluating this solution in action is presented, showing how a real CMS analysis workflow can use the infrastructure to achieve its results.
© The Authors, published by EDP Sciences, 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.