Issue: EPJ Web of Conf., Volume 295 (2024)
26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023)
Article Number: 01011
Number of page(s): 8
Section: Data and Metadata Organization, Management and Access
DOI: https://doi.org/10.1051/epjconf/202429501011
Published online: 06 May 2024
Calibration and Conditions Database of the ALICE experiment in Run 3
1 CERN, Esplanade des Particules 1, 1211 Geneva 23, Switzerland
2 AGH University of Krakow, al. Adama Mickiewicza 30, 30-059 Kraków, Poland
3 "Politehnica" University of Bucharest, Splaiul Independenței no. 313, sector 6, Bucharest, Romania
* e-mail: daniel.dosaru@upb.ro
** e-mail: costin.grigoras@cern.ch
*** e-mail: rafal.mucha@cern.ch
**** e-mail: trzebuniak.michal@gmail.com
The ALICE experiment at CERN has undergone a substantial detector, readout and software upgrade for LHC Run 3. A signature part of the upgrade is the triggerless detector readout, which necessitates real-time lossy data compression from 1.1 TB/s to 100 GB/s, performed on a GPU/CPU cluster of 250 nodes. To perform this compression, a significant part of the software that is traditionally considered offline, for example detector tracking, was moved to the front end of the experiment's data acquisition system. The same applies to the various configuration and conditions databases of the experiment, which have been replaced with a single homogeneous service serving the real-time compression, online data quality checks, and the subsequent secondary data passes, Monte Carlo simulation and data analysis.
The new service is called CCDB (Calibration and Conditions Database). It receives, stores and distributes objects and their metadata, created by online detector calibration tasks and control systems, by offline (Grid) workflows, or by users. CCDB propagates new objects in real time to the Online cluster and asynchronously replicates all content to Grid storage elements for later access by Grid jobs or by collaboration members. Access to the metadata and objects is provided via a REST API and a ROOT-based C++ client interface that streamlines interaction with the service from compiled code, while plain curl command-line calls offer a simple access alternative.
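As a hedged illustration of the curl-based access path mentioned above, the calls below sketch how an object and its metadata might be retrieved over the REST API; the host name, object path and URL layout are illustrative assumptions, not the authoritative endpoint scheme.

```sh
# Hedged sketch: fetch the object version valid at a given timestamp
# (ms since Unix epoch). Host, path and URL layout are illustrative;
# consult the CCDB documentation for the authoritative scheme.
curl -o calib.root \
  "http://alice-ccdb.cern.ch/DET/Calib/ExampleObject/1683280800000"

# Inspect the stored versions and their metadata for the same path.
curl "http://alice-ccdb.cern.ch/browse/DET/Calib/ExampleObject"
```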
In this paper we present the architecture and implementation details of the components that manage frequent updates of objects with millisecond-resolution intervals of validity, and how we achieved independent operation of the Online cluster while also making all objects available to Grid computing nodes.
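To make the millisecond-resolution validity intervals concrete, the following sketch shows what an upload with an explicit validity window could look like; the multipart field name, host and path layout are assumptions for illustration only, not the exact API.

```sh
# Hedged sketch: store an object with a millisecond-resolution validity
# interval. Field name, host and path layout are illustrative assumptions.
START=1683280800000   # validity start (ms since Unix epoch)
END=1683280800500     # validity end, 500 ms later
curl -F "blob=@calib.root" \
  "http://alice-ccdb.cern.ch/DET/Calib/ExampleObject/${START}/${END}"
```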
© The Authors, published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.