EPJ Web Conf.
Volume 245, 2020
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Number of page(s): 9
Section: 1 - Online and Real-time Computing
Published online: 16 November 2020
Mass storage interface LTSM for FAIR Phase 0 data acquisition
GSI Helmholtzzentrum für Schwerionenforschung GmbH, Planckstr. 1, D-64291 Darmstadt, Germany
Since 2018, several FAIR Phase 0 beamtimes have been conducted at GSI, Darmstadt. These beamtimes serve to test challenging new technologies for the upcoming FAIR facility while various physics experiments run on the existing GSI accelerators. One of these challenges concerns the performance, reliability, and scalability of the experiment data storage. Raw data collected by the event-building software of a large-scale detector data acquisition system must be written safely to a mass storage system such as a magnetic tape library. Besides this long-term archive, it is often required to process the data as soon as possible on a high-performance compute farm.
The C library LTSM (“Lightweight Tivoli Storage Management”) has been developed at the GSI IT department on top of the IBM TSM software. It provides a file API for writing raw listmode data files via TCP/IP sockets directly to an IBM TSM storage server. Moreover, the LTSM library offers Lustre HSM (“Hierarchical Storage Management”) capabilities for seamlessly archiving and retrieving data stored on a Lustre file system and a TSM server.
In spring 2019, LTSM was employed at the FAIR Phase 0 beamtimes at GSI. For the HADES experiment, LTSM was integrated into the DABC (“Data Acquisition Backbone Core”) event-building software. During the four weeks of Ag+Ag at 1.58 AGeV beam, the HADES event builders transferred about 400 TB of data via 8 parallel 10 GbE sockets, both to the TSM archive and to the “GSI Green Cube” HPC farm.
For other FAIR Phase 0 experiments that use the vintage MBS (“Multi Branch System”) event builders, an LTSM gateway application has been developed to connect the legacy RFIO (“Remote File I/O”) protocol of these DAQ systems to the new storage interface.
© The Authors, published by EDP Sciences, 2020
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.