| Issue | EPJ Web Conf., Volume 251 (2021): 25th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2021) |
|---|---|
| Article Number | 02054 |
| Number of page(s) | 8 |
| Section | Distributed Computing, Data Management and Facilities |
| DOI | https://doi.org/10.1051/epjconf/202125102054 |
| Published online | 23 August 2021 |
Distributed training and scalability for the particle clustering method UCluster
1 Department of Physics and Astronomy, Division of High Energy Physics, Uppsala University
2 Department of Information Technology, Division of Systems and Control, Uppsala University
3 Department of Information Technology, Division of Visual Information & Interaction, Uppsala University
4 Department of Mathematics, Uppsala University
5 Combient Competence Centre for Data Engineering Sciences, Uppsala University
* e-mail: olga.sunneborn.gudnadottir@cern.ch
In recent years, machine-learning methods have become increasingly important for the experiments at the Large Hadron Collider (LHC). They are utilised in everything from trigger systems to reconstruction and data analysis. The recent UCluster method is a general model providing unsupervised clustering of particle physics data that can easily be modified to provide solutions for a variety of different decision problems. In the current paper, we improve on the UCluster method by adding the option of training the model in a scalable and distributed fashion, thereby extending its utility to learn from arbitrarily large data sets. UCluster combines a graph-based neural network called ABCnet with a clustering step, using a combined loss function in the training phase. The original code is publicly available in TensorFlow v1.14 and has previously been trained on a single GPU. It shows a clustering accuracy of 81% when applied to the problem of multi-class classification of simulated jet events. Our implementation adds distributed training functionality by utilising the Horovod distributed training framework, which necessitated a migration of the code to TensorFlow v2. Together with the use of Parquet files for splitting data between compute nodes, the distributed training makes the model scalable to any amount of input data, something that will be essential for use with real LHC data sets. We find that the model is well suited for distributed training, with the training time decreasing in direct relation to the number of GPUs used. However, a more exhaustive, and possibly distributed, hyper-parameter search is required to reach the accuracy reported for the original UCluster method.
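As a rough illustration of the approach the abstract describes, the sketch below combines Horovod data-parallel training in TensorFlow 2 with per-worker Parquet shards. It is a minimal sketch, not the authors' implementation: the file pattern, column names, hyper-parameters, and the stand-in Keras classifier are illustrative assumptions, and UCluster itself pairs the ABCnet graph network with a clustering loss rather than the toy model shown here.

```python
# Minimal sketch (assumed names, not the UCluster code base) of
# Horovod data-parallel training in TensorFlow 2 with Parquet shards.
import glob

import horovod.tensorflow.keras as hvd
import numpy as np
import pandas as pd
import tensorflow as tf

hvd.init()

# Pin each Horovod process to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Each worker reads only every size()-th Parquet shard, so the full
# data set never has to fit on one node. The file pattern and column
# names are hypothetical.
shards = sorted(glob.glob("jets_*.parquet"))
df = pd.concat(pd.read_parquet(p) for p in shards[hvd.rank()::hvd.size()])
x = np.stack(df["features"].to_numpy())  # assumes an array-valued column
y = df["label"].to_numpy()

# Stand-in classifier; UCluster itself combines the ABCnet graph
# network with a clustering step and a combined loss.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Scale the learning rate with the worker count and wrap the optimizer
# so gradients are averaged across all GPUs at every step.
opt = hvd.DistributedOptimizer(tf.keras.optimizers.Adam(1e-3 * hvd.size()))
model.compile(optimizer=opt, loss="sparse_categorical_crossentropy")

model.fit(
    x,
    y,
    batch_size=256,
    epochs=10,
    # Start all workers from identical initial weights.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    # Only rank 0 reports progress.
    verbose=1 if hvd.rank() == 0 else 0,
)
```

Under these assumptions, such a script would be launched with, for example, `horovodrun -np 4 python train.py`, each of the four processes then training on its own quarter of the Parquet shards.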
© The Authors, published by EDP Sciences, 2021
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.