EPJ Web of Conferences, Volume 295 (2024)
26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023)
Section: Artificial Intelligence and Machine Learning
Article Number: 09018
Number of pages: 7
DOI: https://doi.org/10.1051/epjconf/202429509018
Published online: 6 May 2024
Fast Inclusive Flavour Tagging at LHCb
1 Instituto Galego de Física de Altas Enerxías (IGFAE), Universidade de Santiago de Compostela, Santiago de Compostela, Spain
2 Massachusetts Institute of Technology, Cambridge, MA, United States
3 European Organization for Nuclear Research (CERN), Geneva, Switzerland
* e-mail: claire.prouve@cern.ch
The task of identifying the B meson flavour at the primary interaction point in the LHCb detector is crucial for measurements of mixing and time-dependent CP violation. Flavour tagging is usually done with a small number of expert systems that select important tracks from which to infer the B meson flavour. Recent advances show that replacing all of these expert systems with a single ML algorithm that considers all tracks in an event yields an increase in tagging power. However, the current classifier takes a long time to train and is not suitable for use in real-time triggers. In this work we present a new classifier based on the DeepSet architecture. With the right inductive bias of permutation invariance, we achieve a large speed-up in training (10 minutes instead of multiple hours), a factor of 4–5 speed-up in inference for use in real-time environments such as the trigger, and a smaller tagging asymmetry. For the first time, we investigate and compare the performance of these "inclusive flavour taggers" on simulation of the upgraded LHCb detector for the third run of the LHC.
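The permutation invariance that motivates the DeepSet choice can be illustrated with a minimal sketch: embed each track with a shared per-track function, pool with an order-independent sum, then read out a single score. The feature and latent dimensions, weights, and tanh readout below are illustrative assumptions, not the LHCb model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8 features per track, 16-dimensional latent space.
N_FEAT, N_LAT = 8, 16
W_phi = rng.normal(size=(N_FEAT, N_LAT))  # shared per-track embedding weights
w_rho = rng.normal(size=N_LAT)            # readout weights

def deepset_tag(tracks):
    """Score one event given its (n_tracks, N_FEAT) track array."""
    embedded = np.tanh(tracks @ W_phi)    # phi: applied to every track alike
    pooled = embedded.sum(axis=0)         # sum pool: order of tracks is irrelevant
    return float(np.tanh(pooled @ w_rho)) # rho: single flavour-tag score

# The score is unchanged under any reordering of the tracks in the event.
tracks = rng.normal(size=(30, N_FEAT))    # toy event with 30 tracks
shuffled = tracks[rng.permutation(len(tracks))]
assert np.isclose(deepset_tag(tracks), deepset_tag(shuffled))
```

Because the per-track embedding is shared and the pooling is a plain sum, inference cost grows linearly with track multiplicity and the whole event can be processed in one batched matrix product, which is what makes architectures of this kind attractive for real-time trigger use.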
© The Authors, published by EDP Sciences, 2024
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.