| Issue | EPJ Web Conf. Volume 337 (2025): 27th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2024) |
|---|---|
| Article Number | 01275 |
| Number of page(s) | 8 |
| DOI | https://doi.org/10.1051/epjconf/202533701275 |
| Published online | 07 October 2025 |
A Managed Tokens Service for Securely Keeping and Distributing Grid Tokens
Computational Science and Artificial Intelligence Directorate, Fermi National Accelerator Laboratory, Batavia, IL, USA
* Corresponding author: sbhat@fnal.gov
Fermilab is transitioning authentication and authorization for grid operations to bearer tokens based on the WLCG Common JWT (JSON Web Token) Profile. One of the functionalities that Fermilab experimenters rely on is the ability to automate batch job submission, which in turn depends on the ability to securely refresh and distribute the necessary credentials to experiment job submit points. Thus, with the transition to tokens for grid operations, we needed to create a service that would obtain, refresh, and distribute tokens for experimenters' use. This service spares experimenters from having to become experts in obtaining their own tokens and better protects the most sensitive long-lived credentials. The service also needed to scale widely, as Fermilab hosts many experiments, each of which needs its own credentials. To address these issues, we created and deployed a Managed Tokens service. The service is written in Go, taking advantage of that language's native concurrency primitives to scale operations easily as we onboard experiments. The service uses as its initial credentials a set of Kerberos keytabs stored on the same secure machine on which the Managed Tokens service runs. These Kerberos credentials allow the service to use htgettoken via condor_vault_storer to store vault tokens in the HTCondor credential managers (credds) that run on the batch system scheduler machines (HTCondor schedds), as well as to download a local, shorter-lived copy of each vault token. The Kerberos credentials are also used to distribute copies of the locally stored vault tokens to experiment submit points. When experimenters schedule jobs for submission, these distributed vault tokens are used to access a HashiCorp Vault instance (run separately from the Managed Tokens service), and the refresh tokens previously stored there are used to obtain the bearer token that is submitted with the job. We discuss here the design of the Managed Tokens service, elaborating on choices we made with regard to concurrent operations, configuration, monitoring, and deployment.
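The abstract notes that the service relies on Go's native concurrency primitives to scale as experiments are onboarded. The following is a minimal sketch, not the service's actual code, of how per-experiment token operations can be fanned out with goroutines and a WaitGroup; the experiment names, the `storeAndDistributeTokens` helper, and the timeout are illustrative assumptions.

```go
// Minimal sketch of fanning out per-experiment token operations with Go
// concurrency primitives. All names here are hypothetical, not taken from
// the Managed Tokens service itself.
package main

import (
	"context"
	"fmt"
	"sync"
	"time"
)

// storeAndDistributeTokens stands in for the real work described in the
// abstract: using a Kerberos keytab to run htgettoken via condor_vault_storer
// and then pushing the resulting vault token to the experiment's submit points.
func storeAndDistributeTokens(ctx context.Context, experiment string) error {
	select {
	case <-time.After(100 * time.Millisecond): // placeholder for the token operations
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	experiments := []string{"dune", "mu2e", "nova"} // illustrative experiment list

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	var wg sync.WaitGroup
	errs := make(chan error, len(experiments))

	// One goroutine per experiment: onboarding another experiment adds a
	// goroutine rather than requiring changes to the control flow.
	for _, exp := range experiments {
		wg.Add(1)
		go func(exp string) {
			defer wg.Done()
			if err := storeAndDistributeTokens(ctx, exp); err != nil {
				errs <- fmt.Errorf("%s: %w", exp, err)
			}
		}(exp)
	}

	wg.Wait()
	close(errs)

	for err := range errs {
		fmt.Println("token operation failed:", err)
	}
}
```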
© The Authors, published by EDP Sciences, 2025
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

