EPJ Web Conf. Volume 245 (2020)
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number: 11002
Number of pages: 12
Section: 11 - Plenary contributions
DOI: https://doi.org/10.1051/epjconf/202024511002
Published online: 16 November 2020
Computing for the DUNE Long-Baseline Neutrino Oscillation Experiment
Heidi Schellman 1,2,*
1 Department of Physics, Oregon State University, Corvallis, OR 97331, USA
2 Scientific Computing Division, Fermilab, Batavia, IL 60555, USA
* e-mail: heidi.schellman@oregonstate.edu
This paper is based on a talk given at the 24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019) in Adelaide, Australia, in November 2019. It is intended in part to explain the context of DUNE computing for computing specialists.
The Deep Underground Neutrino Experiment (DUNE) collaboration consists of over 180 institutions from 33 countries. The experiment is now in preparation, with commissioning of the first 10 kt fiducial-volume Liquid Argon TPC expected over the period 2025-2028 and a long data-taking run with four modules expected from 2029 onward.
An active prototyping program is already in place, with a short test-beam run of a 700 t, 15,360-channel single-phase prototype at the Neutrino Platform at CERN in late 2018, and tests of a similarly sized dual-phase detector scheduled for mid-2019. The 2018 test-beam run was a valuable live test of our computing model. The detector produced raw data at rates of up to 2 GB/s. These data were stored at full rate on tape at CERN and Fermilab and replicated at sites in the UK and the Czech Republic. In total, 1.2 PB of raw data from beam and cosmic triggers were produced and reconstructed during the six-week test-beam run.
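As a rough consistency check on these numbers, the short Python sketch below relates the peak 2 GB/s rate to the 1.2 PB total. This is a back-of-envelope illustration only; the 42-day run length and the resulting duty cycle are assumptions, not figures from the paper.

    # Back-of-envelope check of the ProtoDUNE test-beam data volume.
    # Assumption (not from the paper): the six-week run is 42 calendar
    # days, and continuous running at the quoted peak rate sets an
    # upper bound on the volume.
    PEAK_RATE_GB_S = 2.0   # peak raw-data rate in GB/s (from the text)
    RUN_DAYS = 42          # assumed length of the six-week run
    TOTAL_RAW_PB = 1.2     # raw data actually recorded, PB (from the text)

    seconds = RUN_DAYS * 24 * 3600
    max_volume_pb = PEAK_RATE_GB_S * seconds / 1e6   # 1 PB = 1e6 GB

    # Fraction of the peak-rate upper bound that was actually recorded:
    duty_cycle = TOTAL_RAW_PB / max_volume_pb

    print(f"Upper bound at peak rate: {max_volume_pb:.1f} PB")
    print(f"Implied effective duty cycle: {duty_cycle:.0%}")

Running flat out at 2 GB/s for six weeks would have produced roughly 7 PB, so the 1.2 PB actually recorded corresponds to an effective duty cycle of order 15-20%, as one would expect from a triggered beam and cosmic run.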
Baseline predictions for the full DUNE detector, starting in the late 2020s, are 30-60 PB of raw data per year. In contrast to traditional HEP computational problems, DUNE's Liquid Argon TPC data consist of simple but very large (many GB) 2D data objects which share many characteristics with astrophysical images. This presents opportunities to exploit advances in machine learning and pattern recognition, and makes DUNE a frontier user of High Performance Computing facilities capable of massively parallel processing.
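To make the image analogy concrete, the following minimal NumPy sketch treats one TPC readout plane as a 2D channel-versus-time array. The channel count, tick count, and the synthetic track are illustrative assumptions, not actual DUNE readout parameters.

    import numpy as np

    # One LArTPC readout plane is naturally a 2D array: one axis is the
    # wire/channel number, the other the digitization time sample (tick).
    # These sizes are illustrative assumptions, not DUNE parameters.
    N_CHANNELS = 2560
    N_TICKS = 6000

    rng = np.random.default_rng(seed=0)

    # Mock waveforms: Gaussian pedestal noise in ADC counts.
    plane = rng.normal(loc=0.0, scale=2.0, size=(N_CHANNELS, N_TICKS))

    # A straight ionization track shows up as a diagonal band of charge,
    # much like a line feature in an astronomical image.
    for ch in range(500, 1500):
        tick = int(1000 + 3.5 * (ch - 500))
        plane[ch, tick:tick + 10] += 40.0  # deposited charge, arbitrary units

    # From here the event is effectively a grayscale image, so standard
    # image tools (thresholding, convolutional networks, ...) apply.
    hits = plane > 20.0
    print(f"plane shape {plane.shape}, ~{plane.nbytes / 1e6:.0f} MB as float64")
    print(f"samples above threshold: {hits.sum()}")

Even this single toy plane occupies over 100 MB, and a detector module reads out many planes per trigger, which is how individual events reach the multi-GB scale quoted above.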
© The Authors, published by EDP Sciences, 2020
This is an Open Access article distributed under the terms of the Creative Commons Attribution License 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.