Open Access
EPJ Web Conf., Volume 214 (2019)
23rd International Conference on Computing in High Energy and Nuclear Physics (CHEP 2018)
Article Number 04027, 8 pages
Section T4 - Data handling
DOI: https://doi.org/10.1051/epjconf/201921404027
Published online 17 September 2019