Open Access
EPJ Web Conf. Volume 251 (2021)
25th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2021)
Article Number 03045, 17 pages
Section: Offline Computing
DOI: https://doi.org/10.1051/epjconf/202125103045
Published online: 23 August 2021
1. J. Alwall et al., The automated computation of tree-level and next-to-leading order differential cross sections, and their matching to parton shower simulations, JHEP07(2014)079. https://doi.org/10.1007/JHEP07(2014)079
2. A. Valassi, E. Yazgan, J. McFayden (eds.) et al., Challenges in Monte Carlo event generator software for High-Luminosity LHC, Comput. Softw. Big Sci. 5, 12 (2021). https://doi.org/10.1007/s41781-021-00055-1
3. A. Valassi, E. Yazgan, J. McFayden, Monte Carlo generator strategy towards HL-LHC, WLCG meeting with LHCC referees (2020). https://doi.org/10.5281/zenodo.4028834
4. Nvidia, CUDA Toolkit. https://developer.nvidia.com/cuda-toolkit
5. K. Hagiwara et al., Fast calculation of HELAS amplitudes using graphics processing unit (GPU), Eur. Phys. J. C 66 (2010) 477. https://doi.org/10.1140/epjc/s10052-010-1276-8
6. K. Hagiwara et al., Calculation of HELAS amplitudes for QCD processes using graphics processing unit (GPU), Eur. Phys. J. C 70 (2010) 513. https://doi.org/10.1140/epjc/s10052-010-1465-5
7. K. Hagiwara et al., Fast computation of MadGraph amplitudes on graphics processing unit (GPU), Eur. Phys. J. C 73 (2013) 2608. https://doi.org/10.1140/epjc/s10052-013-2608-2
8. J. Kanzaki, Monte Carlo integration on GPU, Eur. Phys. J. C 71 (2011) 1559. https://doi.org/10.1140/epjc/s10052-011-1559-8
9. J. Kanzaki, Application of graphics processing unit (GPU) to software in elementary particle/high energy physics field, Procedia Computer Science 4 (2011) 869. https://doi.org/10.1016/j.procs.2011.04.092
10. "MadGraph5_aMC@NLO on GPU" project. https://madgraph5.github.io
11. HSF Physics Event Generator Working Group. https://hepsoftwarefoundation.org/workinggroups/generators.html
12. 25th International Conference on Computing in High-Energy and Nuclear Physics (vCHEP2021), 17-21 May 2021. https://indico.cern.ch/event/948465
13. A. Valassi, Reengineering the MadGraph5_aMC@NLO Monte Carlo event generator for GPUs and vector CPUs, talk presented at vCHEP2021 (2021). https://doi.org/10.5281/zenodo.4785174 and https://doi.org/10.17181/CERN.ESFS.PYDP
14. S. Frixione, B. R. Webber, Matching NLO QCD computations and parton shower simulations, JHEP06(2002)029. https://doi.org/10.1088/1126-6708/2002/06/029
15. J. Alwall, M. Herquet, F. Maltoni, O. Mattelaer, T. Stelzer, MadGraph 5: going beyond, JHEP06(2011)128. https://doi.org/10.1007/JHEP06(2011)128
16. O. Mattelaer, K. Ostrolenk, Speeding up MadGraph5_aMC@NLO, MCNET-21-01 (2021). arXiv:2102.00773
17. C. Degrande et al., UFO - the Universal FeynRules output, Comp. Phys. Comm. 183 (2012) 1201. https://doi.org/10.1016/j.cpc.2012.01.022
18. A. Alloul, N. D. Christensen, C. Degrande, C. Duhr, B. Fuks, FeynRules 2.0 - A complete toolbox for tree-level phenomenology, Comp. Phys. Comm. 185 (2014) 2250. https://doi.org/10.1016/j.cpc.2014.04.012
19. A. Semenov, LanHEP - A package for automatic generation of Feynman rules from the Lagrangian. Version 3.2, Comp. Phys. Comm. 201 (2016) 167. https://doi.org/10.1016/j.cpc.2016.01.003
20. F. Staub, SARAH (2008). arXiv:0806.0538
21. F. Maltoni, T. Stelzer, MadEvent: automatic event generation with MadGraph, JHEP02(2003)027. https://doi.org/10.1088/1126-6708/2003/02/027
22. R. Kleiss, The cross section for e+e− → e+e−e+e−, Nucl. Phys. B 241 (1984) 61. https://doi.org/10.1016/0550-3213(84)90197-4
23. K. Hagiwara, D. Zeppenfeld, Helicity amplitudes for heavy lepton production in e+e− annihilation, Nucl. Phys. B 274 (1986) 1. https://doi.org/10.1016/0550-3213(86)90615-2
24. K. Hagiwara, D. Zeppenfeld, Amplitudes for multi-parton processes involving a current at e+e−, e±p and hadron colliders, Nucl. Phys. B 313 (1989) 560. https://doi.org/10.1016/0550-3213(89)90397-0
25. P. de Aquino, W. Link, F. Maltoni, O. Mattelaer, T. Stelzer, ALOHA: Automatic libraries of helicity amplitudes for Feynman diagram computations, Comp. Phys. Comm. 183 (2012) 2254. https://doi.org/10.1016/j.cpc.2012.05.004
26. F. Maltoni, K. Paul, T. Stelzer, S. Willenbrock, Color-flow decomposition of QCD amplitudes, Phys. Rev. D 67 (2003) 014026. https://doi.org/10.1103/PhysRevD.67.014026
27. F. Halzen, A. D. Martin, Quarks and leptons: an introductory course in modern particle physics, Wiley (1984).
28. J. P. Ellis, TikZ-Feynman: Feynman diagrams with TikZ, Comp. Phys. Comm. 210 (2017) 103. https://doi.org/10.1016/j.cpc.2016.08.019
29. H. Murayama, I. Watanabe, K. Hagiwara, HELAS: HELicity Amplitude Subroutines for Feynman Diagram Evaluations, KEK-Report 91-11 (1992). https://lib-extopc.kek.jp/preprints/PDF/1991/9124/9124011.pdf
30. I. Watanabe, H. Murayama, K. Hagiwara, Evaluating Cross Sections at TeV Energy Scale by HELAS, KEK preprint 92-39 (1992). https://lib-extopc.kek.jp/preprints/PDF/1992/9227/9227039.pdf
31. AMD, ROCm documentation: HIP Programming Guide. https://rocmdocs.amd.com/en/latest/Programming_Guides/HIP-GUIDE.html
32. E. Zenker et al., Alpaka - An Abstraction Library for Parallel Kernel Acceleration, Proc. IEEE IPDPSW 2016, Chicago. https://doi.org/10.1109/IPDPSW.2016.50
33. Exascale Computing Project, Kokkos ecosystem. https://kokkos.org
34. Khronos Group, SYCL. https://www.khronos.org/sycl
35. "MadGraph5_aMC@NLO on GPU" project, Madgraph4Gpu code repository. https://github.com/madgraph5/madgraph4gpu
36. "MadGraph5_aMC@NLO on GPU" project, Madgraph4Gpu code tag CHEP2021. https://doi.org/10.5281/zenodo.5087381
37. MadGraph5_aMC@NLO, Official code repository. https://launchpad.net/mg5amcnlo
38. Nvidia, cuRAND: the CUDA random number generation library. https://developer.nvidia.com/curand
39. R. H. Kleiss, W. J. Stirling, S. D. Ellis, A new Monte Carlo treatment of multiparticle phase space at high energies, Comp. Phys. Comm. 40 (1986) 359. https://doi.org/10.1016/0010-4655(86)90119-0
40. Google, GoogleTest. https://github.com/google/googletest
41. E. Boos et al., Generic User Process Interface for Event Generators, Proc. Physics at TeV Colliders Workshop, Les Houches (2001). arXiv:hep-ph/0109068
42. J. Alwall et al., A standard format for Les Houches Event Files, Comp. Phys. Comm. 176 (2007) 300. https://doi.org/10.1016/j.cpc.2006.11.010
43. T. Sjöstrand et al., An introduction to PYTHIA 8.2, Comp. Phys. Comm. 191 (2015) 159. https://doi.org/10.1016/j.cpc.2015.01.024
44. S. Agostinelli et al., Geant4 — a simulation toolkit, Nucl. Instr. Meth. A 506 (2003) 250. https://doi.org/10.1016/S0168-9002(03)01368-8
45. Nvidia, Nvidia Tesla V100 GPU architecture. https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf
46. Nvidia, Nsight Systems. https://developer.nvidia.com/nsight-systems
47. Nvidia, CUDA C++ Best Practices Guide: Coalesced Access to Global Memory. https://docs.nvidia.com/cuda/cuda-c-best-practices-guide/index.html#coalesced-access-to-global-memory
48. Nvidia, Nsight Compute. https://developer.nvidia.com/nsight-compute
49. Nvidia, Nsight Compute CLI user manual: Nvprof Transition Guide, Metric Comparison. https://docs.nvidia.com/nsight-compute/NsightComputeCli/index.html#nvprof-metric-comparison
50. Nvidia, CUDA C++ Programming Guide: Technical Specifications per Compute Capability. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#features-and-technical-specifications
51. Nvidia, CUDA C++ Programming Guide: CUDA Graphs. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#cuda-graphs
52. Nvidia, CUDA C++ Programming Guide: SIMT architecture. https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#simt-architecture
53. GNU Compiler Collection documentation, Using Vector Instructions through Built-in Functions. https://gcc.gnu.org/onlinedocs/gcc/Vector-Extensions.html
54. Clang documentation, Clang Language Extensions: Vectors and Extended Vectors. https://clang.llvm.org/docs/LanguageExtensions.html#vectors-and-extended-vectors
55. Nvidia, Nvidia Turing GPU architecture. https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf
56. TechPowerUp GPU database, Nvidia Tesla V100 PCIe 32 GB. https://www.techpowerup.com/gpu-specs/tesla-v100-pcie-32-gb.c3184
57. TechPowerUp GPU database, Nvidia Tesla T4 Specs. https://www.techpowerup.com/gpu-specs/tesla-t4.c3316
58. The OpenMP API specification for parallel programming. https://www.openmp.org
59. Nvidia, Nvidia A100 Tensor Core GPU architecture. https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf
60. S. Roiser, MG5aMC plans for GPUs and vectorization, HSF generator WG meeting (May 2021). https://hepsoftwarefoundation.org/organization/2021/05/06/generators.html
61. Wikipedia, Amdahl's law. https://en.wikipedia.org/wiki/Amdahl%27s_law
62. G. M. Amdahl, Validity of the Single Processor Approach to Achieving Large-Scale Computing Capabilities, AFIPS Conference Proceedings 30 (1967) 483. https://doi.org/10.1145/1465482.1465560
63. S. Carrazza, J. M. Cruz-Martinez, VegasFlow: accelerating Monte Carlo simulation across multiple hardware platforms, Comp. Phys. Comm. 254 (2020) 107376. https://doi.org/10.1016/j.cpc.2020.107376
64. S. Carrazza, J. M. Cruz-Martinez, M. Rossi, PDFFlow: parton distribution functions on GPU (2020). arXiv:2009.06635
65. J. M. Cruz-Martinez, MadFlow: towards the automation of Monte Carlo simulation on GPU for particle physics processes, talk presented at vCHEP2021 (2021). https://indico.cern.ch/event/948465/contributions/4324113
66. S. Carrazza, J. M. Cruz-Martinez, M. Rossi, M. Zaro, Towards the automation of Monte Carlo simulation on GPU for particle physics processes, to appear in Proc. vCHEP2021. arXiv:2105.10529
67. A. Valassi et al., Using HEP experiment workflows for the benchmarking and accounting of WLCG computing resources, Proc. CHEP2019, EPJ Web of Conf. 245, 07035 (2020). https://doi.org/10.1051/epjconf/202024507035
68. M. Fontes Medeiros, HEPiX benchmarking solution for WLCG computing resources, talk presented at vCHEP2021 (2021). https://indico.cern.ch/event/948465/contributions/4323674
69. S. Roiser, Progress on porting MadGraph5_aMC@NLO to GPUs, HSF/WLCG Virtual Workshop (2020). https://indico.cern.ch/event/941278/contributions/4101793
70. Sheffield Virtual GPU Hackathon 2020. https://gpuhack.shef.ac.uk
