Open Access
EPJ Web Conf., Volume 245 (2020)
24th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2019)
Article Number 11004, 10 pages
Section 11 - Plenary contributions
DOI: https://doi.org/10.1051/epjconf/202024511004
Published online: 16 November 2020

References
  1. http://www.top500.org
  2. NVIDIA (2017), https://images.nvidia.com/content/volta-architecture/pdf/volta-architecture-whitepaper.pdf
  3. E. Lindahl (ed.), The scientific case for computing in Europe 2018-2026 (2018), https://prace-ri.eu/third-scientific-case/
  4. R. Gerber, J. Hack, K. Riley, K. Antypas, R. Coffey, E. Dart, T. Straatsma, J. Wells, D. Bard, S. Dosanjh et al., Crosscut report: Exascale requirements reviews, Tech. rep., United States (2018), https://www.osti.gov/servlets/purl/1417653
  5. B. Joó, C. Jung, N.H. Christ, W. Detmold, R.G. Edwards, M. Savage, P. Shanahan, Status and future perspectives for lattice gauge theory calculations to the exascale and beyond, The European Physical Journal A 55, 199 (2019)
  6. T.C. Schulthess, P. Bauer, N. Wedi, O. Fuhrer, T. Hoefler, C. Schär, Reflecting on the goal and baseline for exascale computing: A roadmap based on weather and climate simulations, Computing in Science & Engineering 21, 30 (2019)
  7. https://eurohpc-ju.europa.eu
  8. Y. Lu, Paving the way for China exascale computing, CCF Transactions on High Performance Computing 1, 63 (2019)
  9. K. Bergman, S. Borkar, D. Campbell, W. Carlson, W. Dally, M. Denneau, P. Franzon, W. Harrod, J. Hiller, S. Karp et al., ExaScale Computing Study: Technology Challenges in Achieving Exascale Systems (2008), http://www.cse.nd.edu/Reports/2008/TR-2008-13.pdf
  10. L. Chang, D. Frank, R. Montoye, S. Koester, B. Ji, P. Coteus, R. Dennard, W. Haensch, Practical strategies for power-efficient computing technologies, Proceedings of the IEEE 98, 215 (2010)
  11. T. Yoshida, Fujitsu high performance CPU for the Post-K computer (2018), http://www.fujitsu.com/global/documents/solutions/business-technology/tc/catalog/20180821hotchips30.pdf
  12. https://www.european-processor-initiative.eu/
  13. N. Stephens, S. Biles, M. Boettcher, J. Eapen, M. Eyole, G. Gabrielli, M. Horsnell, G. Magklis, A. Martinez, N. Premillieu et al., The ARM Scalable Vector Extension, IEEE Micro 37, 26 (2017)
  14. A. Sodani, R. Gramunt, J. Corbal, H. Kim, K. Vinod, S. Chinthamani, S. Hutsell, R. Agarwal, Y. Liu, Knights Landing: Second-generation Intel Xeon Phi product, IEEE Micro 36, 34 (2016)
  15. J. Bent, G. Grider, B. Kettering, A. Manzanares, M. McClelland, A. Torres, A. Torrez, Storage challenges at Los Alamos National Lab, in Mass Storage Systems and Technologies (MSST), 2012 IEEE 28th Symposium on (2012), pp. 1–5, ISSN 2160-195X
  16. W. Schenck, S.E. Sayed, M. Foszczynski, W. Homberg, D. Pleiter, Evaluation and performance modeling of a burst buffer solution, Operating Systems Review 50, 12 (2016)
  17. S. Narasimhamurthy, N. Danilov, S. Wu, G. Umanesan, S.W.D. Chien, S. Rivas-Gomez, I.B. Peng, E. Laure, S. de Witt, D. Pleiter et al., The SAGE project: A storage centric approach for exascale computing: Invited paper, pp. 287–292 (2018)
  18. B. Alverson, E. Froese, L. Kaplan, D. Roweth, Cray XC series network, Tech. rep. (2012), https://www.cray.com/sites/default/files/resources/CrayXCNetwork.pdf
  19. S. Scott (2019), https://hoti.org/hoti26/slides/sscott.pdf
  20. A. Shpiner, Z. Haramaty, S. Eliad, V. Zdornov, B. Gafni, E. Zahavi, Dragonfly+: Low Cost Topology for Scaling Datacenters, in 2017 IEEE 3rd International Workshop on High-Performance Interconnection Networks in the Exascale and Big-Data Era (HiPINEB) (2017), pp. 1–8
  21. https://www.openmp.org
  22. https://www.openacc.org
  23. A. Fernández, V. Beltran, X. Martorell, R.M. Badia, E. Ayguadé, J. Labarta, Task-Based Programming with OmpSs and Its Application, in Euro-Par 2014: Parallel Processing Workshops, edited by L. Lopes, J. Žilinskas, A. Costan, R.G. Cascella, G. Kecskemeti, E. Jeannot, M. Cannataro, L. Ricci, S. Benkner, S. Petit et al. (Springer International Publishing, Cham, 2014), pp. 601–612, ISBN 978-3-319-14313-2
  24. https://docs.nvidia.com/cuda
  25. https://rocm.github.io
  26. https://www.khronos.org/sycl
  27. C. Augonnet, S. Thibault, R. Namyst, StarPU: a Runtime System for Scheduling Tasks over Accelerator-Based Multicore Machines, Research Report RR-7240, INRIA (2010), https://hal.inria.fr/inria-00467677
  28. H.C. Edwards, C.R. Trott, D. Sunderland, Kokkos: Enabling manycore performance portability through polymorphic memory access patterns, Journal of Parallel and Distributed Computing 74, 3202 (2014), Domain-Specific Languages and High-Level Frameworks for High-Performance Computing
  29. D.A. Beckingsale, J. Burmark, R. Hornung, H. Jones, W. Killian, A.J. Kunen, O. Pearce, P. Robinson, B.S. Ryujin, T.R. Scogland, RAJA: Portable Performance for Large-Scale Scientific Applications, in 2019 IEEE/ACM International Workshop on Performance, Portability and Productivity in HPC (P3HPC) (2019), pp. 71–81
  30. https://www.mpi-forum.org
  31. https://upc.lbl.gov
  32. http://www.openshmem.org
  33. http://gpi-site.com
  34. P. Amini, H. Kaiser, Assessing the Performance Impact of using an Active Global Address Space in HPX: A Case for AGAS, in 2019 IEEE/ACM Third Annual Workshop on Emerging Parallel and Distributed Runtime Systems and Middleware (IPDRM) (2019), pp. 26–33
  35. T. Heller, B.A. Lelbach, K.A. Huck, J. Biddiscombe, P. Grubel, A.E. Koniges, M. Kretz, D. Marcello, D. Pfander, A. Serio et al., Harnessing billions of tasks for a scalable portable hydrodynamic simulation of the merger of two stars, The International Journal of High Performance Computing Applications 33, 699 (2019), https://doi.org/10.1177/1094342018819744
  36. https://www.maestro-data.eu
  37. T.C. Schulthess, Programming revisited, Nature Physics 11, 369 (2015)
  38. D. Cameron, A. Filipčič, W. Guan, V. Tsulaia, R. Walker, T. Wenaus, Exploiting opportunistic resources for ATLAS with ARC CE and the event service, Journal of Physics: Conference Series 898, 052010 (2017)
  39. T. Bicer, D. Gursoy, R. Kettimuthu, I.T. Foster, B. Ren, V. De Andrede, F. De Carlo, Real-time data analysis and autonomous steering of synchrotron light source experiments, pp. 59–68 (2017)
  40. ETP4HPC, Strategic Research Agenda 4 (2020), https://www.etp4hpc.eu/sra-020.html
