Open Access
EPJ Web of Conferences, Volume 295 (2024)
26th International Conference on Computing in High Energy and Nuclear Physics (CHEP 2023)
Article Number: 12003
Number of pages: 8
Section: Quantum Computing
DOI: https://doi.org/10.1051/epjconf/202429512003
Published online: 06 May 2024