EPJ Web Conf. Volume 344, 2025
AI-Integrated Physics, Technology, and Engineering Conference (AIPTEC 2025)
Article Number: 01036
Number of pages: 8
Section: AI-Integrated Physics, Technology, and Engineering
DOI: https://doi.org/10.1051/epjconf/202534401036
Published online: 22 December 2025
Open Access
  1. E. Nasarian, R. Alizadehsani, U. R. Acharya, K. L. Tsui, Designing interpretable ML system to enhance trust in healthcare: A systematic review to proposed responsible clinician-AI-collaboration framework. Inf. Fusion. 108, 102412 (2024). https://doi.org/10.1016/j.inffus.2024.102412
  2. R. Deberdt et al., Artificial intelligence and ESG in resources-intensive industries: Reviewing the use of AI in fisheries, mining, plastics, and forestry. Extr. Ind. Soc. 23, 101690 (2025). https://doi.org/10.1016/j.exis.2025.101690
  3. A. Casheekar, A. Lahiri, K. Rath, K. S. Prabhakar, K. Srinivasan, A contemporary review on chatbots, AI-powered virtual conversational agents, ChatGPT: Applications, open challenges and future research directions. Comput. Sci. Rev. 52, 100632 (2024). https://doi.org/10.1016/j.cosrev.2024.100632
  4. A. Farea, F. Emmert-Streib, Understanding question-answering systems: Evolution, applications, trends, and challenges. Eng. Appl. Artif. Intell. 156, 110997 (2025). https://doi.org/10.1016/j.engappai.2025.110997
  5. X. Cheng, X. Zhang, B. Yang, Y. Fu, An investigation on trust in AI-enabled collaboration: Application of AI-driven chatbot in accommodation-based sharing economy. Electron. Commer. Res. Appl. 54, 101164 (2022). https://doi.org/10.1016/j.elerap.2022.101164
  6. L. Pascazio et al., Question-answering system for combustion kinetics. Proc. Combust. Inst. 40, 105428 (2024). https://doi.org/10.1016/j.proci.2024.105428
  7. K. Du, Y. Zhao, R. Mao, F. Xing, E. Cambria, Natural language processing in finance: A survey. Inf. Fusion. 115, 102755 (2025). https://doi.org/10.1016/j.inffus.2024.102755
  8. P. Upadhyay, R. Agarwal, S. Dhiman, A. Sarkar, S. Chaturvedi, A comprehensive survey on answer generation methods using NLP. Nat. Lang. Process. J. 8, 100088 (2024). https://doi.org/10.1016/j.nlp.2024.100088
  9. M. S. Salim, S. I. Hossain, T. Jalal, D. K. Bose, M. J. I. Basher, LLM based QA chatbot builder: A generative AI-based chatbot builder for question answering. SoftwareX. 29, 102029 (2025). https://doi.org/10.1016/j.softx.2024.102029
  10. F. Caccavale, C. L. Gargalo, J. Kager, S. Larsen, K. V. Gernaey, U. Krühne, ChatGMP: A case of AI chatbots in chemical engineering education towards the automation of repetitive tasks. Comput. Educ. Artif. Intell. 8 (2025). https://doi.org/10.1016/j.caeai.2024.100354
  11. A. Abdi, S. Hasan, M. Arshi, S. M. Shamsuddin, N. Idris, A question answering system in hadith using linguistic knowledge. Comput. Speech Lang. 60, 101023 (2020). https://doi.org/10.1016/j.csl.2019.101023
  12. A. B. Kamran, B. Abro, A. Basharat, SemanticHadith: An ontology-driven knowledge graph for the hadith corpus. J. Web Semant. 78, 100797 (2023). https://doi.org/10.1016/j.websem.2023.100797
  13. S. S. Sazali, N. A. Rahman, Z. A. Bakar, Characteristics of Malay translated hadith corpus. J. King Saud Univ. - Comput. Inf. Sci. 34, 5, 2151–2160 (2022). https://doi.org/10.1016/j.jksuci.2020.07.011
  14. J. J. Kim, H. U. Kim, J. Adamowski, S. Hatami, H. Jeong, Comparative study of term-weighting schemes for environmental big data using machine learning. Environ. Model. Softw. 157, 105536 (2022). https://doi.org/10.1016/j.envsoft.2022.105536
  15. T. Dogan, A. K. Uysal, On term frequency factor in supervised term weighting schemes for text classification. Arab. J. Sci. Eng. 44, 11, 9545–9560 (2019). https://doi.org/10.1007/s13369-019-03920-9
  16. M. Okkalioglu, TF-IGM revisited: Imbalance text classification with relative imbalance ratio. Expert Syst. Appl. 217, 119578 (2023). https://doi.org/10.1016/j.eswa.2023.119578
  17. S. Deo, D. Banik, P. K. Pattnaik, Customized long short-term memory architecture for multi-document summarization with improved text feature set. Data Knowl. Eng. 159, 102440 (2025). https://doi.org/10.1016/j.datak.2025.102440
  18. K. Peyton, S. Unnikrishnan, A comparison of chatbot platforms with the state-of-the-art sentence BERT for answering online student FAQs. Results Eng. 7, 100856 (2023). https://doi.org/10.1016/j.rineng.2022.100856
  19. M. S. Islam, K. M. Alam, Sentiment analysis of Bangla language using a new comprehensive dataset BangDSA and the novel feature metric skipBangla-BERT. Nat. Lang. Process. J. 7, 100069 (2024). https://doi.org/10.1016/j.nlp.2024.100069
  20. C. Aparna, K. Rajchandar, A robust solution for recognizing accurate handwritten text extraction using quantum convolutional neural network and transformer models. Comput. Electr. Eng. 120, 109794 (2024). https://doi.org/10.1016/j.compeleceng.2024.109794
  21. J. Yang et al., BERT and hierarchical cross attention-based question answering over bridge inspection knowledge graph. Expert Syst. Appl. 233, 120896 (2023). https://doi.org/10.1016/j.eswa.2023.120896
  22. Z. Chen et al., Improving BERT with local context comprehension for multi-turn response selection in retrieval-based dialogue systems. Comput. Speech Lang. 82, 101525 (2023). https://doi.org/10.1016/j.csl.2023.101525
  23. Z. Li, K. Xu, Y. Liang, Y. Wang, Z. Zou, DiffuStory: Improving text diffusion models for creative story generation with contrastive learning and decoder-decoder Transformers. Expert Syst. Appl. 130154 (2025). https://doi.org/10.1016/j.eswa.2025.130154
  24. I. C. Rico, J. P. Espada, Expert system for extracting keywords in educational texts and textbooks based on transformers models. Expert Syst. Appl. 282, 127735 (2025). https://doi.org/10.1016/j.eswa.2025.127735
  25. H. Pan, B. Teng, Z. Li, Y. Fu, L. Li, A hybrid TH-LSTM-Transformer model for text generation from EEG signals during imagined character speech. Biomed. Signal Process. Control. 113, 108871 (2026). https://doi.org/10.1016/j.bspc.2025.108871
  26. J. Chen, P. K. Kudjo, S. Mensah, S. A. Brown, G. Akorfu, An automatic software vulnerability classification framework using term frequency-inverse gravity moment and feature selection. J. Syst. Softw. 167, 110616 (2020). https://doi.org/10.1016/j.jss.2020.110616
  27. J. M. Sanchez-Gomez, M. A. Vega-Rodríguez, C. J. Pérez, The impact of term-weighting schemes and similarity measures on extractive multi-document text summarization. Expert Syst. Appl. 169, 114510 (2021). https://doi.org/10.1016/j.eswa.2020.114510
  28. Z. Tang, W. Li, Y. Li, An improved supervised term weighting scheme for text representation and classification. Expert Syst. Appl. 189, 24, 115985 (2022). https://doi.org/10.1016/j.eswa.2021.115985
  29. J. Choi, S. W. Lee, Improving FastText with inverse document frequency of subwords. Pattern Recognit. Lett. 133, 165–172 (2020). https://doi.org/10.1016/j.patrec.2020.03.003
  30. L. Chen, L. Jiang, C. Li, Using modified term frequency to improve term weighting for text classification. Eng. Appl. Artif. Intell. 101, 104215 (2021). https://doi.org/10.1016/j.engappai.2021.104215
  31. A. Thakkar, K. Chaudhari, Predicting stock trend using an integrated term frequency–inverse document frequency-based feature weight matrix with neural networks. Appl. Soft Comput. J. 96, 106684 (2020). https://doi.org/10.1016/j.asoc.2020.106684
