
Philosophy and Political Science

January 24, 2025; Seoul, South Korea: V International Scientific and Practical Conference «THEORETICAL AND PRACTICAL ASPECTS OF MODERN SCIENTIFIC RESEARCH»


THE PROBLEM OF UNDERSTANDING HUMAN IDEAS BY ARTIFICIAL INTELLIGENCE AND SPECIFICALLY LARGE LANGUAGE MODELS: A PHILOSOPHICAL ANALYSIS OF POSSIBILITIES AND LIMITATIONS


DOI
https://doi.org/10.36074/logos-24.01.2025.066
Published
20.02.2025

Abstract

The rapid advancement of Large Language Models (LLMs), exemplified by GPT-4, has transformed the landscape of artificial intelligence (AI) and natural language processing (NLP). These models have achieved remarkable humanlike fluency in generating coherent and contextually relevant text, raising both excitement and critical scrutiny within academia and industry [2]. Despite their linguistic prowess, fundamental questions remain about the nature of their “understanding.” Scholars highlight the tension between their statistical pattern-matching capabilities and the absence of deeper conceptual or experiential grounding, which is essential for truly understanding human language [4, 6]. This distinction is particularly salient when LLMs attempt tasks involving nuanced cultural, historical, or embodied contexts that are inherently tied to human lived experiences.

References

  1. Levchenko, Ye., & Shtanko, V. (2024). Trusting the invisible: The problem of decision-making by a neural network. Grail of Science, (35), 323–325. https://doi.org/10.36074/grail-of-science.19.01.2024.058
  2. Bianchini, F. (2024). Evaluating intelligence and knowledge in large language models. Topoi. https://doi.org/10.1007/s11245-024-10072-5
  3. Cuskley, C., Woods, R., & Flaherty, M. (2024). The limitations of large language models for understanding human language and cognition. Open Mind, 8, 1058–1083. https://doi.org/10.1162/opmi_a_00160
  4. Harati, K. (2024). ChatGPT and AI-powered writing tools: Unveiling risks and ethical challenges in scientific writing. Journal of Reviews in Medical Sciences, 4(1), e42. https://doi.org/10.22034/jrms.2024.493520.1031
  5. Mirzadeh, I., Alizadeh, K., Shahrokhi, H., Tuzel, O., Bengio, S., & Farajtabar, M. (2024). GSM-Symbolic: Understanding the limitations of mathematical reasoning in large language models. arXiv. https://doi.org/10.48550/arXiv.2410.05229
  6. Mitchell, M., & Krakauer, D. C. (2023). The debate over understanding in AI’s large language models. Proceedings of the National Academy of Sciences, 120(13). https://doi.org/10.1073/pnas.2215907120
  7. Zhang, J. (2024). Should we fear large language models? A structural analysis of the human reasoning system for elucidating LLM capabilities and risks through the lens of Heidegger’s philosophy. arXiv. https://doi.org/10.48550/arXiv.2403.03288