CAN ARTIFICIAL INTELLIGENCE CHALLENGE UNIVERSAL GRAMMAR? A THEORY-DRIVEN EMPIRICAL INVESTIGATION

Authors

  • Muhammad Afaq, MS English Linguistics, Department of English, Kohat University of Science and Technology, Kohat
  • Tariq Mehmood, PhD Scholar, Kohat University of Science and Technology, Kohat
  • Muhammad Owais Ayaz, Lecturer in English, Kohat University of Science and Technology, Kohat

DOI:

https://doi.org/10.63878/jalt1681

Keywords:

Universal Grammar, artificial intelligence, neural language models, Minimalism, syntax, linguistic competence.

Abstract

Universal Grammar (UG) has long been a foundational hypothesis in generative linguistics, proposing that human language is constrained by an innate, domain-specific cognitive system. Recent advances in artificial intelligence, particularly large neural language models, have reignited debates regarding the necessity and explanatory adequacy of UG. These models demonstrate remarkable linguistic performance despite lacking explicit grammatical representations, leading some scholars to argue that statistical learning mechanisms may render Universal Grammar theoretically redundant. This study offers a theory-driven empirical investigation into whether artificial intelligence genuinely challenges Universal Grammar or merely simulates linguistic behavior at a surface level. Drawing on Minimalist syntax, experimental findings from the generative tradition, and comparative analyses of UG-constrained and UG-violating structures, this paper argues that neural language models fail to consistently respect grammatical constraints central to UG. The findings suggest that artificial intelligence does not falsify Universal Grammar but instead clarifies the distinction between probabilistic language modeling and human grammatical competence. The study contributes to ongoing debates at the intersection of theoretical linguistics, cognitive science, and artificial intelligence.
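The abstract does not specify the comparison procedure, but a common way such analyses are run in the literature is to score minimal pairs: a UG-compliant sentence and a minimally different UG-violating counterpart are each assigned a log-probability by the model, and the model "respects" the constraint on that pair if it prefers the compliant form. The sketch below is illustrative only, not the authors' procedure; it assumes the Hugging Face transformers library with the pretrained gpt2 model, and the example sentences (a wh-island minimal pair) are hypothetical, not drawn from the study's materials.

```python
# Illustrative sketch (not the study's actual method): probe a neural
# language model's sensitivity to a UG constraint by comparing total
# log-probabilities over a minimal pair. Assumes `transformers` and
# the pretrained `gpt2` checkpoint.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        # With labels=input_ids the model returns the mean
        # cross-entropy over the predicted tokens; multiply by the
        # number of predicted positions to recover the total NLL.
        loss = model(ids, labels=ids).loss
    return -loss.item() * (ids.size(1) - 1)

# Hypothetical wh-island minimal pair: extraction from a declarative
# complement (grammatical) vs. from a wh-island (UG-violating).
grammatical = "What do you think that Mary bought?"
violation = "What do you wonder whether Mary bought?"

lp_good = sentence_logprob(grammatical)
lp_bad = sentence_logprob(violation)
print(f"grammatical: {lp_good:.2f}  violation: {lp_bad:.2f}")
```

A single pair is anecdotal; studies of this kind aggregate preferences over many controlled minimal pairs per constraint (in the style of targeted syntactic evaluation suites such as BLiMP), and it is inconsistency across such sets that grounds claims like the one this paper makes.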

Published

2025-12-27