EXPLAINABLE AI IN LANGUAGE ASSESSMENT: INTERPRETING MACHINE LEARNING MODELS FOR ESL WRITING FEEDBACK
DOI: https://doi.org/10.63878/jalt2082

Abstract
This paper explored the use of explainable artificial intelligence (XAI) in ESL writing assessment by developing an interpretable machine learning model that produces both scores and explanatory feedback. A mixed-methods design was used to evaluate the model's accuracy and users' perceptions. The research was conducted at The University of Lahore and the University of Management and Technology, Lahore, using purposive sampling to recruit 100 ESL students and 10 teachers. Quantitative data came from human and AI scoring of essays; qualitative data came from questionnaires and semi-structured interviews. The results showed a significant positive correlation between AI-generated scores and human ratings (r = .86, p < .01), indicating that the model is highly reliable. Participants also perceived the feedback as easy to understand, interpretable, and pedagogically valuable, although some concerns about trust remained. The research suggests that explainable AI can improve the instructional quality and transparency of automated ESL writing assessment.
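To make the idea of score-plus-explanation concrete, here is a minimal sketch (not the authors' model) of an interpretable linear scoring function whose per-feature contributions double as explanatory feedback. The feature names, weights, and band scale are invented for illustration.

```python
# Hypothetical feature weights for an interpretable essay scorer.
# Each feature is assumed to be normalized to the 0-1 range.
WEIGHTS = {
    "lexical_diversity": 2.0,   # e.g. type/token ratio
    "grammar_accuracy":  3.0,   # e.g. share of error-free clauses
    "cohesion":          1.5,   # e.g. connective density
}
BIAS = 2.0  # base score on an assumed 1-10 band scale

def score_with_explanation(features):
    """Return (score, feedback): the overall band score and a list of
    each feature's signed contribution, ordered by impact."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    score = BIAS + sum(contributions.values())
    feedback = [
        f"{name}: {value:+.2f} points"
        for name, value in sorted(contributions.items(), key=lambda kv: -kv[1])
    ]
    return round(score, 2), feedback

# Invented feature values for a single essay.
essay = {"lexical_diversity": 0.62, "grammar_accuracy": 0.80, "cohesion": 0.55}
score, feedback = score_with_explanation(essay)
print(score)
for line in feedback:
    print(line)
```

Because every point in the score traces back to a named feature, the same arithmetic that produces the grade also produces the feedback, which is the transparency property the paper attributes to explainable models.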
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

