EXPLAINABLE AI IN LANGUAGE ASSESSMENT: INTERPRETING MACHINE LEARNING MODELS FOR ESL WRITING FEEDBACK

Authors

  • Qaisra Honey

DOI:

https://doi.org/10.63878/jalt2082

Abstract

This paper explored the use of explainable artificial intelligence (XAI) in ESL writing assessment by developing an interpretable machine learning model that produces both scores and explanatory feedback. A mixed-methods design was used to evaluate the model's accuracy and users' perceptions of it. The research was conducted at The University of Lahore and the University of Management and Technology, Lahore, using purposive sampling to recruit 100 ESL students and 10 teachers. Quantitative data came from human and AI scoring of essays, and qualitative data from questionnaires and semi-structured interviews. The results showed a strong, statistically significant positive correlation between AI-generated scores and human ratings (r = .86, p < .01), indicating that the model is highly reliable. Participants also found the feedback easy to understand, interpretable, and pedagogically valuable, although some reservations about trusting the system remained. The research suggests that explainable AI has the potential to improve both the instructional quality and the transparency of automated ESL writing assessment.
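The abstract reports agreement between AI-generated and human essay scores as a Pearson correlation (r = .86). A minimal sketch of how such score agreement can be computed is shown below; the score vectors are invented for illustration and are not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two score vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # np.corrcoef returns the 2x2 correlation matrix; take the off-diagonal.
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical essay scores on a 0-10 scale (illustrative only).
human_scores = [6, 7, 5, 8, 9, 4, 7, 6]
ai_scores    = [6, 8, 5, 7, 9, 5, 7, 6]

r = pearson_r(human_scores, ai_scores)
print(f"Pearson r = {r:.2f}")
```

In a study like this one, each vector would hold one rating per essay (human rater vs. automated scorer), and a significance test on r would accompany the coefficient.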

Published

2026-02-15