A MIXED-METHODS ANALYSIS OF TEACHERS’ AND STUDENTS’ PERCEPTIONS OF NLP-GENERATED WRITING FEEDBACK USING COH-METRIX
DOI: https://doi.org/10.63878/jalt1535

Abstract
This study investigates teachers’ and students’ perceptions of NLP-generated writing feedback produced with Coh-Metrix, a computational linguistics tool developed by Graesser and McNamara (2004). Although NLP feedback systems are increasingly integrated into educational contexts, little is known about how learners and instructors interpret, trust, and use such feedback in authentic writing situations. To address this gap, the study employs a mixed-methods design that combines computational text analysis with qualitative interview data. Students’ writing samples were analyzed with Coh-Metrix to generate indices of cohesion, lexical sophistication, syntactic complexity, and readability. Semi-structured interviews were conducted with both students and teachers, and the resulting transcripts were examined using Braun and Clarke’s (2006) thematic analysis. Quantitative Coh-Metrix outputs were then compared with participants’ perceptions to identify areas of alignment and mismatch between automated evaluations and human judgments. Findings are expected to reveal how participants interpret NLP feedback, the extent to which they trust computational assessments, and the challenges they face when integrating such feedback into writing instruction or revision practices. The study contributes to a deeper understanding of how NLP tools can be implemented effectively in educational settings and offers insights for enhancing the pedagogical usefulness of automated writing feedback systems.
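For readers unfamiliar with the kinds of indices the abstract mentions, the sketch below computes simplified stand-ins in plain Python: a heuristic Flesch Reading Ease score (a readability proxy), a type-token ratio (a lexical diversity proxy), and adjacent-sentence word overlap (a crude proxy for referential cohesion). These are hypothetical illustrations only, not the actual Coh-Metrix algorithms, which are produced by the Coh-Metrix system itself; all function names and the sample text here are invented for demonstration.

import re

def sentences(text):
    # naive sentence splitter on ., !, ?
    return [s.strip() for s in re.split(r'[.!?]+', text) if s.strip()]

def words(text):
    return re.findall(r"[A-Za-z']+", text.lower())

def count_syllables(word):
    # crude heuristic: count vowel groups, drop a silent final 'e'
    groups = re.findall(r'[aeiouy]+', word.lower())
    n = len(groups)
    if word.lower().endswith('e') and n > 1:
        n -= 1
    return max(n, 1)

def flesch_reading_ease(text):
    # Flesch (1948): 206.835 - 1.015*(words/sentence) - 84.6*(syllables/word)
    sents, toks = sentences(text), words(text)
    syll = sum(count_syllables(w) for w in toks)
    return 206.835 - 1.015 * (len(toks) / len(sents)) - 84.6 * (syll / len(toks))

def type_token_ratio(text):
    # unique words over total words; higher values suggest richer vocabulary
    toks = words(text)
    return len(set(toks)) / len(toks)

def adjacent_overlap(text):
    # proportion of adjacent sentence pairs that share at least one word,
    # a rough analogue of referential cohesion indices
    sents = [set(words(s)) for s in sentences(text)]
    pairs = list(zip(sents, sents[1:]))
    if not pairs:
        return 0.0
    return sum(1 for a, b in pairs if a & b) / len(pairs)

sample = ("The students revised their drafts after feedback. "
          "The feedback highlighted cohesion problems in each draft.")
print(round(flesch_reading_ease(sample), 1))
print(round(type_token_ratio(sample), 2))
print(round(adjacent_overlap(sample), 2))

Real Coh-Metrix indices are considerably more sophisticated (for example, they rely on syntactic parsing and latent semantic analysis), but the proxies above convey the general idea of reducing a text to interpretable numerical measures that can then be set against teachers’ and students’ perceptions.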
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

