Recently, the use of pedagogical ICT tools has been attracting attention, driven by the shift to online classes to prevent the spread of COVID-19. In the area of English writing instruction, automated writing evaluation (AWE) has been introduced at universities. The benefits of using AWE for proofreading English include consistent feedback on syntactic and lexical errors delivered in a short period of time (Dikli, 2010) and the immediacy of that feedback, which keeps learners motivated to revise their texts and thus improves the quality of their final products (Stevenson & Phakiti, 2014). However, the efficacy of AWE feedback in reducing the number of specific errors, and its expected effect on the satisfaction and motivation of both the learners and the teachers using AWE, has yet to be verified.
To identify how technology can enhance students' feedback practice and produce better texts than the conventional approach of teachers providing comments manually, we introduced the online feedback tool Turnitin Feedback Studio (https://www.turnitin.com/; hereafter TFS) in classes in the Life Science Department at Ritsumeikan University in Japan. Students are engaged in science projects and report their results in English papers of up to 2,000 words. TFS instantly detects syntactic, lexical, and mechanical errors in students' papers, but it does not provide suggestions for revision; students must work out how to revise their texts themselves. Under this condition, three research questions were formulated: 1) Does TFS help students improve their texts more than their teachers' manual feedback does? 2) Under which condition do students' texts improve most: revising errors individually, assisted by peers, or assisted by teachers? 3) Does the use of TFS improve students' and teachers' satisfaction with and motivation toward the feedback practice? For 1), we examined a control group (without any assistance), a group assisted by TFS, and a group with teacher feedback. For 2), we examined a control group, a group using PeerMark (the peer feedback function in TFS) in pairs of differing English proficiency levels, and a group assisted by their teachers during revision. The number of errors was compared before and after the treatment. For 3), pre- and post-surveys were administered and compared, followed by semi-structured interviews with both students and teachers.
Preliminary results show that while students working alone could not revise untreatable errors (e.g., word choice and mechanics, as categorized by Ferris, 2011), peer and teacher assistance helped more in correcting grammatical mistakes. Overall satisfaction with, and increased motivation toward, the use of TFS were noted among the student groups, who left positive comments in the post-survey. It was also found that TFS can reduce the amount of marking teachers have to do: by letting the AWE flag syntactic and lexical errors beforehand, teachers can focus on meaning-oriented aspects such as content and logic. The limitations of AWE (e.g., pattern-matching feedback, the lack of individually tailored feedback) will be discussed in this paper; however, the authors believe this study showcases how technology can make a positive difference in teaching English writing, including a better learner experience and better learning outcomes.
Dikli, S. (2010). The nature of automated essay scoring feedback. CALICO Journal, 28(1), 99–134.
Ferris, D. (2011). Treatment of error in second language student writing. University of Michigan Press.
Stevenson, M., & Phakiti, A. (2014). The effect of computer-generated feedback on the quality of writing. Assessing Writing, 19, 51–65. https://doi.org/10.1016/j.asw.2013.11.007