To explain the behaviour you are seeing, I need to say a little about how Write & Improve works. Computers are not yet capable of understanding a piece of text in the same way as a human being. They do not have the same context or life experience that a human can bring to bear.

Write & Improve therefore relies upon a statistical analysis of a large number of features extracted from the text, which are then compared against the same features extracted from a large corpus of “training data” (essays from EFL students) whose writers’ level of proficiency is already known.

These features act as proxies for the student's level of attainment: some are indicative of good writing, others of poor writing. Write & Improve combines these positive and negative indications to generate the final score for a piece of writing.
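To make the idea concrete, here is a toy sketch of feature-based scoring. The features and weights below are invented for illustration; Write & Improve's real feature set and model are far larger and are not public.

```python
# Toy illustration of feature-based essay scoring.
# NOTE: the features and weights here are made up for this example;
# they are NOT Write & Improve's actual features or model.

def extract_features(text):
    """Extract a few simple proxy features from a piece of writing."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    return {
        "avg_word_length": sum(len(w) for w in words) / len(words),
        "avg_sentence_length": len(words) / len(sentences),
        "vocab_diversity": len({w.lower() for w in words}) / len(words),
    }

# Invented weights: positive weights reward an indicator of good writing;
# a negative weight would penalise an indicator of poor writing.
WEIGHTS = {
    "avg_word_length": 0.5,
    "avg_sentence_length": 0.1,
    "vocab_diversity": 2.0,
}

def score(text):
    """Combine the weighted feature values into a single score."""
    features = extract_features(text)
    return sum(WEIGHTS[name] * value for name, value in features.items())
```

In a real system the weights would be learned from the training corpus, which is exactly why the system is only reliable on writing that resembles that corpus.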

Thanks to our links with Cambridge English, we are lucky enough to have a very large body of training data, which is why we can be confident that Write & Improve gives accurate results across a wide range of students’ submissions.

Unfortunately, this also means that Write & Improve is only accurate on the type of writing that it’s been trained upon. In particular, because it’s been trained on writing created by EFL students, it does not provide accurate results for the kind of writing created by fluent native speakers. The features extracted from a fluent native speaker’s writing are different, and do not appear in the training data, so Write & Improve is unable to judge them.

So, for example, if you gave Write & Improve extracts from James Joyce, or Shakespeare, or one of Churchill’s speeches, it would give them very low marks too.

Simply put, your English may be “too good” for Write & Improve to be able to judge it.

Given writing created by genuine EFL students (the kind of writing that Write & Improve has been trained upon) you will see very accurate results. If that is not the case, please do let us know.
