Turnitin tool offers only limited improvement in detection rates.
Machine learning software designed to help identify assignments produced by essay mills has the potential to deliver a limited improvement in detection rates, according to an Australian study.
Markers who took part in an experiment using an alpha version of Turnitin’s Authorship Investigate, which compares submissions with students’ previous work to identify anomalies, identified 59 per cent of contract cheating cases when using the tool – compared with 48 per cent without it.

Academics at Deakin University who conducted the test described the results as “very exciting”. However, other experts expressed disappointment that the improvement was not more significant.

For the experiment, detailed in Assessment & Evaluation in Higher Education, 24 experienced markers across a range of disciplines were each given 20 assignments. Each set contained 14 legitimate assignments and six that were purchased from contract cheating websites.

Once they had made an initial judgement, the markers were allowed to revise their decision on whether or not the paper was from an essay mill after using Authorship Investigate, which had accessed seven previous assignments written by each student to scrutinise their writing style. The tool compares linguistic attributes such as sentence complexity and length before warning whether elements of the submission fall outside the expected range.
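Turnitin has not published the internals of Authorship Investigate, but the comparison described above — pooling a student's previous work to establish an expected range for features such as sentence length, then flagging submissions that fall outside it — can be illustrated with a rough stylometric sketch. Everything here (function names, the sentence-splitting heuristic, the z-score threshold) is an assumption for illustration, not the tool's actual method.

```python
import re
import statistics

def sentence_lengths(text):
    # Split on terminal punctuation; a crude heuristic for sentence boundaries.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return [len(s.split()) for s in sentences]

def baseline_stats(prior_texts):
    # Pool sentence lengths across the student's previous assignments
    # to build an "expected range" for this one feature.
    lengths = [n for t in prior_texts for n in sentence_lengths(t)]
    return statistics.mean(lengths), statistics.stdev(lengths)

def flag_submission(submission, prior_texts, z_threshold=2.0):
    # Flag the submission if its mean sentence length deviates from the
    # student's historical mean by more than z_threshold standard deviations.
    mean, sd = baseline_stats(prior_texts)
    sub_mean = statistics.mean(sentence_lengths(submission))
    z = abs(sub_mean - mean) / sd if sd else 0.0
    return z > z_threshold, z
```

A production tool would combine many such features (vocabulary richness, punctuation habits, syntactic complexity) rather than relying on sentence length alone, which is easily confounded by genre and assignment type.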

Phillip Dawson, associate director of the Centre for Research in Assessment and Digital Learning at Deakin and one of the paper’s authors, said that the 11 percentage point increase in detection was “very exciting”, adding that more recent versions of Authorship Investigate that are now being used were likely to be even more effective.
He acknowledged that the improvement in detection was smaller than the 24 percentage point rise reported in an experiment conducted with Deakin colleague Wendy Sutherland-Smith – also a co-author on the latest paper – which examined the impact of improving markers’ training in detecting contract cheating.
However, the type of marker training they investigated had involved academics spending three hours examining submissions written by essay mills similar to those that they would be marking, and the pair argue that this would not always be possible when staff worked across multiple campuses and had other commitments.
“Academics often don’t follow through with accusations of contract cheating because it’s too hard and too time-consuming. [Authorship Investigate] will take some of this work out of their hands,” Dr Dawson said. “This will help markers have confidence to bring the evidence to the committee for discussion.”

Read more: Anna McKie, Times Higher Education, 5 October 2019