Let's talk about ... PLAGIARISM (with poll)



In an article in today’s Guardian, Robert Topinka, an academic at Birkbeck, University of London, describes how the university’s plagiarism detection software (Turnitin) flagged a student’s work, reporting it as “100% AI-generated”.

Topinka writes:

When my student contested the AI detector’s judgment, I granted the appeal. I admit to trusting the human over the machine. But the defence was also convincing, and this particular student had been consistently writing in this style long before ChatGPT came into being. Still, I was making a high-stakes call without reliable evidence. It was a distressing experience for my student, and one that is being repeated across the sector.

You can read the article from the link in the first comment below.

Plagiarism detection software is now almost universal in universities, and posters around libraries and other common spaces warn of the strict penalties that a university may impose if an assignment is found not to be the original work of the student submitting it.

What’s your experience of plagiarism detection software?
(Choose as many options as you wish.)

  • I don’t get stressed out by anti-plagiarism software
  • Every time I submit a piece of work, I die inside a little
  • My work was original but it got flagged up for plagiarism
  • I know at least 1 student on my course who has been found guilty of plagiarism
  • University penalties for plagiarism are too harsh
0 voters
4 Likes

Link to Guardian article: The software says my student cheated using AI. They say they’re innocent. Who do I believe? | Robert Topinka | The Guardian

3 Likes

I was actually curious to see what would happen if I ran my A Level history coursework through some AI checkers. Some of them said 0% (obviously, because ChatGPT didn’t exist back then), but one of them said it was something like 99% AI? Honestly, their accuracy varies, and they shouldn’t be taken fully seriously by academic staff.

2 Likes

This is a really great thing to do, @Silver! It certainly shows the nonsense of universities adopting these technologies, which of course are sold to them by private companies.

Because these are commercial products, none of the companies is willing to be open and transparent about how their product works, so it becomes a black box into which students feed their work, with the output passed to the academic who has to make a judgement call. In the case of the academic in this article, the student and his work were well known to him, so he was able to judge that this work was little different to work the student had done in the past. But not every student will be so fortunate, and not every academic will be so diligent. And what about young academics, who will surely be pressurised to “trust the system”?

It’s AI, what could go wrong? /s

Students need robust support when defending themselves against accusations of plagiarism, and it should not be up to them to prove their innocence. (Not all will agree with this, I know.) I’ve run plagiarism hearings many times, and they are never pleasant for anyone involved, but AI raises the stakes, and it becomes too easy for people to say that we must “trust the machine”.

2 Likes

I completely agree, @philo. As a computer science student I have some knowledge of AI not being the most accurate; it’s actually something that I’m partially trying to address in my dissertation.

2 Likes

I never had this issue before, as every time I submitted an essay my plagiarism score was around 12% or below.

However, I have a pretty funny story:

My girlfriend wanted to check how accurate a plagiarism/AI checker was, so she asked ChatGPT to produce some content and slightly rephrased it afterwards. When she ran it through the checker, the plagiarism level it reported was minimal. She then tested some of her teacher’s content, and it came back as 100% made by ChatGPT :rofl: :rofl:

3 Likes

I guess you have come across a tool called Moss, which is used to detect plagiarism in computer code, @Silver. There is another tool called Mossad, which provides a way of evading Moss’s detection of plagiarised code.

1. Can plagiarism detectors catch all instances of cheating? Plagiarism detectors, such as Moss, can effectively detect typical forms of plagiarism, including cover-ups and code rearrangement. However, tools like Mossad can bypass these detectors by producing code variants that evade detection, enabling mass plagiarism.
2. How does Mossad disrupt the fingerprint window of plagiarism detectors? Mossad disrupts the fingerprint window by introducing additional code lines that get optimized away. By randomly inserting code lines from within the original file, Mossad ensures that the resulting fingerprint no longer matches the original code, thus evading detection (there’s a toy sketch of this idea just after this list).
3. Can Mossad-generated code be distinguished from authentic code? Empirical studies have shown that Mossad-generated code is not easily distinguishable from authentic code. Teaching assistants who graded assignments containing Mossad-generated code did not report anything suspicious. This highlights the challenges faced by plagiarism detection tools in identifying Mossad-generated plagiarism.
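
For anyone curious how that works in practice, here’s a minimal toy sketch in Python (my own illustration, not the actual Moss or Mossad code). A Moss-style detector hashes overlapping windows of the code into fingerprints; re-inserting lines that already exist in the file changes every window that spans an insertion point, so fewer fingerprints match the original submission.

```python
import hashlib
import random


def kgram_fingerprints(source: str, k: int = 5) -> set:
    """Hash every k-line window of the source: a crude stand-in for
    Moss's winnowed k-gram fingerprints over the token stream."""
    lines = [ln.strip() for ln in source.splitlines() if ln.strip()]
    return {
        hashlib.md5("".join(lines[i:i + k]).encode()).hexdigest()
        for i in range(len(lines) - k + 1)
    }


def mossad_style_mutation(source: str, insertions: int = 3, seed: int = 0) -> str:
    """Randomly re-insert lines that already exist in the file, mimicking
    Mossad's trick of adding statements that get optimised away but that
    alter every fingerprint window they land in."""
    rng = random.Random(seed)
    lines = source.splitlines()
    for _ in range(insertions):
        lines.insert(rng.randrange(len(lines) + 1), rng.choice(lines))
    return "\n".join(lines)


# A pretend "assignment": 20 trivial statements standing in for real code.
original = "\n".join(f"x{i} = {i} * 2" for i in range(20))
mutated = mossad_style_mutation(original)

a, b = kgram_fingerprints(original), kgram_fingerprints(mutated)
print(f"fingerprints still matching: {len(a & b)} of {len(a)}")
```

In this toy version the re-inserted assignments are harmless no-ops, which stands in for Mossad’s real trick of adding statements that the compiler optimises away; the point is simply that a handful of insertions is enough to break a noticeable share of the fingerprint matches.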

See this article.

This is a kind of “arms race” between plagiarism detectors and the tools built to evade them, at least for code. Whether there are viable ways of using AI to defeat programs like Turnitin I don’t know, but in principle they can be developed.

2 Likes

Ha ha! Kind of Quis custodiet ipsos custodes, @alex1grig!

A few people are beginning to do work on this thorny issue, but the field is moving so fast that work is often out of date by the time it’s published. Here’s one paper from mid-2023:

Scholarly Communication and Machine-Generated Text: Is it Finally AI vs AI in Plagiarism Detection?

The paper concludes by posing the question of whether we are entering an era in which AI detectors will be used to prevent AI-generated content from entering the scholarly communication process.