Submissions automatically rejected by plagiarism software
Steve Gardner*, a renowned researcher at the University of Glasgow, is rather upset: his pioneering research on avian lung viruses has been rejected by the Journal of Airway Obstruction because it was given a “similarity score” of 38%, placing it under severe suspicion of plagiarism.
Renaissance for plagiarism
At the moment, plagiarism is experiencing a renaissance, as an investigation in Science showed. Using plagiarism detection software such as eTBLAST and Déjà vu, a database of highly similar citations, the authors found 212 previously undetected papers with suspected plagiarism. Plagiarism seems ubiquitous and dangerous, not only to science itself but to journals too. Journals fear their reputation will be damaged and therefore closely examine all submitted manuscripts – not for fraud (which is very difficult, as it requires many expert hours) but for plagiarism (which can be done automatically and at least gives you a good feeling). A look behind the scenes reveals that the latter often throws the baby out with the bathwater.
In our all-digital world, it has become easier to produce copies – but it has also become easier to detect them. Some plagiarism detection services, such as the above-mentioned tools, are available to the public, but recently a number of commercial products have popped up too. To support their business, they claim to act “to ensure the originality of written work … [and to] help editors, authors and researchers to prevent misconduct”. Like the free tools, these commercial software products are based on huge data collections of journal articles, books and websites. And here it becomes really interesting, because – as we will see later – obviously not every product knows how to manage such heterogeneous collections. Maybe they should have hired a librarian…
How to cope?
Numerous journals and entire publishing groups such as Nature, Wiley and Elsevier make use of these commercial services, driven by the fear of plagiarism and by COPE. COPE, short for the Committee on Publication Ethics, has 7000 members worldwide and provides advice to editors and publishers on all aspects of publication ethics and on how to handle cases of misconduct. To this end, COPE publishes the Code of Conduct and Best Practice Guidelines for Journal Editors, which recommends best practices to editors, such as “having systems in place to detect plagiarized text either for routine use or when suspicions are raised”. As a result, more and more journals put automatic routines in place, checking each and every manuscript upon submission.
How do we know that this system is working properly? To be frank: we do not know. We simply cannot know, because no journal will tell us anything about this delicate task. If a journal detects plagiarism, it is an embarrassment for both the author and the journal. So every journal will happily blurt out its rejection rates, but none will tell how many of its authors were suspected of plagiarism. Despite this silence, we can learn from Steve Gardner. His painful experience of being rejected for nothing made him look for support at … surprise, surprise … the library.
A closer look into the result sheets of the similarity check iThenticate, the plagiarism detection tool used by the Journal of Airway Obstruction, revealed three serious flaws to the librarian’s eye:
First, iThenticate screened Gardner’s manuscript against a database of conference proceedings and found a suspiciously similar abstract: plagiarism alarm! Unfortunately, the database was not properly indexed, so the check missed the point entirely: the abstract was by the very same Steve Gardner, presenting his preliminary findings to his colleagues.
Second, filler words and standard phrases such as “P <0.01”, “high-dose” and “ml/kg” were counted as plagiarism. Even references to studies, such as “data from the Avian Virus Outbreak Study (AVOS) show”, or manufacturers’ names, such as “Fresenius Germany GmbH, Bad Homburg, Germany”, elevated the plagiarism score.
The third failure must be attributed to the journal’s editorial board, who based their rejection solely on the output of a dull, hypersensitive piece of software and did not check it manually.
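Why do standard phrases inflate a similarity score? Most text-matching tools compare manuscripts by counting overlapping word n-grams against their document collections. The sketch below is a deliberately simplified, hypothetical illustration (invented texts, not iThenticate’s actual algorithm): two papers with entirely different findings still score 50% similar, purely because they share methods boilerplate and a study name.

```python
# A minimal sketch of n-gram overlap scoring, the basic idea behind many
# text-matching tools. Hypothetical texts and a simplified metric --
# NOT the actual algorithm of any commercial product.

def ngrams(text, n=3):
    """Return the set of word n-grams (here: trigrams) in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(manuscript, source, n=3):
    """Fraction of the manuscript's n-grams that also occur in the source."""
    m, s = ngrams(manuscript, n), ngrams(source, n)
    return len(m & s) / len(m) if m else 0.0

# Two reports with different results, sharing only standard phrasing
# and a reference to the same (fictitious) study.
manuscript = ("Birds received 5 ml/kg of high-dose antiviral solution. "
              "Data from the Avian Virus Outbreak Study (AVOS) show "
              "a significant reduction in lung lesions (P < 0.01).")
unrelated = ("Mice received 5 ml/kg of high-dose saline solution. "
             "Data from the Avian Virus Outbreak Study (AVOS) show "
             "no change in body weight (P < 0.01).")

score = similarity(manuscript, unrelated)
print(f"{score:.0%}")  # prints "50%" -- alarming, yet nothing was copied
```

Without a stop-list for standard scientific phrases, and without a human reading the matches, a score like this is exactly the kind of false alarm that hit Gardner’s manuscript.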
The system is broken
All three faults are somewhat unforgivable. Everybody should have read the Science paper mentioned above and know that plagiarism detection software fails: in 98% of all cases the results are false positives. In the meantime, the author is saddled with the awful suspicion of plagiarism, and nobody from the journal will tell him how to overcome this accusation. It is a serious indictment of the scientific publishing system when its main pillar, the author, is left standing out in the rain like this. Once, researchers and publishers shared common goals and values. That close relationship has been eroded and eaten away; it has become a mere interdependence of careers, profits and suspicions.
The authors are cash cows**
Scientific journals are paralyzed by the fear of falling from grace in the science world through fraud and plagiarism in their articles, and of thereby damaging their reputation. As a consequence, they put all authors under suspicion and treat them as presumably guilty petitioners. Fear is a bad counselor: researchers will remember any bad treatment and will look for ways out of the vicious circle of commercial publishing. The once symbiotic alliance between researchers and publishers is a discontinued model. Personally, I doubt very much that a publishing system has a future in which researchers are regarded only as a means to make money, as cash cows.
* The names of the author and the journal were changed because the author asked to remain incognito, which in itself throws an interesting light on the (im)balance of power in the publishing system.
** In business, a cash cow is a product or a business unit that generates unusually high profit margins: so high that it is responsible for a large amount of a company’s operating profit. Wikipedia, the free encyclopedia
- Long TC, Errami M, George AC, Sun Z, Garner HR. Scientific Integrity: Responding to Possible Plagiarism. Science. 2009 Mar 6;323:1293-1294
This article was published in the December issue 2012 of the JEAHIL.
Picture: (c) by jameek at photocase.com