

*AI was most definitely used in writing this article

Last week I received a frantic call from an inconsolable Master's student in Austria. He had just submitted his thesis to his university for review, and it had been flagged as written by AI. The university had given him one more chance to revise and resubmit his work. If it passed the AI detection tool, the reviewers would read the work and give him a final grade. If it failed the automated check, it would be automatically rejected and he would be dishonorably expelled from his program, with two years of study going down the drain.

AI Detection Tools to Uphold Research Integrity?

The recent surge in AI writing technologies has led to the rise and proliferation of AI detectors in the academic world. These detectors promise to be the gatekeepers of academic integrity by combating plagiarism and AI-generated content. While the ambition is noble, their practical implementation has seen its fair share of critical shortcomings.

The fundamental assumption underlying the creation of AI detection tools seems to be that AI writing can be detected the same way plagiarism is detected. There is a critical distinction, however: plagiarism detection simply looks for exact matches with existing works, an objective criterion that can be identified, measured, and replicated. AI writing, on the other hand, is original in its own right (even if drawn from unoriginal sources) and cannot easily be traced to a source.

My opposition to scholarly publishers relying on detection tools stems from both pragmatic and ideological reasons. Let's start with some of the pragmatic issues. Large Language Models learn from human writing and are built to resemble human writing in their outputs. Already at the launch of ChatGPT, it was clear that generative AI could produce writing that successfully mimics that of humans. Quantifying the respective human and AI components of a specific document is challenging, and oftentimes authors mix their own words with those suggested by the AI tool.

The imperfections of AI detectors are becoming more evident as they often misidentify genuine human-written content. Studies have shown error rates of up to 9% and higher, far too high to live with. In one notable case, an AI detector flagged the US Constitution as AI-produced. This false positive not only highlights the glaring imperfection of these detectors but also underscores the pitfalls awaiting academic authors who treat their reports as authoritative. A humorous yet disturbing case of such confusion arose when a professor at Texas A&M failed his entire class after ChatGPT responded in the affirmative when asked whether it had written the papers his students handed in.

In a shockingly candid admission, Turnitin acknowledged in a recent video that its AI-detection results should be taken 'with a grain of salt'. In addition, the company says that instructors will need to be the ones to 'make the final interpretation' of what was created by generative AI. Isn't that the exact reason faculty members are turning to these tools in the first place?!

Universities are starting to understand the implications of these admissions and have begun advising their faculty not to use the tools. In a guidance report, Vanderbilt University notes that Turnitin, its plagiarism-software supplier, originally claimed a 1% false positive rate at the launch of its AI-detection tool, but revised that figure to 4% after wider usage and testing. Even if those numbers improve, it would not be difficult for ill-intentioned authors to run AI output through paraphrasing software to remove traces of the original. Many universities have already changed course and are looking for alternative policies. OpenAI itself shut down a project attempting to detect its own outputs!

The fallacy of AI detectors has real-world consequences. Timnit Gebru, Founder and Executive Director at The Distributed AI Research Institute (DAIR), recently shared a distressing email she had received in which a writer was unjustly accused of employing AI. Such incidents can cause undue emotional distress and potentially tarnish a researcher's professional reputation. Even worse, these detectors are more likely to flag work by English as an additional language (EAL) speakers as AI-generated than work by their native English-speaking counterparts. The ripple effects can include mistrust, skepticism, and derailed academic careers, not to mention prolonged legal battles. The last thing any publisher should want is to risk further embedding biases and discrimination against EAL authors.

Why are We Running to Ban AI-assisted Writing Again?
