Artificial intelligence detection software
Artificial intelligence detection software aims to determine whether some content (text, image, video or audio) was generated using artificial intelligence (AI).
As of 2023, prominent examples include GPTZero, which claims to detect whether text was created by artificial intelligence and is sometimes used by colleges and universities to deter student plagiarism. However, the reliability of such software is a topic of debate,[1] and there are concerns about its potential misapplication by educators.[2]
Text detection
For text, detection is usually aimed at preventing alleged plagiarism, often by looking for telltale signs of AI generation such as repetitive wording or AI hallucinations. Such tools are often used by teachers marking student work, usually on an ad hoc basis. Following the release of ChatGPT and similar generative AI text software, many educational establishments have issued policies against the use of AI by students.[3] AI text detection software is also used by those assessing job applicants, as well as by online search engines.[4]
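The repetition heuristic described above can be illustrated with a minimal sketch. The following Python example is an assumption chosen for demonstration only, not the algorithm of GPTZero or any other real detector: it scores a text by how much of it consists of its most frequent words, and flags the text if the score exceeds an arbitrary threshold.

```python
from collections import Counter
import re

def repetition_score(text: str) -> float:
    """Fraction of the text accounted for by its ten most frequent words.

    A toy heuristic: highly repetitive word choice yields a higher score.
    This is an illustrative sketch, not any real detector's algorithm.
    """
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    top_total = sum(count for _, count in counts.most_common(10))
    return top_total / len(words)

def looks_ai_generated(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose repetition score exceeds an arbitrary threshold."""
    return repetition_score(text) > threshold

if __name__ == "__main__":
    sample = "The cat sat on the mat. The cat sat on the mat again and again."
    print(round(repetition_score(sample), 2), looks_ai_generated(sample))
```

Commercial detectors generally rely on more sophisticated statistical signals than raw word counts, but the sketch conveys the general idea of flagging text by surface-level regularities.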
Current detectors are sometimes unreliable: they have incorrectly flagged human-written work as AI-generated[5][6][7] while failing to detect AI-generated work in other instances.[8] MIT Technology Review said that the technology "struggled to pick up ChatGPT-generated text that had been slightly rearranged by humans and obfuscated by a paraphrasing tool".[9] AI text detection software has also been shown to discriminate against non-native speakers of English.[4]
Two students from the University of California, Davis, nearly faced expulsion after their professors scanned their essays with Turnitin's text detection tool, which flagged the essays as AI-generated. Following media coverage[10] and a thorough investigation, the students were cleared of any wrongdoing.[11][12]
In April 2023, Cambridge University and other members of the Russell Group of universities opted out of Turnitin's AI text detection tool after expressing concerns that it was unreliable;[14] Texas A&M followed six months later.[13]
In May 2023, a professor at Texas A&M University–Commerce asked ChatGPT whether his students' submissions had been written by it, and the chatbot claimed they had. He threatened to fail the class, even though ChatGPT cannot reliably identify AI-generated writing.[15] No students were ultimately prevented from graduating over the incident, and all but one student (who admitted to using the software) were cleared of the accusations.[16]
Evading text detection
Software designed to bypass AI text detection is also available. In August 2023, Taloni et al. at Magna Græcia University conducted a study testing AI text detection.[17] The study found that the AI detection tool Originality.ai[18][19] identified GPT-4-generated text with a mean accuracy of 91.3%.
However, when the GPT-4 texts were processed through Undetectable.ai (a tool designed to bypass text detection),[20][21][22] the detector's mean accuracy dropped significantly, to 27.8%. The study was published in the journal Eye on Nature.com and shared on ResearchGate.[23]
Some experts also believe that techniques such as digital watermarking are ineffective, because watermarks can be removed, or can be added to human-made content to trigger false positives.[24]
Image, video and audio detection
A number of tools purport to detect AI-generated images (for example, those produced by Midjourney or DALL-E), but they are not completely reliable.[25][26] Others claim to identify video and audio deepfakes, though this technology is not yet fully reliable either.[27] Despite debate over the efficacy of watermarking, Google DeepMind is actively developing a detection tool called SynthID, which works by embedding a digital watermark, invisible to the human eye, into the pixels of an image.[28][29][30]
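SynthID's technique has not been published in detail. As a generic illustration of the idea of an imperceptible pixel watermark (not SynthID's actual method), the following Python sketch hides and recovers a short bit pattern in the least significant bits of an image array; the payload and the choice of least-significant-bit embedding are assumptions made for demonstration.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: list[int]) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of the first pixels.

    A toy illustration of an imperceptible pixel watermark; SynthID's
    actual technique is different and has not been publicly detailed.
    """
    flat = image.astype(np.uint8).flatten()
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | bit  # overwrite only the lowest bit
    return flat.reshape(image.shape)

def read_watermark(image: np.ndarray, n_bits: int) -> list[int]:
    """Recover the first n_bits hidden by embed_watermark."""
    flat = image.astype(np.uint8).flatten()
    return [int(v & 1) for v in flat[:n_bits]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)
    payload = [1, 0, 1, 1, 0, 0, 1, 0]           # hypothetical watermark bits
    marked = embed_watermark(img, payload)
    print(read_watermark(marked, len(payload)))   # [1, 0, 1, 1, 0, 0, 1, 0]
    # Each pixel value changes by at most 1, so the mark is imperceptible.
    print(int(np.max(np.abs(marked.astype(int) - img.astype(int)))))
```

A least-significant-bit scheme like this is easily destroyed by re-encoding or editing the image, which illustrates why the robustness of watermark-based detection is debated.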
References
- "'Don't use AI detectors for anything important,' says the author of the definitive 'AI Weirdness' blog. Her own book failed the test". Fortune. Retrieved 2023-10-21.
- "AI Content Detector Checks GPT-4, ChatGPT, Bard, & More". Retrieved 2023-08-23.
- Hern, Alex (31 December 2022). "AI-assisted plagiarism? ChatGPT bot says it has an answer for that". The Guardian. Retrieved 11 July 2023.
- Sample, Ian (10 July 2023). "Programs to detect AI discriminate against non-native English speakers, shows study". The Guardian. Retrieved 10 July 2023.
- Fowler, Geoffrey A. (2 June 2023). "Detecting AI may be impossible. That's a big problem for teachers". The Washington Post. Retrieved 10 July 2023.
- Tangermann, Victor (9 January 2023). "There's a Problem With That App That Detects GPT-Written Text: It's Not Very Accurate". Futurism. Retrieved 10 July 2023.
- "We tested a new ChatGPT-detector for teachers. It flagged an innocent student". The Washington Post. 1 April 2023. Retrieved 10 July 2023.
- Taylor, Josh (1 February 2023). "ChatGPT maker OpenAI releases 'not fully reliable' tool to detect AI generated content". The Guardian. Retrieved 11 July 2023.
- Williams, Rhiannon (7 July 2023). "AI-text detection tools are really easy to fool". MIT Technology Review. Retrieved 10 July 2023.
- "AI Detection Apps Keep Falsely Accusing Students of Cheating". Futurism. Retrieved 2023-10-21.
- Jimenez, Kayla. "Professors are using ChatGPT detector tools to accuse students of cheating. But what if the software is wrong?". USA TODAY. Retrieved 2023-10-21.
- Klee, Miles (2023-06-06). "She Was Falsely Accused of Cheating With AI -- And She Won't Be the Last". Rolling Stone. Retrieved 2023-10-21.
- Carter, Tom. "Some universities are ditching AI detection software amid fears students could be falsely accused of cheating by using ChatGPT". Business Insider. Retrieved 2023-10-21.
- Staton, Bethan (3 April 2023). "Universities express doubt over tool to detect AI-powered plagiarism". Financial Times. Retrieved 10 July 2023.
- Verma, Pranshu (18 May 2023). "A professor accused his class of using ChatGPT, putting diplomas in jeopardy". The Washington Post. Retrieved 10 July 2023.
- "College instructor put on blast for accusing students of using ChatGPT". NBC News. 18 May 2023. Retrieved 10 July 2023.
- Taloni, Andrea; Scorcia, Vincenzo; Giannaccare, Giuseppe (2023-08-02). "Modern threats in academia: evaluating plagiarism and artificial intelligence detection scores of ChatGPT". Eye: 1–4. doi:10.1038/s41433-023-02678-7. ISSN 1476-5454.
- Wiggers, Kyle (2023-02-16). "Most sites claiming to catch AI-written text fail spectacularly". TechCrunch. Retrieved 2023-10-21.
- "AI Content Checker and Plagiarism Check | GPT-4 | ChatGPT". originality.ai. Retrieved 2023-10-21.
- Glover, John (2023-10-05). "New Tool's Ability to Write Undetectable AI Text Gains Traction". DailyInvestNews. Retrieved 2023-10-21.
- Stojan, Jon. "Undetectable AI Writes Like A Human". USA TODAY. Retrieved 2023-10-21.
- "The Truly Undetectable AI Content Writing Tool". Undetectable AI. Retrieved 2023-10-21.
- Taloni, Andrea. Author profile. ResearchGate. https://www.researchgate.net/profile/Andrea-Taloni
- Knibbs, Kate. "Researchers Tested AI Watermarks—and Broke All of Them". Wired. ISSN 1059-1028. Retrieved 2023-10-21.
- Thompson, Stuart A.; Hsu, Tiffany (28 June 2023). "How Easy Is It to Fool A.I.-Detection Tools?". The New York Times. Retrieved 10 July 2023.
- Choudhury, Rizwan (October 15, 2023). "Expert debunks AI tool's claim that Israel's photo is fake". Interesting Engineering. Retrieved October 22, 2023.
- Hsu, Tiffany; Myers, Steven Lee (18 May 2023). "Another Side of the A.I. Boom: Detecting What A.I. Makes". The New York Times. Retrieved 10 July 2023.
- "Identifying AI-generated images with SynthID". www.deepmind.com. Retrieved 2023-10-21.
- Pierce, David (2023-08-29). "Google made a watermark for AI images that you can't edit out". The Verge. Retrieved 2023-10-21.
- Wiggers, Kyle (2023-08-29). "DeepMind partners with Google Cloud to watermark AI-generated images". TechCrunch. Retrieved 2023-10-21.