Key Highlights
- Students are using ChatGPT for essays
- OpenAI is exploring text watermarking and metadata to detect AI-generated text
- Effective detection tools are still in development
The rise of generative AI tools like ChatGPT has revolutionized a number of sectors, including education. Students frequently use these resources, well known for being user-friendly and effective, to produce essays and research papers. However, this widespread use has brought new challenges, especially around cheating and plagiarism.
ChatGPT And Academic Honesty
Reports have highlighted growing concerns about AI-assisted cheating. The Wall Street Journal recently reported that students are using ChatGPT to complete coursework, raising concerns about the integrity of academic work and fueling demand for tools that can recognize AI-generated content.
OpenAI’s Response To Cheating Concerns
In response to these developments, OpenAI has been working on a tool that can detect content created by its AI. According to The Wall Street Journal, the tool has been ready for about a year, but internal debates have delayed its release.
Text Watermarking
One of the key techniques OpenAI is exploring to address this issue is text watermarking. This approach embeds a subtle statistical pattern in the model's output that a paired detection model can later recognize. While robust against localized tampering such as paraphrasing, text watermarking is weaker against more global attacks. For example, a watermark can be circumvented by having another generative AI model rewrite the content, or by inserting and then deleting specific characters.
“While it has been highly accurate and even effective against localized tampering, such as paraphrasing, it is less robust against globalized tampering; like using translation systems, rewording with another generative model, or asking the model to insert a special character in between every word and then deleting that character – making it trivial to circumvent by bad actors,” OpenAI mentioned in its blog post.
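To make the idea concrete, here is a minimal toy sketch of statistical text watermarking. This is an illustrative simplification, not OpenAI's actual method: it hashes each previous token to split a tiny vocabulary into a "green" list, a watermarked generator only picks green tokens, and a detector flags text whose green-token rate is suspiciously high. All names (`VOCAB`, `green_list`, `detect`) are invented for this example.

```python
import hashlib
import random

# Toy vocabulary; a real system operates on a model's full token vocabulary.
VOCAB = ["the", "a", "cat", "dog", "sat", "ran", "on", "under", "mat", "rug"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Seed an RNG from the previous token so the green/red split
    # is reproducible at detection time without storing anything.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * fraction)))

def generate(start: str, length: int) -> list:
    # A watermarked "model": always choose from the green list.
    tokens = [start]
    for _ in range(length):
        tokens.append(random.choice(sorted(green_list(tokens[-1]))))
    return tokens

def detect(tokens: list, threshold: float = 0.75) -> bool:
    # Count how often each token falls in its predecessor's green list.
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1) >= threshold
```

The fragility the blog post describes is visible here: rewriting the text with another model breaks the token-to-token pattern, so the green-token rate drops back to chance and detection fails.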
Text Metadata Potential
In addition to text watermarking, OpenAI is also exploring metadata as a way to identify AI-generated content. Although still in its early stages, the technique looks more promising than watermarking. Because metadata is cryptographically signed, it carries no risk of false positives. Watermarking, by contrast, has a low false-positive rate per text, but applied at scale it would still produce a large number of total false positives; metadata offers a more reliable way to trace the origin of a text.
“For example, unlike watermarking, metadata is cryptographically signed, which means that there are no false positives…While text watermarking has a low false positive rate, applying it to large volumes of text would lead to a large number of total false positives,” OpenAI added.
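The "no false positives" property follows from how signatures work: a verifier either confirms a valid signature or it doesn't, so unsigned human text can never be wrongly flagged. Below is a minimal sketch of the idea using an HMAC over provenance metadata; it is an assumption-laden simplification (a real deployment would use public-key signatures, and `SECRET_KEY`, `attach_metadata`, and `verify` are hypothetical names, not OpenAI's API).

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the AI provider.
SECRET_KEY = b"provider-secret"

def attach_metadata(text: str, model: str) -> dict:
    # Bind the metadata to the exact text via its hash, then sign it.
    meta = {"model": model,
            "text_sha256": hashlib.sha256(text.encode()).hexdigest()}
    payload = json.dumps(meta, sort_keys=True).encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"text": text, "meta": meta, "sig": sig}

def verify(record: dict) -> bool:
    # Recompute the signature and check the text still matches its hash.
    payload = json.dumps(record["meta"], sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["sig"])
            and record["meta"]["text_sha256"]
                == hashlib.sha256(record["text"].encode()).hexdigest())
```

A forged or tampered record fails verification outright, which is why this approach avoids false positives; its weakness is the opposite one, since metadata is trivially stripped by copying the bare text.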
Current Limitations
Despite these promising developments, the tools are not yet available, which means there is currently no reliable way to tell whether content submitted by students was written by AI. OpenAI's ongoing research and development aims to fill this gap and help preserve academic integrity in the age of AI.
By continuing to explore new solutions such as text watermarking and metadata, OpenAI hopes to reduce the risks associated with AI-assisted cheating and support academic integrity standards.