ChatGPT: OpenAI releases tool to recognize AI texts

ChatGPT generates text of such quality that far-reaching consequences are feared, especially in schools and academia, where attempts at fraud are a particular concern. OpenAI, the developer of ChatGPT, now also provides a tool intended to distinguish AI-generated text from human writing.

Dubbed Classifier, the tool is based on a model trained on a dataset of texts on the same topics, written either by an AI or by a human. The recognition rate is not yet ideal: on a challenge set of English texts, the tool correctly flagged 26 percent of AI-written texts as “likely AI-written”, while it incorrectly labeled human-written text as AI-generated 9 percent of the time – the false positive rate.
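The two figures measure different things: the detection rate is computed over the AI-written texts only, the false positive rate over the human-written texts only. A minimal sketch (the function and the toy data are illustrative, not OpenAI's evaluation code) shows how the article's numbers arise:

```python
def detection_rates(labels, predictions):
    """Return (true positive rate, false positive rate).

    labels: True if the text is AI-written, False if human-written.
    predictions: True if the classifier flagged the text as AI-written.
    """
    tp = sum(1 for l, p in zip(labels, predictions) if l and p)
    fp = sum(1 for l, p in zip(labels, predictions) if not l and p)
    ai_total = sum(labels)                 # number of AI-written texts
    human_total = len(labels) - ai_total   # number of human-written texts
    return tp / ai_total, fp / human_total

# Toy data mirroring the article's figures: of 100 AI texts, 26 are
# flagged; of 100 human texts, 9 are wrongly flagged.
labels = [True] * 100 + [False] * 100
preds = [True] * 26 + [False] * 74 + [True] * 9 + [False] * 91

tpr, fpr = detection_rates(labels, preds)
# tpr -> 0.26, fpr -> 0.09
```

Note that a 9 percent false positive rate means roughly one in eleven human authors would be wrongly accused – a key reason the tool should not be used on its own.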

Limitations in automated detection

OpenAI therefore cautions that the classifier should not be used as the sole means of identifying AI texts. The known limitations are described in OpenAI's blog post: the classifier is particularly unreliable for texts of fewer than 1,000 characters; it has so far been designed for English and performs significantly worse in other languages; it is unreliable on programming code; and texts that deviate greatly from its training data also cause difficulties.

Anyone interested can try the tool as a web app on OpenAI's site. For this app, the decision threshold has been set so that the false positive rate stays low: human-written texts should, as far as possible, not be flagged as AI-generated.
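The trade-off behind that tuning can be sketched in a few lines. A classifier of this kind typically emits a probability-like score per text; raising the threshold at which a text is flagged lowers the false positive rate, at the cost of missing more AI-written texts. The scores and threshold values below are purely illustrative, not OpenAI's:

```python
def is_flagged(score, threshold):
    """Flag a text as AI-written if its score reaches the threshold."""
    return score >= threshold

# Illustrative classifier scores (higher = more AI-like).
human_scores = [0.10, 0.30, 0.55, 0.20]
ai_scores = [0.60, 0.90, 0.45, 0.80]

def false_positives(threshold):
    """Count human-written texts wrongly flagged at this threshold."""
    return sum(is_flagged(s, threshold) for s in human_scores)

def detected(threshold):
    """Count AI-written texts correctly flagged at this threshold."""
    return sum(is_flagged(s, threshold) for s in ai_scores)

# A permissive threshold flags one human text but catches more AI texts;
# a stricter threshold flags no human text but misses more AI texts.
false_positives(0.5)  # -> 1
detected(0.5)         # -> 3
false_positives(0.7)  # -> 0
detected(0.7)         # -> 2
```

This is why a tool tuned for few false accusations, as the web app is, will inevitably let a larger share of AI-written texts pass undetected.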

The classifier is not the first tool for distinguishing AI from human text. GPTZero was presented at the turn of the year, and OpenAI itself published a similar detector for the older GPT-2 language model years ago.

ChatGPT: far-reaching consequences for text production

With the tool, OpenAI is addressing educators directly. Shortly after ChatGPT's release in November, representatives of the academic sector warned that forms of assessment such as term papers may become practically unusable. This applies to the social sciences, for example, but ChatGPT has also passed a final exam for a Master of Business Administration (MBA). How far-reaching the consequences for the education sector will actually be remains difficult to estimate.

According to OpenAI, the classifier is also aimed at journalists and researchers who want to identify disinformation campaigns, for example. In addition, AI chatbots that pretend to be human could be recognized in this way.

Automatically generated texts are also causing a stir in journalism, and the first cases have already surfaced. The US tech portal CNET, for example, published a large number of articles without clearly labeling them as automatically generated. An editor was supposedly meant to review them, yet they contained factual errors and, in some cases, plagiarism.