Are artificial intelligence detectors biased against non-native English speakers?

Artificial intelligence programs that can generate text at the click of a button are proliferating, the most popular among them being services like ChatGPT, which boasts hundreds of millions of monthly users. Despite the usefulness of these tools, many people cannot tell the difference between something written by a person and something written by a machine. As a result, machine-generated text can fuel disinformation or even facilitate cheating in academia. To combat this rise in AI-generated writing, some “GPT detectors” have been rolled out that can allegedly spot the difference, but these tools may carry biases of their own.

A recent opinion piece in the journal Patterns calls into question the accuracy of GPT detectors, finding that they are biased against non-native English speakers. The authors also found that the detectors are easy to fool: AI-generated text produced with more detailed prompts often slips past them. “This raises a pivotal question,” the authors write. “If AI-generated content can easily evade detection while human text is frequently misclassified, how effective are these detectors truly?”

Overlooking the biases in GPT detectors “may lead to unintended consequences, such as the marginalization of non-native speakers in evaluative or educational settings,” they conclude. As such, this is yet another stark example of how technology can reflect inherent prejudices in society. The authors caution against using GPT detectors in certain settings, especially educational environments with non-native English speakers, and recommend a thorough evaluation of this technology and its limitations as it becomes more widespread.
