AI has advanced rapidly in recent years. AI algorithms can now generate text convincing enough to fool people, opening the door to mass-produced fake news, bogus reviews, and fake social media accounts. Fortunately, AI can also be used to identify machine-generated text.
Researchers from Harvard University and the MIT-IBM Watson AI Lab have developed a new tool for spotting text generated by AI. Called the Giant Language Model Test Room (GLTR), it exploits the fact that AI text generators rely on statistical patterns in text rather than the actual meaning of words and sentences. In other words, the tool can tell whether the words you’re reading seem too predictable to have been written by a human.
GLTR highlights each word according to how statistically likely it is to appear after the words that precede it. As shown in the passage above (from Infinite Jest), the most predictable words are highlighted in green, less predictable words in yellow and then red, and the least predictable words in purple. When tested on snippets of text generated by OpenAI’s algorithm, GLTR finds a lot of predictability; genuine news articles and scientific abstracts contain more surprises.
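To make the idea concrete, here is a toy sketch of that ranking scheme. GLTR itself scores tokens with a large language model (the researchers used GPT-2); the hand-built bigram model, the tiny corpus, and the scaled-down color thresholds below are all illustrative stand-ins, not the tool's actual implementation.

```python
from collections import Counter, defaultdict

# Toy corpus used to estimate how predictable each next word is.
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# next_words[w] counts how often each word follows w in the corpus.
next_words = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_words[prev][cur] += 1

def rank_of(prev, word):
    """1-based rank of `word` among the model's predicted followers
    of `prev` (most frequent = rank 1); None if never observed."""
    ranked = [w for w, _ in next_words[prev].most_common()]
    return ranked.index(word) + 1 if word in ranked else None

def color(rank):
    # Scaled-down bands; GLTR's real thresholds are roughly
    # top-10 (green) / top-100 (yellow) / top-1000 (red) over a
    # full language-model vocabulary.
    if rank is None:
        return "purple"   # the model never predicted this word here
    if rank == 1:
        return "green"    # highly predictable
    if rank <= 3:
        return "yellow"
    return "red"

# Annotate a test sentence word by word (the first word has no
# preceding context, so it is skipped).
sentence = "the cat sat on the rug".split()
annotated = [(cur, color(rank_of(prev, cur)))
             for prev, cur in zip(sentence, sentence[1:])]
print(annotated)
```

A sentence stitched entirely from the corpus's most common continuations lights up green; a less expected ending like "rug" after "the" falls into a lower band, which is exactly the signal GLTR visualizes.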
You can try it out for yourself via this link.
Learn more about the research over at Technology Review.
(Image Credit: Technology Review)