
There's A New AI Tool That Can Spot Text Written By AI, Even When Humans Are Fooled

As computers are being used to create text, photos and even fake videos, new developments in AI could help spot the fakes that humans can’t detect.
[Image: Screenshot of the GLTR tool]

AI algorithms are already being used to write text that could pass as something a human wrote. This can be misused to create fake news or, for example, to attack a business competitor by flooding sites such as Zomato with a mass of realistic-sounding reviews. Now, though, researchers from Harvard working with the MIT-IBM Watson AI Lab have developed a new AI tool that can spot AI-generated text, reports MIT Technology Review.

The researchers created a tool called the Giant Language Model Test Room (GLTR), which looks at statistical patterns in text to determine whether it was composed by a human or a machine. According to its creators, GLTR applies a suite of baseline statistical methods that can detect generation artefacts across common sampling schemes.
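To get a feel for how such a test can work, here is a minimal sketch of a GLTR-style per-token check, assuming the open-source GPT-2 model and the Hugging Face transformers library; it illustrates the general technique and is not the researchers’ own implementation. For every word in a passage, a language model is asked how highly it ranked that word among its predictions.

    # A rough, illustrative GLTR-style rank test (not the researchers' code),
    # assuming the Hugging Face "transformers" library and the public GPT-2 model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def token_ranks(text):
        """For each token, report how highly GPT-2 ranked it among its predictions."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = model(ids).logits  # shape: (1, sequence_length, vocabulary_size)
        ranks = []
        for pos in range(ids.size(1) - 1):
            actual_next = ids[0, pos + 1]
            # Sort the vocabulary by predicted likelihood at this position.
            order = torch.argsort(logits[0, pos], descending=True)
            rank = (order == actual_next).nonzero().item()  # 0 = most predictable
            ranks.append((tokenizer.decode(actual_next), rank))
        return ranks

    for token, rank in token_ranks("The quick brown fox jumps over the lazy dog."):
        print(f"{token!r}: rank {rank}")

Machine-generated text tends to sit almost entirely in the low ranks, while human writing scatters across much higher, less predictable ones.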


“In a human-subjects study, we show that the annotation scheme provided by GLTR improves the human detection-rate of fake text from 54% to 72% without any prior training. GLTR is open-source and publicly deployed, and has already been widely used to detect generated outputs,” the researchers wrote.

In countries like India, we’ve seen platforms such as WhatsApp heavily targeted by political parties, which also put together huge teams of people to post on Facebook and Twitter. Automating that process with AI could scale such operations up even further, raising the reach of disinformation and propaganda to entirely new levels.

As it turns out, though, the researchers found that the AI algorithms used to generate text introduce a lot of predictability: because these systems tend to pick likely next words, machine-written passages lean heavily on statistically safe choices. Genuine news stories and real scientific abstracts, by contrast, were less statistically predictable. That contrast forms the basis of GLTR, which still relies on humans to study the output; the tool simply highlights the predictable snippets of text. “Our goal is to create human and AI collaboration systems,” said Sebastian Gehrmann, a PhD student involved in the work.
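Building on the sketch above, those per-token ranks can be collapsed into a single number a reader can take in at a glance, for example the fraction of words the model ranked among its top ten guesses. GLTR itself colour-codes tokens into top-10, top-100 and top-1,000 bands; the threshold and the rank lists below are made up purely for illustration.

    # Hypothetical summary score: what fraction of tokens did the model
    # rank among its top k guesses? (k=10 is an illustrative choice.)
    def topk_fraction(ranks, k=10):
        return sum(1 for r in ranks if r < k) / max(len(ranks), 1)

    # Made-up rank lists, for illustration only.
    machine_like = [0, 1, 0, 3, 0, 2, 5, 0, 1, 0]           # almost all top-10
    human_like = [4, 120, 7, 2, 950, 31, 2, 4000, 15, 60]   # far more scattered
    print(topk_fraction(machine_like))  # 1.0
    print(topk_fraction(human_like))    # 0.4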

Text generation is only one way in which AI is being used to fool human observers, though. Today, it’s easier than ever to fake a photograph using AI, and one developer even created a computer program that would process photos of women (and only women, not men) to digitally remove their clothes in seconds. Although that app was shut down, fears of digitally manipulated images remain.

Equally controversial are ‘Deepfake’ videos, where AI can be used to create realistic morphed videos. This has, again, been misused to make celebrity nude clips as well as to generate fake videos of political leaders saying things they didn’t actually say.

A new deepfake detection algorithm that can spot these videos has been developed, and digital forensics techniques keep evolving. But as the tools for making fakes grow more sophisticated too, it’s going to become harder and harder to know what’s real.
