The Potential and Risks of AI-Generated Text

As more AI-powered creations come to life, they reveal new benefits to businesses and the need to better understand AI’s impact on societies.

By Julian Smith

February 1, 2023

Language may be the most complex behavior humans have ever come up with. Teaching machines how to understand and use it has always been one of the biggest challenges in the field of artificial intelligence (AI). After decades of sluggish progress, recent innovations in machine learning have helped push the process forward. Advanced algorithms can now generate (mostly) coherent text in English and other languages, with astonishing – and sometimes concerning – results.

A language is far more than a dictionary and a grammar guide. Words can be combined in infinite, subtle and often ambiguous ways, which means that teaching a computer a list of linguistic rules isn’t enough. In 2019, the research lab OpenAI released GPT-2, a “large language model” trained on vast swaths of text scraped from the internet. Almost like a growing child, it soaks in text and learns to predict patterns of words and phrases.

The process costs tens of millions of dollars and requires hundreds of parallel processors. Its successor, GPT-3, launched in 2020 and trained on about 200 billion words, can create paragraphs of text on a wide range of subjects that can be hard to distinguish from stories written by (ahem) people.
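To make “predicting patterns of words” concrete, here is a minimal sketch of text generation using the openly released GPT-2 weights, assuming the Hugging Face transformers library is installed; the prompt is an arbitrary example:

```python
# A minimal sketch: generating text with the openly released GPT-2 model.
# Assumes the Hugging Face "transformers" library (pip install transformers).
from transformers import pipeline

# Load GPT-2, the 2019 model described above; larger successors work the same way.
generator = pipeline("text-generation", model="gpt2")

# The model repeatedly predicts a likely next word (token), which is all
# "learning patterns of words and phrases" amounts to in practice.
result = generator(
    "Language may be the most complex behavior",
    max_length=50,           # cap the total output at roughly 50 tokens
    num_return_sequences=1,  # produce a single continuation
)
print(result[0]["generated_text"])
```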

OpenAI offers researchers and startups commercial access to its models through an application programming interface (API). In 2023, on the heels of tremendous media buzz around OpenAI’s ChatGPT, Microsoft made another big bet on OpenAI by investing $10 billion. OpenAI models are deployed in Microsoft’s Azure public cloud service and power category-defining AI products like GitHub Copilot, DALL·E 2 and ChatGPT, according to a Microsoft blog post.
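As a rough sketch of what that API access looked like at the time, the snippet below uses the openai Python package’s pre-1.0 interface; the model name and prompt are illustrative, and a personal API key is assumed:

```python
# A rough sketch of calling OpenAI's text-generation API circa early 2023.
# Assumes the "openai" Python package (pre-1.0 interface) and a valid API key.
import openai

openai.api_key = "sk-..."  # placeholder; supply your own key

response = openai.Completion.create(
    model="text-davinci-003",  # a GPT-3-family model available at the time
    prompt="Write a short product description for a reusable water bottle.",
    max_tokens=100,            # limit the length of the generated completion
)
print(response.choices[0].text)
```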

Most current language models focus on English text, including Gopher from DeepMind; GPT-NeoX-20B from EleutherAI, a grassroots collective of AI researchers; and Jurassic-1 from the Israeli company AI21 Labs. Beyond English, Huawei has announced a GPT-like Chinese-language model called PanGu-alpha, and the South Korean search company Naver has introduced a Korean-language model called HyperCLOVA.

“Any of these models are capable of producing text that seems perfectly realistic, though they will generally be more believable at certain tasks than others,” says Micah Musser of Georgetown University’s Center for Security and Emerging Technology (CSET).

AI is becoming more conversational, wrote The New York Times in late January 2023, reporting on a website called Character.AI, which allows people to “chat with a facsimile of almost anyone, live or dead, real or imagined.” 

How AI Is Moving into the Business World

While these new creations raise concerns about their potential negative impact on societies, they increasingly show what’s possible with AI, including how it can help businesses innovate and remain competitive.

Language AIs are already used by companies like Facebook, Google and Microsoft in a variety of ways, including translating languages, powering intelligent email assistants, improving search results, and generating marketing copy and computer code. Readily available cloud computing resources make it easier for businesses big and small across different industries to put AI to use. A Deloitte study found that 70% of companies get their AI capabilities through cloud-based software, while 65% create AI applications using cloud services.

In fact, many new products and services used by IT departments offer AI-powered automation to make managing data systems easier, and more AI-powered and AI-facilitating products and services hit the market every year. Evidence all around points to businesses ramping up their experimentation with and implementation of AI in their operations.

How Bad Actors Misuse AI Text Generators

It’s imperative to understand the dark side of having computer code create endless reams of convincing paragraphs. Experts often point out that AI language systems are only as good as the text they are trained on – and the internet is full of cultural biases, lies and hate speech. Like facial recognition systems that discriminate against certain races, language AIs can echo the mistakes and cultural biases of their training data.

“It is a thrill to see her learn like this,” said Jeremy Howard, an artificial intelligence researcher, in a New York Times interview after introducing ChatGPT to his 7-year-old daughter. He saw it as a kind of tutor that answered his daughter’s questions.

“But I also told her: Don’t trust everything it gives you. It can make mistakes.” 

With the barrier to entry steadily shrinking, concerns are growing about the many ways the technology could be deliberately misused. Spam and fake product reviews are popping up. A report from Georgetown University’s CSET explained how conspiracy theorists and extremist groups could harness language AI to churn out fake news and hate speech.

“AI language models could in theory be used to create misinformation on any topic,” Musser said. 

Many users of Twitter, Facebook and other social media services claim this has been happening for years, giving rise to loud, ongoing social and political debates about the need for regulation or protective measures.

How AI Can Generate Misinformation

Short snippets of text could be the most convincing kind of misinformation for a language model to create. That makes Twitter an ideal environment for automated deception. With a few examples on a given subject, a model could potentially spit out hundreds of similar tweets.

Longer text can be more of a challenge. Once a language model has been trained, its understanding of the world is somewhat fixed, like a talking parrot that goes deaf. But providing a model with just a bit of context about an evolving topic can enable it to produce perfectly believable content.
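To make the mechanism concrete, here is a hypothetical sketch of that kind of prompt construction: a snippet of fresh context plus a few example posts, fed to a generic text-generation model. The topic, the example posts and the model choice are all invented for illustration:

```python
# A hypothetical sketch of few-shot prompting: example posts plus fresh
# context are concatenated into a single prompt for a generic model.
# The topic and example posts here are invented purely for illustration.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A bit of up-to-date context the model could not have seen in training...
context = "News snippet: The city council voted today to expand the bike-lane network."

# ...and a few example posts establishing the desired tone and format.
examples = [
    "Post: Great news for commuters! More bike lanes are on the way.",
    "Post: Another step toward safer streets for everyone.",
]

prompt = context + "\n" + "\n".join(examples) + "\nPost:"

# Each call yields a new post in the same style; looping a few hundred times
# is how a model could "spit out hundreds of similar tweets."
output = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(output[0]["generated_text"])
```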

“A large language model can sometimes but not reliably take a description of a breaking news story and ‘spin’ it to better align with a pre-specified narrative,” Musser said.

The hardest task for a language AI is holding a sustained back-and-forth argument with a real human without losing the thread of the conversation. Language models aren’t there yet, Musser says, but that doesn’t mean they won’t be someday.

Has any of this happened already? 

“The simple answer is that we don’t know,” Musser said, “because it’s extremely hard – possibly impossible – to reliably detect the outputs of large language models.” 

He compared the challenge to finding alterations in digital images. Millions of pixels offer a large dataset to detect the telltale statistical patterns of machine generation. In contrast, it’s much more difficult to find such patterns in a single tweet or a Facebook post of a few hundred words – even using AI language models.

How to Identify AI-Generated Text

The methods used to detect if text is AI-generated tend to get worse as the models get larger and more realistic, Musser said. CSET is currently working with OpenAI and the Stanford Internet Observatory to examine possible solutions to combat disinformation created by large language models.
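One family of detection methods from the research literature scores how statistically predictable a text is to a language model, since machine-generated text tends to look unusually predictable (low perplexity). This is an illustrative assumption, not a technique attributed to CSET or OpenAI; a minimal sketch with GPT-2:

```python
# A minimal sketch of one detection heuristic from the research literature:
# score a text's perplexity under a language model. Machine-generated text
# often looks unusually "predictable" (low perplexity), though on short
# snippets the signal is weak, which is exactly the problem Musser describes.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity on `text` (lower = more predictable)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # With labels provided, the model returns the average cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```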

A more promising route could be using AI to label content based on the actual false claims it makes, instead of its source. This would largely be the domain of social media companies, whose track record in fighting misinformation is spotty at best. And even that isn’t a perfect solution, Musser explained. 

“I’m unclear, as I think everyone is, on how effective things like automatic labeling of misinformation content actually are in reducing people's willingness to believe it,” he said.
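Whatever its persuasive effect, the mechanics of automatic labeling are straightforward to sketch. The example below is an illustrative assumption, not a system described in this article: a zero-shot classifier scores a post against candidate labels.

```python
# A sketch of one way automatic claim labeling could work; this is an
# illustrative assumption, not a system described in the article.
# A zero-shot classifier scores a post against candidate labels.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

post = "Drinking bleach cures the flu."  # an invented example post

result = classifier(
    post,
    candidate_labels=["known false health claim", "accurate health information"],
)
# The label with the highest score is the classifier's best guess.
print(result["labels"][0], result["scores"][0])
```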

As more new use cases for AI emerge, so too will broader awareness and deeper understanding of its potential and risks. A common refrain these days illustrates the current state of AI: “We’re building the plane while flying it.”

Julian Smith is a contributing writer. He is the executive editor of Atellan Media and the author of Aloha Rodeo and Smokejumper, published by HarperCollins. He writes about green tech, sustainability, adventure, culture and history.

© 2023 Nutanix, Inc. All rights reserved.
