Solving Generative AI’s IP Problem

The growing use of generative AI raises new legal concerns, especially around intellectual property rights. Ironically, humans are leaning on technology for answers.

By Jennifer Goforth Gregory

October 6, 2023

In Jason Allen’s digital artwork titled “Théâtre D’opéra Spatial,” several robed figures stand on a stage looking out at a vague cityscape through a large, glowing aperture. From the reflection of the light on the architecture to the intricate textures on the Victorian dresses, the image is nothing short of stunning.

Judges at the Colorado State Fair agreed: They awarded Allen’s entry a coveted blue ribbon for the 80 hours he invested in the masterpiece.

Then Allen revealed on social media that he used the generative AI platform Midjourney to help create his entry. While some people applauded his use of technology, others called him a cheater. The artist defended his work by telling Colorado newspaper The Pueblo Chieftain that he wanted to make a statement using artificial intelligence artwork. Having accomplished his objective, he refused to apologize.

The debate is as heated as it is new. And it’s not just about visual art. It’s also about music, photography, graphic design, literature and journalism — virtually anything that’s creative.


Regardless of the medium, the generative AI legal issues that are emerging now will continue to raise important questions about generative AI and intellectual property rights: 

Who is the “author” of an AI-generated text? 

Who is the “photographer” behind an AI-generated image? 

Who is the “composer” of an AI-generated song? 

And do traditional copyright and trademark rules apply to AI-generated content?

If technology created this new IP problem, then technology might be the best way to solve it.

The Generative AI and IP Dilemma

Whether generative AI is an enabler of creativity or a threat to it is a matter of personal opinion. One thing that almost everyone can agree on, however, is that generative AI will be transformative. So much so that it could generate up to $4.4 trillion per year in value across 63 use cases, including in healthcare, McKinsey & Company reported in June 2023.

“In early 2023, the publicly released versions of generative AI tools really took a quantum leap forward and made it possible … to create outputs that were much more human-like than things that we had seen before,” explained intellectual property litigator Chris Mammen, a partner at transatlantic law firm Womble Bond Dickinson. He said the increase in quality and availability made generative AI more accessible to business users. That, in turn, created IP concerns that can’t be ignored.

Generative AI will change the nature of content creation, enabling many to do what, until now, only a few had the skills or advanced technology to accomplish at high speed, according to Gil Appel, Juliana Neelbauer, and David A. Schweidel.

“As this burgeoning technology develops, users must respect the rights of those who have enabled its creation—those very content creators who may be displaced by it,” they wrote in an April 2023 article for the Harvard Business Review.


There are two key issues, in particular, that companies must address in order to maximize the return on their generative AI investment: visibility and disclosure.

By design, generative AI mines existing content for information, using what it gleans as the inputs for new outputs. Therefore, the burning question is: Where did the content come from that a generative AI model ingested in order to create a piece of writing, an image or a song?

What is an ethical quandary for individuals who use generative AI for personal purposes could be a major liability for corporations that use it for professional purposes. The stakes become especially high, for example, when organizations use generative AI to create products or services that they might profit from.

Mammen describes the discussion about generative AI and IP as a fork in the road. Ultimately, companies will have to look to the courts to determine which direction to go.

“Is it fair use for the AI platforms to take whatever they can find on the internet and use that as training data?” Mammen asks. “Or do they have to get permission or license to use copyright protected material before they ingest it as training data?”

These are ethical and legal issues in need of resolution.

Empowering Provenance

Clearly, there are legal remedies and policy solutions that can and must be considered. In the meantime, however, technology can be an invaluable tool for organizations that are trying to navigate thorny questions about generative AI and IP.

One particularly promising technology is blockchain. It can give organizations that use generative AI visibility into the content creation process, as well as a historical record of data provenance that may be needed if the content becomes the center of a legal dispute.

Among those pairing blockchain with generative AI are news outlets like Reuters and Rolling Stone, both of which have turned to Numbers Protocol for a transparent and verifiable ledger of content creation that helps protect intellectual property. Numbers registers the content on a generative AI blockchain, creating a permanent record that documents both the creation process and ownership.
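The mechanics of such a registry can be illustrated with a minimal sketch. The `ProvenanceLedger` class below is a hypothetical toy, not Numbers Protocol's actual system: each registration records a hash of the content, its creator and the tool used, and each record is chained to the previous one by hash, so any later tampering with the history is detectable.

```python
import hashlib
import json


def sha256(data: bytes) -> str:
    """Hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()


class ProvenanceLedger:
    """Toy append-only provenance ledger: each record embeds the hash of
    the previous record, so altering any entry breaks the chain."""

    def __init__(self):
        self.records = []

    def register(self, content: bytes, creator: str, tool: str) -> dict:
        prev_hash = self.records[-1]["record_hash"] if self.records else "0" * 64
        record = {
            "content_hash": sha256(content),  # fingerprint of the asset itself
            "creator": creator,
            "tool": tool,                     # e.g. which generative model was used
            "prev_hash": prev_hash,           # link to the previous record
        }
        record["record_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.records.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; return False if any record was altered."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "record_hash"}
            if r["prev_hash"] != prev:
                return False
            if sha256(json.dumps(body, sort_keys=True).encode()) != r["record_hash"]:
                return False
            prev = r["record_hash"]
        return True
```

A real blockchain adds distributed consensus on top of this hash chaining, but the core property is the same: the record of who created what, with which tool, cannot be quietly rewritten after the fact.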


A related effort is the Adobe Content Authenticity Initiative (CAI), which uses cryptographic methods to store provenance data either in files or in the cloud. As a result, users can view the provenance of a piece of content from within it. The result is a tamper-evident record that shows the entire history of the content, including how it was created and any edits that were made to it.

“With the proliferation of digital content, people want to know the content they’re seeing is authentic,” Dana Rao, Adobe’s executive vice president and general counsel, said in a 2019 news release — three years before the public release of ChatGPT. “While this is a formidable challenge, we are thrilled to be championing the adoption of an industry-wide content attribution system … It is critical for technology and media companies to come together now in order to empower consumers to better evaluate and understand content online.”
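The idea of embedding a tamper-evident history in the content itself can be sketched in a few lines. This is a simplified illustration, not CAI's format: real content credentials use public-key signatures and a standardized manifest, while the hypothetical functions below use an HMAC over the content hash plus its edit history to keep the example short.

```python
import hashlib
import hmac
import json


def attach_manifest(content: bytes, history: list, key: bytes) -> dict:
    """Toy content-credentials manifest: sign the content hash plus its
    edit history, so a change to either is later detectable."""
    payload = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "history": history,  # e.g. ["captured", "cropped", "color-corrected"]
    }
    payload["signature"] = hmac.new(
        key, json.dumps(payload, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return payload


def verify_manifest(content: bytes, manifest: dict, key: bytes) -> bool:
    """Recompute the signature and content hash; False means tampering."""
    body = {k: v for k, v in manifest.items() if k != "signature"}
    expected = hmac.new(
        key, json.dumps(body, sort_keys=True).encode(), hashlib.sha256
    ).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and hashlib.sha256(content).hexdigest() == manifest["content_hash"]
    )
```

If either the pixels or the declared edit history is modified after signing, verification fails, which is exactly the property a viewer needs in order to trust a provenance claim displayed alongside the content.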

Zero-trust security provides another option for securely creating, and gaining visibility into, generative AI content in the cloud. Cloudflare's products, for example, give organizations a suite of tools for handling many of the issues related to AI content and IP. Using service tokens, the tools log the application programming interface (API) requests and services that are able to access AI training data. Organizations can revoke the tokens at any time.

“AI holds incredible promise, but without proper guardrails it can create significant risks for businesses. It is far too easy, by default, to upload sensitive internal or customer data to AI tools. Once the data is used for training AI, it is virtually impossible to get it out,” Matthew Prince, co-founder and CEO of Cloudflare, said in a May 2023 news release.

“If you were going to let a class of university students rummage around in your internal data, you’d of course put clear rules in place on what data they can access and how it can be used in their education. Cloudflare’s zero-trust products … provide the guard rails for AI tools, so businesses can take advantage of the opportunity AI unlocks while ensuring only the data you want to expose gets shared.”
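The service-token pattern itself is simple to picture. The `ServiceTokenGateway` class below is a hypothetical sketch, not Cloudflare's API: every request for training data must present a token, each attempt is written to an audit log whether or not it succeeds, and revoking a token instantly cuts off that service's access.

```python
import secrets
import time


class ServiceTokenGateway:
    """Toy zero-trust gate: only requests bearing a valid, unrevoked
    service token reach the data, and every attempt is logged."""

    def __init__(self):
        self.tokens = {}     # token -> service name
        self.audit_log = []  # (timestamp, service, resource, allowed)

    def issue(self, service: str) -> str:
        """Mint an unguessable token for a named service."""
        token = secrets.token_hex(16)
        self.tokens[token] = service
        return token

    def revoke(self, token: str) -> None:
        """Immediately invalidate a token."""
        self.tokens.pop(token, None)

    def request(self, token: str, resource: str) -> bool:
        """Check the token, record the attempt, and allow or deny."""
        allowed = token in self.tokens
        service = self.tokens.get(token, "<unknown>")
        self.audit_log.append((time.time(), service, resource, allowed))
        return allowed
```

The audit log is the visibility piece: when an IP question arises later, the organization can show exactly which services touched which training data, and when.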

In Praise of Internal Data

Having visibility into data provenance is one way to mitigate the IP risks inherent in content that’s made with generative AI. For many organizations, however, an even better way to mitigate the risks is to avoid them altogether.

Perhaps the best way to do so is to use only company-owned data for generative AI creation. For example, companies can use the generative AI platform Writer to create written content based exclusively on internal data that has been validated and vetted. 


When organizations use Writer, they get access to a generative AI tool that has been customized for them with respect to tone, brand guidelines and more. Employees can access the technology wherever they need it: in a web browser, inside a Microsoft Word document or while using the interface design tool Figma.

According to Forbes, the biggest difference between Writer and other similar products is that Writer uses proprietary technology instead of putting a user interface layer on existing technology. Writer significantly improves accuracy by having encoders and decoders — technology that understands the text and technology that predicts the correct text to use, respectively — speak to each other directly.

Protecting Your Intellectual Property

In addition to reducing IP issues with content created using generative AI, organizations are using technology to protect their original content creation. An easy answer is to withhold your data from generative AI tools that want to ingest it.

Again, technology can help. Consider a tool such as Glaze, which is designed to help artists protect their original work from generative AI platforms. It works by making very subtle changes to a piece of creative work, which makes the work harder for AI to mimic. Because AI models often pick up the subtleties in an artist's style, Glaze keeps those signature traits from being captured as training data.
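The flavor of such a perturbation can be shown with a toy example. This is emphatically not Glaze's method, which computes targeted, style-transfer-guided adversarial perturbations; the hypothetical `cloak` function below just adds small, bounded random noise to 8-bit pixel values, illustrating the principle that a change invisible to a person can still shift the statistics a model learns from.

```python
import random


def cloak(pixels, epsilon=3, seed=0):
    """Toy 'cloaking': nudge each 8-bit pixel value by at most +/- epsilon.
    The image looks unchanged to a human viewer, but the exact values a
    model would train on are perturbed. (Glaze's real perturbations are
    computed adversarially against style-mimicry models, not randomly.)"""
    rng = random.Random(seed)  # seeded for reproducibility
    out = []
    for v in pixels:
        delta = rng.randint(-epsilon, epsilon)
        out.append(min(255, max(0, v + delta)))  # clip to the valid 0-255 range
    return out
```

The design tension real tools face is making the perturbation simultaneously imperceptible to humans and maximally disruptive to a model's feature extraction, which is why random noise alone is not enough in practice.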


“Artists really need this tool; the emotional impact and financial impact of this technology on them is really quite real,” Glaze co-creator Ben Zhao, a computer science professor at the University of Chicago, told the University of Chicago Department of Computer Science in February 2023.

What’s Next for Generative AI?

Although it’s hard to predict what the positive and negative consequences of generative AI will ultimately be, Mammen says the toothpaste is out of the tube and can’t possibly be put back in again. The question, therefore, isn’t whether to use generative AI. Rather, it’s how.

“AI will transform major parts of our economy and our society,” Mammen said. “The resolution of the questions pending could have a very big impact on both the quality and advance of AI platforms as well as a big impact on the economics and the economic distribution of the benefits of this tool.”

We are at the beginning of the generative AI story. However, organizations that wait for legal questions to be answered in a definitive way will only find themselves sinking further in the quicksand. By taking steps now and using the available technology to protect themselves from IP issues, organizations can evolve as the answers become clear — without becoming a main character in the conflict.

Editor’s note: Learn more about Nutanix GPT-in-a-Box, a full-stack software-defined AI-ready platform designed to simplify and jump-start your initiatives from edge to core. More details are in the blog post The AI-Ready Stack: Nutanix Simplifies Your AI Innovation Learning Curve and in the Nutanix Bible.

Jennifer Goforth Gregory is a contributing writer. Find her on X @byJenGregory.

© 2023 Nutanix, Inc. All rights reserved. For additional legal information, please go here.
