AI Lawsuits: Let the games begin!


I knew it was just a matter of time before this happened. As a publisher, I focus heavily on what’s legal and ethical. Artificial intelligence (AI) is a powerful tool, and one I do recommend using. However, the legal and ethical concerns are numerous, and it was just a matter of time before the lawsuits began.

And that time is now.

As reported by the Los Angeles Times[1], Sarah Silverman and two bestselling novelists have sued Meta and OpenAI, the tech startup behind ChatGPT, accusing the companies of using the authors’ copyrighted books without their consent to “train” their artificial intelligence software programs. The proposed class-action lawsuit was filed in a San Francisco federal court on Friday by authors Richard Kadrey, known for his supernatural horror series “Sandman Slim,” and Christopher Golden, along with Silverman, who, aside from acting, published the bestselling memoir “The Bedwetter” in 2010. Each suit seeks just under $1 billion in damages, according to court filings. The authors alleged the two tech companies had “ingested” text from their books into generative AI software, known as large language models, and failed to give them credit or compensation.

But these three are not the only ones filing a lawsuit. As reported by the Guardian[2], bestselling authors Mona Awad and Paul Tremblay filed their class-action lawsuit against OpenAI, claiming that the organization breached copyright law by “training” its model on novels without the permission of the authors. Awad and Tremblay believe their books, which are copyrighted, were unlawfully “ingested” and “used to train” ChatGPT because the chatbot generated “very accurate summaries” of the novels, according to the complaint. Sample summaries are included in the lawsuit as exhibits.

This is the first lawsuit against ChatGPT that concerns copyright, according to Andres Guadamuz, a reader in intellectual property law at the University of Sussex. The lawsuit will explore the uncertain “borders of the legality” of actions within the generative AI space, he adds.

Books are ideal for training large language models because they tend to contain “high-quality, well-edited, long-form prose,” said the authors’ lawyers, Joseph Saveri and Matthew Butterick, in an email to the Guardian. “It’s the gold standard of idea storage for our species.”

The complaint said that OpenAI “unfairly” profits from “stolen writing and ideas” and calls for monetary damages on behalf of all US-based authors whose works were allegedly used to train ChatGPT. Though authors with copyrighted works have “great legal protection”, said Saveri and Butterick, they are confronting companies “like OpenAI who behave as if these laws don’t apply to them”.

As the Guardian notes, since ChatGPT was launched in November 2022, the publishing industry has been in discussion over how to protect authors from the potential harms of AI technology. I have been part of these discussions, and I can tell you that there is a lot of work that needs to be done. Last month, the Society of Authors (SoA) published a list of “practical steps for members” to “safeguard” themselves and their work. Yesterday, the SoA’s chief executive, Nicola Solomon, told the trade magazine the Bookseller that the organization was “very pleased” to see authors suing OpenAI, having “long been concerned” about the “wholesale copying” of authors’ work to train large language models.

Richard Combes, head of rights and licensing at the Authors’ Licensing and Collecting Society (ALCS), said that current regulation around AI is “fragmented, inconsistent across different jurisdictions and struggling to keep pace with technological developments”. He encouraged policymakers to consult principles that the ALCS has drawn up, which “protect the true value that human authorship brings to our lives and, notably in the case of the UK, our economy and international identity”.

Saveri and Butterick believe that AI will eventually resemble “what happened with digital music and TV and movies” and comply with copyright law. “They will be based on licensed data, with the sources disclosed.”

The lawyers also noted it is “ironic” that “so-called ‘artificial intelligence’” tools rely on data made by humans. “Their systems depend entirely on human creativity. If they bankrupt human creators, they will soon bankrupt themselves.” This author and publisher would agree, but I would love to hear your thoughts. As always, feel free to comment below.

-Alesha Brown, CEO

The Profitable Author Association™

Fruition Publishing Concierge Services®


