If you are worried that A.I. will put us all out of a job, the courts haven’t offered much protection to the individuals whose works have been, or will be, freely used to train the very A.I. that may one day replace them.

To develop generative A.I. models, tech companies have been scraping the internet for text, code, and images they can use to train the models in pattern recognition. When asked how it learned to write, for instance, ChatGPT says, “I don’t learn in the same way humans do. I was trained on a diverse range of internet text to understand and generate human-like text based on the input I receive. My learning involves patterns and associations in data, allowing me to respond to a wide variety of prompts.”

Feed it enough text, and it will learn how to write. Feed it enough images, and it will learn how to make images of its own. Feed it enough code, and it can earn an A in its Intro to CS class.

2024 promises to be a big year for generative A.I. litigation. Here are a few of the top trends to watch.

Copyright Infringement

This brute-force learning method often involves feeding the A.I. copyrighted material, which has led to a number of lawsuits alleging that these companies are engaging in copyright infringement. Many of these suits, though, have been dismissed, or largely dismissed, on the grounds that training generative A.I. constitutes “fair use.” Fair use generally permits the use of copyrighted material as long as it is “transformed” in some way; a work can even be similar in appearance to the original, as long as it is used for a different purpose or serves a different function. Judges have accordingly found A.I. creations to be “substantively different” from anything made by the claimants, as in Andersen v. Stability AI Ltd. In that case, Stability argued that it didn’t so much copy images as apply “mathematical equations and algorithms to capture concepts from the Training Images.” It’s almost as if math functions as a kind of middleman between the original work and any allegedly derived works, acting as a transformation machine that further distances tech companies from allegations of infringement.

This can go further than you might think. In Kelly v. Arriba Soft Corp., the Ninth Circuit held that copyrighted visual artwork could even be reproduced outright if the reproductions increased access to the copyrighted work itself. Because the reproduced images were used for indexing, and “not for any aesthetic purpose,” the court found them to be fair use.

So far, at least, the whole concept of A.I. training seems to be permissible as fair use. In the class actions Kadrey v. Meta Platforms and Chabon v. Meta Platforms, which have been consolidated into a single case, a class of authors alleges that their books were used to train the A.I. models, such that “every output . . . is an infringing derivative work.” That claim was also dismissed. And while it may well be accurate that the models could not function without their copyrighted inputs, the court could not see training as a “recasting or adaptation of any of the plaintiffs’ books.” To be fair, we don’t typically hold training to this standard when it’s people training other people. If we did, every undergraduate professor would have a claim on whatever their students go on to do. Maybe your parents would have a claim on your net worth, or the former boss who trained you a claim against your future earnings.

Copyright Protection

On the other hand, the US Copyright Office seems unwilling to grant copyright to A.I.-created works. In a recent case, when it was discovered that a comic book author had made her images using a generative A.I. tool, the Copyright Office rescinded her registration, stating that “Courts interpreting the phrase ‘works of authorship’ have uniformly limited it to the creations of human authors.” While it granted copyright protection for the text and for the arrangement of text and images, it expressly declined to protect the individual images the author generated by prompting the A.I. In a similar case, a prompt-inputter sued the Copyright Office, only for the court to reach the same conclusion. The creator (what do you call these people?) is appealing the decision.

Standing 

Fair use is not the only ground on which judges have dismissed claims against generative A.I. models. Given that billions of texts, lines of code, or image files were used in the training process, claimants have often been unable to establish standing, the legal requirement that a claimant have a sufficient connection to, and be harmed by, the actions of the defendant. In Doe v. GitHub, Inc., an anonymous group of coders alleges that their code was used to train GitHub’s Copilot, an A.I. capable of outputting code in response to prompts. The court dismissed the claim of injury due to copyright infringement on the grounds that “an increased risk of future harm alone is not sufficiently concrete to confer standing for damages.” As long as Copilot does not directly reproduce its training code, and there are assurances that it has been designed not to, the coders who unwittingly provided it with training material are not sufficiently connected to its output to make a claim against it.

Many of these cases are still ongoing, and while many claims have been dismissed, few cases have been thrown out entirely. In fact, in Thomson Reuters v. Ross Intelligence, where Ross Intelligence is alleged to have used copyrighted Westlaw material to train its own legal research tool, the judge declined to rule on the fair use question, thinking it best that a jury decide the matter.

Antitrust

The FTC is looking to make sure that A.I. plays fair, at least with respect to other A.I. companies. In a recent press release, the FTC urged A.I. companies to refrain from business practices that would stifle competition. It also cited how its pressure on the semiconductor giant Nvidia led the company to abandon its deal to acquire the chip designer Arm. Nvidia has, one could argue, a near-monopoly on the high-end chips needed for large language models and A.I. training. If the FTC cannot go after A.I. directly, maybe it can prevent its chip suppliers from getting too far ahead of the competition.

For more insights on legal trends, subscribe to RapidFunds on LinkedIn. RapidFunds has been providing settlement funding for almost 20 years. We’ve completed over 4,000 transactions and have helped thousands of firms with funding. Stop waiting for your legal fees and contact RapidFunds today.
