ARTICLE
7 December 2023

Use Of Personal Information In Training AI Models Should Be The Same As Use Of Copyrighted Materials

Foley & Lardner
Contributor
Foley & Lardner LLP looks beyond the law to focus on the constantly evolving demands facing our clients and their industries. With over 1,100 lawyers in 24 offices across the United States, Mexico, Europe and Asia, Foley approaches client service by first understanding our clients’ priorities, objectives and challenges. We work hard to understand our clients’ issues and forge long-term relationships with them to help achieve successful outcomes and solve their legal issues through practical business advice and cutting-edge legal insight. Our clients view us as trusted business advisors because we understand that great legal service is only valuable if it is relevant, practical and beneficial to their businesses.
United States Privacy

There is a lot of hype around providing data subject rights for the use of personal data in training AI models. I have always thought that there is really no need to provide rights to delete, correct, or even access personal information "in" a trained AI model, because the trained model, by itself, does not actually contain any personal information (perhaps the subject of a longer blog post). The court's reasoning in favor of Meta Platforms' motion to dismiss seems to support this theory: if the AI model does not contain a derivative work of copyrighted material, it stands to reason that the trained large language model (LLM) does not contain personal information (under any modern definition) either.

Of course, time will tell whether a court directly addresses the need to provide data subject rights for personal information used to train an AI model itself. For now, all we know is that the U.S. Federal Trade Commission requires that notice be provided for the use of personal information in training AI models.

As the court put it in its order:

The plaintiffs allege that the "LLaMA language models are themselves infringing derivative works" because the "models cannot function without the expressive information extracted" from the plaintiffs' books. This is nonsensical. A derivative work is "a work based upon one or more preexisting works" in any "form in which a work may be recast, transformed, or adapted." 17 U.S.C. § 101. There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs' books.

www.courtlistener.com/...

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
