Athira Sethu
Kochi, 25 June 2025
A San Francisco judge has stated that the artificial intelligence firm Anthropic did not violate the law when it employed certain books to train its AI model, Claude. The judge, William Alsup, explained that it was permitted under “fair use” – an exception in U.S. copyright law that occasionally allows individuals to use copyrighted material without seeking permission.
The authors had sued the company, saying Anthropic used their books without permission or payment. The authors included Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson.
The judge agreed that Anthropic’s AI training used the books in a novel and transformative manner. He explained that it is similar to how a reader learns from books in order to write something different later. Therefore, using the books to teach the AI how to write was lawful.
But the judge also found that Anthropic did something improper. It stored more than 7 million pirated (illegally copied) books in a digital “library.” The judge said this was not fair use and could constitute copyright infringement. A trial will now take place in December to determine whether Anthropic must pay damages to the authors. If the court finds the company infringed willfully, it could owe up to $150,000 per book.
Anthropic is backed by large tech companies such as Amazon and Alphabet, the parent company of Google. The company said it was relieved that the court recognized it had used the books in a useful and innovative manner.
This is one of several lawsuits against AI firms such as OpenAI, Meta, and Microsoft. Many authors and news publishers have objected to their work being used by these companies without authorization.
The firms contend that their AI learns from these books the way a human would — not plagiarizing, but producing new ideas. They argue that requiring them to compensate every copyright holder would impede innovation in AI.
Nevertheless, the judge indicated that downloading pirated copies from the internet, rather than purchasing or otherwise lawfully acquiring them, might not be acceptable. Even when the end use is fair, how one originally obtains the books matters.
The outcome of this case may influence numerous other lawsuits over how AI models are trained.