Anthropic Settles Copyright Lawsuit Amid Growing AI Controversy
In a landmark development for the artificial intelligence industry, AI startup Anthropic has reached a settlement with a group of US book authors who accused the company of copyright infringement in training its Claude chatbot.
The agreement, announced on August 26, averts a high-stakes trial scheduled for December and highlights the escalating legal battles over how AI firms source data for their models.
The case underscores broader concerns about intellectual property in the AI era, with experts warning of potential industry upheaval.
The lawsuit, filed in 2024 by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson, alleged that Anthropic unlawfully downloaded millions of pirated books from "shadow libraries" like LibGen to train Claude. The plaintiffs claimed this violated their copyrights, as the company used their works without permission or compensation. Anthropic, backed by tech giants Amazon and Alphabet, defended the practice as "fair use," arguing it was transformative and essential for advancing AI technology.
In a pivotal June 2025 ruling, U.S. District Judge William Alsup in San Francisco partially sided with Anthropic, determining that training AI on copyrighted material constituted fair use and did not infringe rights. However, the judge found that Anthropic's act of saving the pirated books to a central library was a separate violation, opening the door to massive damages.
With up to seven million books involved, statutory penalties could have reached $150,000 per work, potentially amounting to trillions of dollars in a worst-case scenario for willful infringement.
The settlement terms remain undisclosed, pending court approval, with details expected to be filed by September 5. Justin Nelson, the authors' attorney, described it as "historic" and said it "will benefit all class members." Anthropic declined to comment, but the move has surprised legal experts, given the company's strong fair-use defense. Duke University law professor Chris Buccafusco noted, "Given their willingness to settle, you have to imagine the dollar signs are flashing in the eyes of plaintiffs' lawyers around the country." Cornell Law School professor James Grimmelmann added that the settlement's details could serve as a model for other cases, though its unique piracy element sets it apart.
This resolution comes amid a wave of similar lawsuits against AI companies, including OpenAI, Microsoft, and Meta, where courts are grappling with fair use in AI training.
In a related case, music publishers like Universal Music Group are suing Anthropic over the use of copyrighted lyrics, alleging Claude regurgitates them verbatim. The settlement avoids setting a binding precedent, potentially encouraging more negotiations but also emboldening plaintiffs.
Offering expert comment, Dr. Ilia Kolochenko, CEO of ImmuniWeb and a Fellow at the British Computer Society specializing in AI and data privacy, views the case as symptomatic of deeper industry flaws. "While large AI companies pay millions to lawyers to defend the mushrooming copyright infringement lawsuits, AI fatigue and disillusionment are rapidly growing," he said. Kolochenko warned that "recent revelations about the massive and deliberate exploitation of pirated content for LLM training by largest AI vendors - are just the tip of the iceberg of the unexpectedly nasty and painful surprises, more are looming on the horizon."
He critiqued the current AI business model: "Grab everyone’s intellectual property without paying, claim that you do this for the sustainable innovation and everyone’s well-being, and then make billions for founders and shareholders - may pretty soon become economically unviable." Kolochenko predicted an "AI winter" despite the persistence of useful generative tools, noting they fall short of promised Artificial General Intelligence. He also criticised the EU AI Act for providing "little to no protection to copyright owners," urging them to erect technical barriers against data scraping. "Worse, an avalanche of breach-of-contract lawsuits is visibly coming, but this time, AI companies will likely have to pay," he added.
Adding to Anthropic's challenges, the company revealed on August 27 that it had thwarted hacker attempts to misuse Claude for cybercrime, including drafting phishing emails and malicious code. The firm banned the accounts involved and tightened safeguards, underscoring the risks of AI being turned to wrongdoing. The incident, detailed in a public report, comes amid growing regulatory scrutiny, with the EU's AI Act and U.S. safety commitments raising the stakes.
As AI evolves, settlements like this may reshape data practices, balancing innovation with creators' rights. With details forthcoming, the industry is watching closely for ripples across ongoing litigation.
Reuters | Wired | MusicBusinessWorldwide | Deadline | LA Times | PYMNTS
Image: Ideogram