A federal judge handed Meta a win in a major copyright case over the use of books to train AI models. But the decision wasn’t exactly a validation of Meta’s practices. It was the result of the authors failing to argue their case effectively.
Judge Vince Chhabria ruled in Meta’s favor after finding that the authors who sued didn’t present the right arguments or evidence. They claimed that Meta’s Llama models let users reproduce passages from their books and that Meta harmed the market for licensing books to AI companies. Chhabria rejected both arguments. He found that the models couldn’t reproduce long excerpts even under aggressive prompting, and that authors don’t have the right to control the entire market for AI training licenses.
The judge made it clear this doesn’t mean Meta’s actions were lawful. Instead, he ruled that the plaintiffs gave him no solid legal basis to deny Meta’s motion for summary judgment. He criticized the authors for not making a stronger case about how AI-generated content might flood the market and reduce book sales. That type of argument, he suggested, could have worked.
According to Chhabria, if the authors had shown that Llama could substitute for their work in the market—even indirectly—they might have won. But their filings made only passing references to that idea and lacked supporting data. Without clear evidence of market dilution, the court had little choice but to rule in Meta’s favor.
Chhabria emphasized that the ruling applies only to the 13 authors in this case. It doesn’t create blanket protection for Meta or other AI developers. In fact, he suggested that future plaintiffs could win if they bring better-prepared cases with stronger evidence of harm.
He also offered guidance on what those arguments might look like. One path would be proving that AI models can spit out near-verbatim copies of existing books. Another would focus on the idea that unauthorized use in training blocks or damages a potential licensing market. But Chhabria believes the strongest argument is showing how AI outputs can substitute for human-created work by generating similar content at scale.
The ruling wasn’t just a warning for authors. Chhabria also pushed back on Meta’s argument that losing the case would stall AI development. He called that claim absurd. If AI companies rely on copyrighted material to build billion-dollar products, then compensating creators should be part of the cost. Companies can either pay authors or stick to public domain content. Either way, innovation will survive.
Chhabria also took aim at a recent ruling from Judge William Alsup in a related case involving Anthropic. Alsup compared training AI models on books to teaching children how to write, implying that any harm to authors was speculative. Chhabria rejected that analogy. Teaching a student to write is not the same as using books to build a system that can instantly produce endless, competing content. The comparison, he said, misses the point entirely.
The concern isn’t that new technology exists. It’s that it can replace human authors in a way no previous tool ever could. Chhabria warned that courts need to take this risk seriously. Ignoring it just because it hasn’t been litigated before would be a mistake.
For now, Meta escapes further legal trouble in this specific case. But the ruling may serve as a roadmap for future lawsuits. Chhabria made it clear that when authors bring the right evidence—especially around substitution and market harm—they have a strong chance of winning. This case didn’t fail because the harm wasn’t real. It failed because the argument wasn’t built to prove it.