Hungarian Dolphins: Europe’s First Landmark AI Copyright Case
A Hungarian publisher, Like Company, asked Google’s AI chatbot Gemini to summarize its article about Hungarian singer Kozsó’s dream of bringing freshwater dolphins from the Amazon to Lake Balaton. The chatbot complied. But Like Company had not given Google express permission to use the article, and it sued.
The case has been referred to the Luxembourg-based Court of Justice of the European Union, where a hearing is scheduled for March 10. While the case surely will not be the final word in the battle between AI developers and publishers, the ruling will be the first major judgment clarifying how European copyright rules apply to AI.
In the US, courts have already issued two consequential rulings involving Meta and Anthropic, clearing the tech companies of infringement claims over their use of copyrighted works. The US allows “fair use,” which permits unlicensed use of snippets of copyrighted material for commentary, criticism, or research on a case-by-case basis.
Europe’s Copyright Directive offers copyright holders additional protections. While it allows for commercial text and data mining in some instances, rights holders can opt out. The bloc’s new AI Act expanded these opt-out possibilities to the training of chatbots. But legal questions remain about the limits of what constitutes copyright infringement, both in the training and in the deployment of AI models.
Navigating the copyright crunch by balancing creator rights with technological progress is crucial to Europe’s AI hopes. Many European officials fear that the continent is falling behind on the new technology and have considered delaying some strict “high-risk” AI rules until 2027 to avoid stifling growth. While the European Parliament is exploring whether to force AI developers to license copyrighted content, European Commission Executive Vice-President Henna Virkkunen recently insisted that AI training benefits from the text and data mining exemption in both the Copyright Directive and the AI Act.
It’s now up to Europe’s highest court to clarify. Judges will have to decide whether training a generative AI model falls under the commercial text and data mining exception of Europe’s Copyright Directive and AI Act, or whether publishers must be compensated when their content is used for training.
The case centers on a few questions. Did Google use the article’s content to train its AI model, and did the result generate “unauthorized communication to the public”?
According to Like Company, Google used its copyrighted material to train Gemini beyond what the text and data mining exemption allows. It then proceeded to disseminate this content to a new public without authorization.
Google disagrees. Even if it summarized the Like article, Google says that doesn’t constitute a communication to a “new public.” What’s more, it argues that the chatbot’s response didn’t reproduce any parts of the article beyond some central facts. It also insists that Gemini wasn’t trained on the article to begin with, beyond what is allowed under the text and data mining exemption.
Although the court’s decision could have far-reaching implications, some analysts doubt whether this case is the right one to set the precedent. It is “highly improbable” that Gemini produced the summary because it was trained on the copyrighted article, says Paul Keller of the Institute for Information Law; it is much more likely that the fully trained chatbot retrieved the article from the web in real time and summarized it. He bases this on the short time frame between the article’s publication and the prompt.
If true, this shifts the case away from one about how AI chatbots are trained — though that is no doubt an important question — toward the second question about whether the reproduction of a copyrighted text by a chatbot constitutes an “unauthorized communication to the public.”
Here, too, experts question whether the summary that the chatbot provided constitutes such a communication. Dr. Andres Guadamuz of the University of Sussex argues that a summary of an article that isn’t behind a paywall can hardly be seen as communication to a new public, as everyone on the internet can already access it. Moreover, the output is largely private, communicated only to the user in question.
Setting a precedent for new technologies is always tricky, as the consequences are far-reaching, and no case has perfect conditions that would allow courts to set clear rules for the future. The Like case will not clarify all of the ambiguities and unknowns about European copyright and AI. But it could begin to offer some much-needed answers.
“A wrong ruling could leave Europe with the worst of all worlds,” argues Martin Kremtscher of the Centre for Regulation of the Creative Economy: complex regimes for AI developers and no clear protections for rights holders.
Clara Riedenstein is a tech policy analyst and writer. Named the 2026 Rising Expert in Tech Policy by the Young Professionals in Foreign Policy, her work examines how emerging technologies shape existing political, legal, and social institutions. Clara holds an MSc in Political Theory Research from Oxford University, where she studied as a C. Douglas Dillon Scholar and focused on the implications of large language models for theories of state and jurisdiction.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.