BUSINESS NEWS
AI vs. Reality: Getty Images v Stability AI
Getty Images, a major stock-photo and licensing company, sued Stability AI, the maker of the generative-image model Stable Diffusion, in the UK in January 2023. The lawsuit alleged that Stability AI used millions of Getty’s copyrighted images, together with their watermarks, without permission to train Stable Diffusion. The claims included primary copyright infringement (unauthorised copying in training and output), which was later dropped largely because the training took place outside UK jurisdiction; secondary copyright infringement (via distribution/import of an “infringing article”); trademark infringement; “passing off”; and database rights. On 4 November 2025, the UK High Court held that:

⚖ The AI model (Stable Diffusion) does not count as an “infringing copy” under UK law because, although trained on Getty’s works, it never stored or reproduced them in a way that matches the legal definition of an infringing copy.

⚖ Some aspects of Getty’s trademark claims, specifically those concerning reproduction of the Getty watermark, succeeded, though the copyright claims largely failed.

It is one of the first legal tests in the UK of how generative-AI training and output practices intersect with copyright, trademark and database rights. Creative groups hoping to secure a legal breakthrough have been left disappointed. The outcome suggests that current UK copyright law may not clearly cover AI models trained at scale on copyrighted works, especially where the training happens abroad.
James Clark, Data Protection, AI and Digital Regulation partner at law firm Spencer West LLP, commented: “At the heart of the Getty Images judgment is the finding that the training of Stable Diffusion's AI model using copyright work did not result in the production of an infringing copy of that work. At the end of the training process, the AI model did not store any copy of the protected works, and the model was not itself an infringing copy of such work. It is this finding that will cause concern for the creative industry whilst giving encouragement to AI developers.

“The judgment usefully highlights the problem that the creative industry has in bringing a successful copyright infringement claim in relation to the training of large language models. During the training process, the model is not making a copy of the work used to train it, and it does not reproduce that work when prompted for an output by its user.

“Rather, the model ‘learns’ from the work, in a similar way to how you or I might do so. As an expert report quoted in the judgment explains: ‘Rather than storing their training data, diffusion models learn the statistics of patterns which are associated with certain concepts found in the text labels applied to their training data, i.e. they learn a probability distribution associated with certain concepts.’”