Last month, a key procedural ruling was handed down in Getty Images v Stability AI. This dispute, which has attracted widespread attention, is among the most significant cases currently before the UK courts concerning copyright and AI.

The Litigation

In January 2023, Getty Images brought legal proceedings in the High Court of Justice against Stability AI, claiming that the company had used millions of its images, videos and illustrations to train and program its AI model, Stable Diffusion. Getty claims that over half of these works are original copyright-protected works which were 'scraped' from Getty's website without permission. It asserts that Stability AI used these images to train its AI model and then allowed the public to create synthetic images based on that training through use of its AI system. The dispute raises issues of copyright infringement, database right infringement, trade mark infringement and passing off.

Stability AI has admitted that some of Getty's images were used in training Stable Diffusion but has not identified exactly which images were included.

The Latest

The judgment from last month does not address the core issue of copyright infringement, but instead focuses on a procedural matter. The First to Fifth Claimants are members of the Getty Images group of companies. The action was also brought by the Sixth Claimant, Thomas M Barwick Inc, in a representative capacity on behalf of a class of persons. The Claimants asserted that the class represented by the Sixth Claimant comprised the owners of copyright in artistic works and film works which had been exclusively licensed to the First Claimant and whose copyright had been infringed by the Defendant. They argued that those persons could be identified on the basis that they had entered into an exclusive licence with the First Claimant and that the exclusively licensed works included works used to train Stable Diffusion.

The judge held that the class definition was dependent on the outcome of the proceedings, since the question of whether copyright had been infringed could only be determined at trial. There was therefore no way at present to identify the members of the class, and the judge could see no basis on which the court could be satisfied that any particular person qualified as a member of it.

The judge also held that she could not be satisfied that the representative claim would remove the need for an expensive and time-consuming assessment of liability and quantum, or that it would not also create a large case management burden for the court.

Accordingly, the judge refused permission for a representative claim.

While this judgment focused on procedural issues, it highlights the practical difficulties of identifying copyrighted works used in AI model training. The court acknowledged that the number of images involved made it "wholly disproportionate and practically impossible" to identify each infringing image without significant resources.

What's Next?

The first trial (to determine liability) in this case is scheduled to begin in June 2025. As the case progresses, it will shed light on the future of IP law in the UK and its relationship with generative AI, an area that is still very much in flux.

The final outcome of this case will have significant implications for both AI companies and creators. The case raises important questions about the rights of creators in the age of AI and how these can be balanced against the need for data to train AI models. As discussed in a separate blog, the UK Government's consultation on the interplay between copyright law and AI shows that it hopes to support right holders while also facilitating AI development. Whether this can be achieved remains to be seen.

If you wish to discuss the impact of this decision on your business, please contact a member of the IP, Technology & Data team or your usual contact at Brodies.

Contributors

Damien Behan

Innovation & Technology Director

Monica Connolly

Legal Director

Regan Lambert

Trainee Solicitor