MPs have recently been calling for content platforms such as Facebook, YouTube, Twitter and Google to actively remove offensive content from their networks, in particular material that may be considered extremist or hateful.
News reports have commented on the comparatively swift action taken by social media platforms to remove materials that infringe intellectual property rights. Whilst in reality the contrast may not be so stark, this post considers some of the recent legal issues around intermediary liability online in the context of intellectual property ("IP") enforcement and protection, and how they might help inform the debate over a provider's liability more generally, and in relation to offensive content in particular.
Legal Solutions
Central to the debate is the removal of infringing content uploaded to networks by users. The courts have been grappling with these issues in the context of IP infringement for many years, and as a result there is now at least some clarity around the circumstances in which platforms such as YouTube, eBay and Facebook are required to remove, and/or may be liable for, content uploaded by their users.
When it comes to the online protection of intellectual property, the main framework is set out in a number of pieces of UK and European legislation. These include the E-Commerce Directive and the implementing E-Commerce Regulations (together, "the Regulations"), which apply to almost all commercial and publicly accessible websites.
The Regulations provide for certain limitations on a provider's liability for illegal or infringing activity taking place on its platform. In essence, where the provider acts as a mere conduit, or is caching or hosting content, it will be excused from liability if, upon obtaining "actual knowledge" or awareness of illegal activity, it acts expeditiously to remove or to disable access to the information concerned. The notion of "actual knowledge" stops short of the monitoring that MPs are now calling for in relation to extremist and hate speech online. Up to now, "actual knowledge" has meant receiving some kind of notification or complaint from a rightsholder about infringing material.
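To make the notice-and-takedown mechanics concrete, below is a minimal sketch of how a hosting provider might log a rightsholder's notification and disable access to the material in question. The structure and names (TakedownNotice, handle_notice) are hypothetical illustrations and are not drawn from the Regulations or from any particular platform:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative notice-and-takedown record keeping: receipt of a notification
# is the point at which "actual knowledge" may arise, after which the
# provider must act expeditiously to remove or disable access.

@dataclass
class TakedownNotice:
    content_id: str       # identifier of the allegedly infringing material
    complainant: str      # the rightsholder (or agent) giving notice
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# Content that has been disabled, keyed to the notice that triggered it
disabled_content: dict[str, TakedownNotice] = {}

def handle_notice(notice: TakedownNotice) -> None:
    """Disable access as soon as the notice arrives, keeping a record
    showing the provider acted expeditiously upon actual knowledge."""
    disabled_content[notice.content_id] = notice
    print(f"Access to {notice.content_id} disabled following notice from "
          f"{notice.complainant} received at {notice.received_at.isoformat()}")

handle_notice(TakedownNotice("video-123", "Example Rights Ltd"))
```

The timestamps matter: in any dispute over whether a provider acted "expeditiously", the interval between receipt of the notice and the disabling of access is likely to be key evidence.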
In a recent decision, the Italian courts held that YouTube was merely a passive, neutral host of content and so able to take advantage of the safe harbour regime described above. The court held that a host could only be considered "active" where it intervenes in, modifies or otherwise takes part in the elaboration of the content hosted on its platform. This means that, unless YouTube itself somehow modified the video, it would not be considered to have an active role.
Who should pay the cost of protection?
The question of who should pay the costs of implementing technical protection measures has been visited in the context of blocking injunction cases.
In Cartier International AG & Ors v British Sky Broadcasting Ltd & Ors [2016] EWCA Civ 658 (which is subject to an appeal to the Supreme Court at the time of writing), the court concluded that it was entirely reasonable, in the case of ISPs, to expect them to pay the costs associated with implementing mechanisms to block access to sites where infringing content had been made available. In its view, the intermediaries make profits from the services which the operators of the target websites use to infringe the intellectual property rights of the rightsholders, and the costs of implementing the order can therefore be regarded as a cost of carrying on the ISP's business.
Whilst the case was limited to the much narrower context of the technical measures required to enforce an IP blocking injunction, the decision provides some insight into the approach taken by the courts when considering costs issues.
Technical Solutions
Technical solutions are already deployed online. For example, YouTube's Content ID is automated software that scans material uploaded to the site for IP infringement by comparing it against a database of reference content registered by rightsholders. The challenge will be how these types of systems can be adapted by online platform providers to address extremist and similar speech, or indeed other types of content that may not belong on the network. For online technology companies, the question will be whether they are prepared to take on these burdens and the associated risks, such as becoming targets for claims about inappropriate censorship and freedom of speech.
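To illustrate the general idea of such matching, here is a minimal sketch of scanning an upload against a database of registered fingerprints. This is a deliberately simplified, assumption-laden example: the names (fingerprint, scan_upload), the chunk-hashing approach and the match threshold are hypothetical, and real systems such as Content ID use far more robust perceptual audio and video fingerprinting:

```python
import hashlib

CHUNK_SIZE = 4096  # arbitrary chunk size for this illustration

def fingerprint(data: bytes) -> set[str]:
    """Split content into fixed-size chunks and hash each one.
    (Real systems use perceptual fingerprints, not raw byte hashes.)"""
    return {
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    }

# Stand-in for a platform's database of works registered by rightsholders
registered: dict[str, set[str]] = {
    "work-001": fingerprint(b"registered audio or video content " * 1000),
}

def scan_upload(upload: bytes, threshold: float = 0.5) -> list[str]:
    """Return the IDs of registered works whose fingerprints substantially
    overlap with those of the uploaded content."""
    upload_fp = fingerprint(upload)
    return [
        work_id
        for work_id, ref_fp in registered.items()
        if len(upload_fp & ref_fp) / max(len(ref_fp), 1) >= threshold
    ]

# An upload copying the registered work is flagged; unrelated content is not
print(scan_upload(b"registered audio or video content " * 1000))  # ['work-001']
print(scan_upload(b"original user-generated content " * 1000))    # []
```

The hard part, as the text suggests, is not the matching itself but deciding what goes into the reference database and who bears responsibility for the borderline calls.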
Lessons learned
In some ways, IP infringement might be considered to involve a brighter-line distinction between infringing and non-infringing content, as well as a less emotive context. While IP enforcement still has its grey areas, when it comes to deciding who should control acceptable speech online, and the tests that should apply to it, there is scope for far more blurring of lines. If control rests with the ISPs and their technical prowess, they will hold significant power to decide whether and how to remove material as extreme, which is ultimately a subjective decision.
The stakes are arguably much higher, and not always just economic. All in all, it may be more difficult than it might first appear for the legislature to get the balance right.