In what is said to be a world first, New Zealand has adopted a new package of measures to tackle harmful material on the internet: the Harmful Digital Communications Act. Although many countries have laws covering similar ground, New Zealand’s approach differs from the typical practice of tweaking an existing system or hurriedly putting in place a new criminal offence.
What constitutes the “right” legal response to phenomena such as trolling is not a straightforward matter. In New Zealand itself, the law was debated alongside an ongoing sexual assault investigation, after a group of young men used Facebook to brag about having sex with underage girls, in what became known as the Roast Busters scandal. The role of social media in legal proceedings has attracted growing attention, especially in light of recent high-profile cases around the publication of intimate images without consent (sometimes termed “revenge pornography”).
New Zealand’s new law
The law aims to deter, prevent and mitigate serious emotional distress resulting from digital communications, and to provide victims with “quick and efficient” redress. What’s particularly interesting about this work is how the parliament has tried to tackle these issues with a combination of different legal “tools”: introducing new laws, establishing an agency to oversee complaints and amending existing legislation.
The package includes both civil remedies and criminal offences. Where there are serious or repeated issues, applications can be made to a court for various orders, such as corrections, apologies, and removing material from the web. Furthermore, posting information – whether true or untrue – or intimate recordings with the intention to cause someone harm can be punished with a prison sentence of up to two years, or a fine of up to £21,000. Companies involved can also be fined up to £85,000.
It also provides for an “approved” body to receive, investigate and attempt to resolve complaints about harmful communications, and – among other things – provide “education and advice” regarding online safety. This agency will be guided by a set of ten principles, which have been written into the law in an effort to set out what digital communications should not do: for example, they should not “disclose sensitive personal facts about an individual” or “be used to harass an individual”.
Which body will carry out this role is yet to be confirmed, although New Zealand’s Law Commission has recommended that NetSafe – a not-for-profit organisation that provides education and support in relation to cybersafety – should do it.
The act also amends existing laws about harassment. Uploading material once and leaving it online can now constitute harassment. This addresses a gap in the old law, under which a one-off upload would probably not have been an offence, since harassment is typically defined as a course of conduct.
Who is responsible?
One of the more controversial parts of the new act (as is often the case with laws relating to the internet) is the position of intermediaries – that is, companies that provide access to the internet or host content, without necessarily producing it themselves. Web hosts, such as social networking sites, are subject to the law, but can protect themselves against action through a “notice and takedown” system for allegedly harmful material.
Systems like this are most commonly found in copyright law, where the duties of intermediaries are substantial. Internet service providers can be required to block access to sites like The Pirate Bay, and hosts like YouTube have detailed systems for notifying and dealing with complaints. Intermediary liability laws are in place around the world, although reforms to libel law in England and Wales in 2013 included a new, more host-friendly approach specifically for allegations of libel.
As with the English reforms, the goal in New Zealand is to encourage hosts to pass on the complaint to whoever uploaded the material, and give them a chance to take it down or justify it. Some hosts, including Twitter and Reddit, are already taking a more proactive approach to dealing with complaints and problematic material.
But takedown systems continue to provoke criticism, on the grounds that they encourage risk-averse hosts to take down too much, which is claimed to hinder freedom of expression. Many tech companies made submissions to the NZ parliament, questioning whether new laws were really necessary.
It’s hard to say whether New Zealand’s parliament has struck the right balance between ensuring freedom of speech and protecting citizens from harm. A small number of MPs voted against the law, and others gave reluctant support, questioning whether the bill was properly targeted.
But a key point is that these measures were introduced only after a good deal of discussion. Law Commission projects on regulatory gaps, privacy and the specific issue of harmful communications all reviewed practices and evidence from around the world. The vibrant debate on the proposal, and the resulting innovative measures, should encourage other nations to give ideas for “anti-trolling laws” proper and careful attention.
For instance, a comprehensive look at internet communications is long overdue in England and Wales. A House of Lords committee did some of this work last year in a report on social media and the criminal law. The committee responded to calls for action on revenge pornography by suggesting that existing laws were up to the job. But the UK parliament nonetheless opted to implement a new and more specific criminal offence, after limited scrutiny.
Over in Ireland, the Law Reform Commission is investigating the potential of a bill against cyberbullying. Already, members of the Irish Senate have started to discuss related issues such as abusive tweets and freedom of speech. And various states in the US are adopting new laws, while discussions about the online harassment of women rage on.
Whether or not New Zealand has got it right, there’s a lot that each of these countries can learn from the ideas that have been floated there. And how the different tools in New Zealand’s trolling toolbox work together will be of particular interest over the coming years.
Daithí Mac Síthigh receives funding from CREATe (www.create.ac.uk), funded by the UK research councils. He has carried out work for Google (not in relation to any of the legal issues discussed in this article).