Daily Bulletin


The Conversation

Image: I'd buy that for a dollar. Or, just steal it from you. (elbragon, CC BY)

The fear of powerful artificial intelligence and technology is a popular theme, as seen in films such as Ex Machina, Chappie, and the Terminator series.

And we may soon find ourselves dealing with fully autonomous technology with the capacity to cause damage. While this may be some form of military wardroid or law enforcement robot, it could equally be something not created to cause harm, but which nevertheless does so by accident or error. What then? Who is culpable and liable when a robot or artificial intelligence goes haywire? Clearly, this sort of situation does not fit neatly into society's existing framework of guilt and justice.

While some may choose to dismiss this as too far into the future to concern us, remember that a robot has already been arrested for buying drugs. Dismissal also ignores how quickly technology can evolve. Look at the lessons of the past – many of us still remember the world before the internet, social media, mobile technology, GPS – even phones or widely available computers. These once-dramatic innovations developed into everyday technologies, and each has created difficult legal challenges of its own.

A guilty robot mind?

How quickly we take technology for granted. But we should give some thought to the legal implications. One of the functions of our legal system is to regulate the behaviour of legal persons and to punish and deter offenders. It also provides remedies for those who have suffered, or are at risk of suffering harm.

Legal persons – humans, but also companies and other organisations for the purposes of the law – are subject to rights and responsibilities. Those who design, operate, build or sell intelligent machines have legal duties – but what about the machines themselves? Our mobile phones, even with Cortana or Siri attached, do not fit the conventions for a legal person. But what if the autonomous decisions of their more advanced descendants cause harm or damage in the future?

Criminal law rests on two important concepts. First, liability arises when harm has been, or is likely to be, caused by an act or omission. Physical devices such as Google's driverless car, for example, clearly have the potential to harm, kill or damage property. Software also has the potential to cause physical harm, and the risks may extend to less immediate forms of damage such as financial loss.

Second, criminal law often requires culpability in the offender, what is known as the “guilty mind” or mens rea – the principle being that the offence, and subsequent punishment, reflects the offender’s state of mind and role in proceedings. This generally means that deliberate actions are punished more severely than careless ones. This poses a problem, in terms of treating autonomous intelligent machines under the law: how do we demonstrate the intentions of a non-human, and how can we do this within existing criminal law principles?

Robocrime?

This isn’t a new problem – similar considerations arise in trials of corporate criminality. Some thought needs to go into when, and in what circumstances, we make the designer or manufacturer liable rather than the user. Much of our current law assumes that human operators are involved.

For example, in the context of highways, the regulatory framework assumes that there is a human driver to at least some degree. Once fully autonomous vehicles arrive, that framework will require substantial changes to address the new interactions between human and machine on the road.

As intelligent technology that by-passes direct human control becomes more advanced and more widespread, these questions of risk, fault and punishment will become more pertinent. Film and television may dwell on the most extreme examples, but the legal realities are best not left to fiction.

The authors do not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article. They also have no relevant affiliations.

Authors: The Conversation

Read more http://theconversation.com/robot-law-what-happens-if-intelligent-machines-commit-crimes-44058
